Jan 09 13:30:19 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 09 13:30:19 crc restorecon[4692]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 09 13:30:19 crc restorecon[4692]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 09 13:30:19 crc restorecon[4692]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 09 13:30:19 crc restorecon[4692]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 09 13:30:19 crc restorecon[4692]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:19 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to
system_u:object_r:container_file_t:s0:c336,c787 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 09 13:30:19 crc restorecon[4692]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:19 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:19 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to
system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 
13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:20 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:20 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 13:30:20 crc 
restorecon[4692]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 09 13:30:20 crc restorecon[4692]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 09 13:30:20 crc restorecon[4692]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 09 13:30:20 crc restorecon[4692]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 09 13:30:20 crc restorecon[4692]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 
13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 09 13:30:20 crc restorecon[4692]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 09 13:30:20 crc restorecon[4692]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 09 13:30:20 crc restorecon[4692]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 09 13:30:20 crc kubenswrapper[4919]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 09 13:30:20 crc kubenswrapper[4919]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 09 13:30:20 crc kubenswrapper[4919]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 09 13:30:20 crc kubenswrapper[4919]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
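
The long restorecon run ends here. Each "not reset as customized by admin" line appears to mean restorecon found a path labeled with container_file_t, a customizable SELinux type that a default relabel leaves alone; only a forced pass (restorecon -F) would reset it. A minimal sketch, assuming a Linux host with SELinux and read access to the paths, of how to read the same labels the log prints, via the security.selinux extended attribute:

    # Minimal sketch (assumes Linux with SELinux; os.getxattr is Linux-only):
    # print the SELinux label of entries under a kubelet directory, the same
    # user:role:type:level string the restorecon lines above report.
    import os

    def selinux_label(path: str) -> str:
        try:
            # The label is stored in the "security.selinux" xattr.
            return os.getxattr(path, "security.selinux").rstrip(b"\x00").decode()
        except OSError:
            return "<no label>"

    root = "/var/lib/kubelet/pods"  # directory family seen in the log above
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            print(f"{full}\t{selinux_label(full)}")

Running restorecon -nv over the same tree previews which labels a non-forced relabel would change, which is effectively what the boot-time pass logged above.
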
Jan 09 13:30:20 crc kubenswrapper[4919]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 09 13:30:20 crc kubenswrapper[4919]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.579564 4919 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582563 4919 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582581 4919 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582587 4919 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582592 4919 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582603 4919 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582609 4919 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582615 4919 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582621 4919 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
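
Each deprecated-flag warning above points at the same remedy: move the setting into the KubeletConfiguration file passed via --config. A hedged sketch of the config-file equivalents follows; the field names are the upstream kubelet.config.k8s.io/v1beta1 ones, but every concrete value is an illustrative assumption, not read from this host:

    # Sketch: KubeletConfiguration equivalents of the deprecated flags logged
    # above. Field names follow kubelet.config.k8s.io/v1beta1; all values are
    # placeholder assumptions. JSON output is also valid YAML.
    import json

    kubelet_config = {
        "apiVersion": "kubelet.config.k8s.io/v1beta1",
        "kind": "KubeletConfiguration",
        # --container-runtime-endpoint -> containerRuntimeEndpoint
        "containerRuntimeEndpoint": "unix:///var/run/crio/crio.sock",
        # --volume-plugin-dir -> volumePluginDir
        "volumePluginDir": "/etc/kubernetes/kubelet-plugins/volume/exec",
        # --register-with-taints -> registerWithTaints
        "registerWithTaints": [
            {"key": "node-role.kubernetes.io/master", "effect": "NoSchedule"}
        ],
        # --system-reserved -> systemReserved
        "systemReserved": {"cpu": "500m", "memory": "1Gi"},
        # --minimum-container-ttl-duration has no direct field; the warning
        # above says to use eviction thresholds instead:
        "evictionHard": {"memory.available": "100Mi"},
    }
    print(json.dumps(kubelet_config, indent=2))

--pod-infra-container-image is the exception: per the log line above, the image garbage collector now takes the sandbox image from the CRI runtime, so the value should also live in the runtime's own configuration.
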
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582625 4919 feature_gate.go:330] unrecognized feature gate: Example
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582629 4919 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582634 4919 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582638 4919 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582642 4919 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582645 4919 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582649 4919 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582653 4919 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582657 4919 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582661 4919 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582665 4919 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582669 4919 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582673 4919 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582677 4919 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582680 4919 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582684 4919 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582688 4919 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582692 4919 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582696 4919 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582700 4919 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582704 4919 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582708 4919 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582712 4919 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582716 4919 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582720 4919 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582724 4919 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582728 4919 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582732 4919 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582737 4919 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582741 4919 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582745 4919 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582751 4919 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582755 4919 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582760 4919 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582765 4919 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582770 4919 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582774 4919 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582779 4919 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582783 4919 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582788 4919 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582794 4919 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582798 4919 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582802 4919 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582806 4919 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582810 4919 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582814 4919 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582817 4919 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582821 4919 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582825 4919 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582828 4919 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582832 4919 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582835 4919 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582840 4919 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582844 4919 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582848 4919 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582851 4919 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582855 4919 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582858 4919 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582863 4919 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582867 4919 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582870 4919 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582874 4919 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.582877 4919 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.582955 4919 flags.go:64] FLAG: --address="0.0.0.0"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.582964 4919 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.582971 4919 flags.go:64] FLAG: --anonymous-auth="true"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.582977 4919 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.582982 4919 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.582987 4919 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.582992 4919 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.582998 4919 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583002 4919 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583006 4919 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583012 4919 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583016 4919 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583021 4919 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583026 4919 flags.go:64] FLAG: --cgroup-root=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583030 4919 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583035 4919 flags.go:64] FLAG: --client-ca-file=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583042 4919 flags.go:64] FLAG: --cloud-config=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583048 4919 flags.go:64] FLAG: --cloud-provider=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583053 4919 flags.go:64] FLAG: --cluster-dns="[]"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583059 4919 flags.go:64] FLAG: --cluster-domain=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583064 4919 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583068 4919 flags.go:64] FLAG: --config-dir=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583072 4919 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583077 4919 flags.go:64] FLAG: --container-log-max-files="5"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583084 4919 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583088 4919 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583093 4919 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583097 4919 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583101 4919 flags.go:64] FLAG: --contention-profiling="false"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583105 4919 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583109 4919 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583113 4919 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583119 4919 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583125 4919 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583130 4919 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583134 4919 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583138 4919 flags.go:64] FLAG: --enable-load-reader="false"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583142 4919 flags.go:64] FLAG: --enable-server="true"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583146 4919 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583151 4919 flags.go:64] FLAG: --event-burst="100"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583156 4919 flags.go:64] FLAG: --event-qps="50"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583160 4919 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583164 4919 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583168 4919 flags.go:64] FLAG: --eviction-hard=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583174 4919 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583178 4919 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583182 4919 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583186 4919 flags.go:64] FLAG: --eviction-soft=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583190 4919 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583195 4919 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583199 4919 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583203 4919 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583221 4919 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583226 4919 flags.go:64] FLAG: --fail-swap-on="true"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583230 4919 flags.go:64] FLAG: --feature-gates=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583236 4919 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583267 4919 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583271 4919 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583276 4919 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583280 4919 flags.go:64] FLAG: --healthz-port="10248"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583284 4919 flags.go:64] FLAG: --help="false"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583288 4919 flags.go:64] FLAG: --hostname-override=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583292 4919 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583297 4919 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583302 4919 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583306 4919 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583310 4919 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583314 4919 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583319 4919 flags.go:64] FLAG: --image-service-endpoint=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583323 4919 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583327 4919 flags.go:64] FLAG: --kube-api-burst="100"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583331 4919 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583335 4919 flags.go:64] FLAG: --kube-api-qps="50"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583339 4919 flags.go:64] FLAG: --kube-reserved=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583343 4919 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583347 4919 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583351 4919 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583355 4919 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583359 4919 flags.go:64] FLAG: --lock-file=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583363 4919 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583367 4919 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583371 4919 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583378 4919 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583382 4919 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583387 4919 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583391 4919 flags.go:64] FLAG: --logging-format="text"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583395 4919 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583399 4919 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583403 4919 flags.go:64] FLAG: --manifest-url=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583407 4919 flags.go:64] FLAG: --manifest-url-header=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583412 4919 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583417 4919 flags.go:64] FLAG: --max-open-files="1000000"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583421 4919 flags.go:64] FLAG: --max-pods="110"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583425 4919 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583430 4919 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583434 4919 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583439 4919 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583443 4919 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583448 4919 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583452 4919 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583461 4919 flags.go:64] FLAG: --node-status-max-images="50"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583465 4919 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583469 4919 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583473 4919 flags.go:64] FLAG: --pod-cidr=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583477 4919 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583484 4919 flags.go:64] FLAG: --pod-manifest-path=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583488 4919 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583492 4919 flags.go:64] FLAG: --pods-per-core="0"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583496 4919 flags.go:64] FLAG: --port="10250"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583500 4919 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583504 4919 flags.go:64] FLAG: --provider-id=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583508 4919 flags.go:64] FLAG: --qos-reserved=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583512 4919 flags.go:64] FLAG: --read-only-port="10255"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583517 4919 flags.go:64] FLAG: --register-node="true"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583521 4919 flags.go:64] FLAG: --register-schedulable="true"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583525 4919 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583533 4919 flags.go:64] FLAG: --registry-burst="10"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583537 4919 flags.go:64] FLAG: --registry-qps="5"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583541 4919 flags.go:64] FLAG: --reserved-cpus=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583545 4919 flags.go:64] FLAG: --reserved-memory=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583550 4919 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583554 4919 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583558 4919 flags.go:64] FLAG: --rotate-certificates="false"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583562 4919 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583566 4919 flags.go:64] FLAG: --runonce="false"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583570 4919 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583574 4919 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583579 4919 flags.go:64] FLAG: --seccomp-default="false"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583583 4919 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583588 4919 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583592 4919 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583597 4919 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583601 4919 flags.go:64] FLAG: --storage-driver-password="root"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583605 4919 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583609 4919 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583613 4919 flags.go:64] FLAG: --storage-driver-user="root"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583617 4919 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583621 4919 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583626 4919 flags.go:64] FLAG: --system-cgroups=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583630 4919 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583636 4919 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583641 4919 flags.go:64] FLAG: --tls-cert-file=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583646 4919 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583651 4919 flags.go:64] FLAG: --tls-min-version=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583655 4919 flags.go:64] FLAG: --tls-private-key-file=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583659 4919 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583663 4919 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583668 4919 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583673 4919 flags.go:64] FLAG: --v="2"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583680 4919 flags.go:64] FLAG: --version="false"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583685 4919 flags.go:64] FLAG: --vmodule=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583691 4919 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.583696 4919 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.583800 4919 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.583806 4919 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.583810 4919 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.583814 4919 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.583819 4919 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.583822 4919 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.583826 4919 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.583830 4919 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.583834 4919 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.583840 4919 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.583845 4919 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.583849 4919 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.583853 4919 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.583857 4919 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.583861 4919 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.583864 4919 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.583868 4919 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.583872 4919 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.583876 4919 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.583880 4919 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.583883 4919 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.583887 4919 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.583892 4919 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.583896 4919 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.583901 4919 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.583905 4919 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.583910 4919 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.583913 4919 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.583917 4919 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.583921 4919 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.583924 4919 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.583928 4919 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.583933 4919 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.583937 4919 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.583942 4919 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.583945 4919 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.583949 4919 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.583953 4919 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.583957 4919 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.583960 4919 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.583969 4919 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.583973 4919 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.583977 4919 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.583981 4919 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.583984 4919 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.583988 4919 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.583992 4919 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.583995 4919 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.583999 4919 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.584002 4919 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.584006 4919 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.584009 4919 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.584013 4919 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.584020 4919 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.584024 4919 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.584027 4919 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.584031 4919 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.584035 4919 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.584038 4919 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.584042 4919 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.584046 4919 feature_gate.go:330] unrecognized feature gate: Example
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.584049 4919 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.584053 4919 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.584057 4919 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.584060 4919 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.584064 4919 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.584067 4919 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.584072 4919 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.584076 4919 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.584080 4919 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.584084 4919 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.584091 4919 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.593880 4919 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.593941 4919 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594073 4919 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594095 4919 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594107 4919 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594118 4919 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594127 4919 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594135 4919 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594145 4919 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594153 4919 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594161 4919 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594171 4919 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594180 4919 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594188 4919 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594199 4919 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594238 4919 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594247 4919 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594256 4919 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594264 4919 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594273 4919 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594280 4919 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594288 4919 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594297 4919 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594304 4919 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594312 4919 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594320 4919 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594328 4919 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594336 4919 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594344 4919 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594355 4919 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594364 4919 feature_gate.go:330] unrecognized feature gate: Example
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594401 4919 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594411 4919 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594421 4919 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594429 4919 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594438 4919 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594445 4919 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594453 4919 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594461 4919 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594468 4919 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594477 4919 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594485 4919 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594494 4919 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594503 4919 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594541 4919 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594550 4919 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594558 4919 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594566 4919 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594574 4919 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594582 4919 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594589 4919 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594597 4919 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594608 4919 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594622 4919 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594632 4919 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594642 4919 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594652 4919 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594661 4919 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594669 4919 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594678 4919 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594687 4919 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594695 4919 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594703 4919 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594712 4919 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594720 4919 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594729 4919 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594738 4919 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594745 4919 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594753 4919 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594761 4919 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594769 4919 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594776 4919 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.594787 4919 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.594800 4919 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595061 4919 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595075 4919 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595085 4919 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595093 4919 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595101 4919 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595109 4919 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595117 4919 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595125 4919 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595133 4919 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595141 4919 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595148 4919 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595156 4919 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595163 4919 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595171 4919 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595179 4919 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595187 4919 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595194 4919 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595202 4919 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595235 4919 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595246 4919 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595256 4919 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595266 4919 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595275 4919 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595284 4919 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595292 4919 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595301 4919 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595309 4919 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595318 4919 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595327 4919 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595335 4919 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595342 4919 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595350 4919 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595357 4919 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595365 4919 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595374 4919 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595384 4919 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595393 4919 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595400 4919 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595408 4919 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595416 4919 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595424 4919 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595433 4919 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595441 4919 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595449 4919 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595458 4919 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595465 4919 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595473 4919 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595481 4919 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595489 4919 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595497 4919 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595505 4919 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595512 4919 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595520 4919 feature_gate.go:330] unrecognized feature gate: Example
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595529 4919 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595537 4919 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595547 4919 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595556 4919 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595566 4919 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595575 4919 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595583 4919 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595591 4919 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595599 4919 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595607 4919 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595617 4919 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595625 4919 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595633 4919 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595641 4919 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595648 4919 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595656 4919 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595666 4919 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.595675 4919 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.595688 4919 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.596651 4919 server.go:940] "Client rotation is on, will bootstrap in background"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.602369 4919 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.602579 4919 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.603257 4919 server.go:997] "Starting client certificate rotation"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.603337 4919 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.603859 4919 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-28 16:47:43.238164204 +0000 UTC
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.604001 4919 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.609929 4919 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 09 13:30:20 crc kubenswrapper[4919]: E0109 13:30:20.612136 4919 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.144:6443: connect: connection refused" logger="UnhandledError"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.612621 4919 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.620670 4919 log.go:25] "Validated CRI v1 runtime API"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.639102 4919 log.go:25] "Validated CRI v1 image API"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.645050 4919 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.647465 4919 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-09-13-26-06-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.647497 4919 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.660650 4919 manager.go:217] Machine: {Timestamp:2026-01-09 13:30:20.659506282 +0000 UTC m=+0.207345742 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:4cea77be-9aeb-4181-a0b4-b60e5a362fd9 BootID:a043b745-924a-464c-80aa-f4df877f55bf Filesystems:[{Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:8c:fb:52 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:8c:fb:52 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:7b:94:59 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:1a:d1:73 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:a7:dd:b4 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:72:58:2f Speed:-1 Mtu:1496} {Name:eth10 MacAddress:42:f9:69:dd:dd:a2 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:92:f3:4d:d6:9a:e4 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.660866 4919 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.661053 4919 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.661778 4919 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.661953 4919 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.661985 4919 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.663943 4919 topology_manager.go:138] "Creating topology manager with none policy"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.663973 4919 container_manager_linux.go:303] "Creating device plugin manager"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.664608 4919 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.665012 4919 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.665424 4919 state_mem.go:36] "Initialized new in-memory state store"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.666107 4919 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.667305 4919 kubelet.go:418] "Attempting to sync node with API server"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.667347 4919 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.667390 4919 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.667416 4919 kubelet.go:324] "Adding apiserver pod source"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.667436 4919 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.669071 4919 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.144:6443: connect: connection refused
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.669082 4919 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.144:6443: connect: connection refused
Jan 09 13:30:20 crc kubenswrapper[4919]: E0109 13:30:20.669148 4919 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.144:6443: connect: connection refused" logger="UnhandledError"
Jan 09 13:30:20 crc kubenswrapper[4919]: E0109 13:30:20.669206 4919 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.144:6443: connect: connection refused" logger="UnhandledError"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.669958 4919 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.670422 4919 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
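Note on the nodeConfig entry above: it reserves cpu=200m, memory=350Mi, and ephemeral-storage=350Mi for the system ("KubeReserved":null) and sets a hard eviction threshold of memory.available<100Mi. Under the standard node-allocatable formula (allocatable = capacity - kube-reserved - system-reserved - hard eviction threshold), the 33654124544-byte capacity reported in the Machine entry leaves roughly 30.9 GiB allocatable for pods. A small Go sketch of that arithmetic, using only values taken from this log:

package main

import "fmt"

// Node-allocatable arithmetic implied by the NodeConfig above:
// allocatable = capacity - kube-reserved - system-reserved - hard eviction.
const (
	mi                int64 = 1024 * 1024
	memoryCapacity    int64 = 33654124544 // bytes, from the manager.go:217 Machine entry
	kubeReservedMem   int64 = 0           // "KubeReserved":null
	systemReservedMem int64 = 350 * mi    // SystemReserved "memory":"350Mi"
	evictionHardMem   int64 = 100 * mi    // memory.available < 100Mi
)

func main() {
	allocatable := memoryCapacity - kubeReservedMem - systemReservedMem - evictionHardMem
	fmt.Printf("allocatable memory: %d bytes (~%.2f GiB)\n",
		allocatable, float64(allocatable)/float64(1024*1024*1024))
}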
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.671380 4919 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.671979 4919 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.672009 4919 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.672018 4919 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.672027 4919 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.672041 4919 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.672051 4919 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.672063 4919 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.672078 4919 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.672088 4919 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.672097 4919 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.672110 4919 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.672118 4919 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.672517 4919 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.673084 4919 server.go:1280] "Started kubelet"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.673597 4919 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.673605 4919 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.675172 4919 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 09 13:30:20 crc systemd[1]: Started Kubernetes Kubelet.
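Note on the ratelimit.go entry above: qps=100 with burstTokens=10 describes a token bucket on the podresources endpoint, refilling at 100 tokens per second and permitting bursts of at most 10 requests. A sketch of an equivalent limiter using golang.org/x/time/rate (illustrative, not the kubelet's own code):

package main

import (
	"fmt"

	"golang.org/x/time/rate"
)

func main() {
	// Refill at 100 tokens/s, bucket depth 10, matching qps=100 burstTokens=10.
	limiter := rate.NewLimiter(rate.Limit(100), 10)
	allowed, throttled := 0, 0
	for i := 0; i < 50; i++ { // 50 back-to-back calls with no delay between them
		if limiter.Allow() {
			allowed++
		} else {
			throttled++
		}
	}
	// Roughly the first 10 calls drain the bucket; the rest are throttled.
	fmt.Printf("allowed=%d throttled=%d\n", allowed, throttled)
}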
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.676009 4919 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.144:6443: connect: connection refused
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.677036 4919 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.677110 4919 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.677186 4919 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 03:05:46.068863587 +0000 UTC
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.677563 4919 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.677664 4919 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.677833 4919 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 09 13:30:20 crc kubenswrapper[4919]: E0109 13:30:20.678180 4919 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.678525 4919 server.go:460] "Adding debug handlers to kubelet server"
Jan 09 13:30:20 crc kubenswrapper[4919]: E0109 13:30:20.678602 4919 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused" interval="200ms"
Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.678989 4919 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.144:6443: connect: connection refused
Jan 09 13:30:20 crc kubenswrapper[4919]: E0109 13:30:20.679101 4919 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.144:6443: connect: connection refused" logger="UnhandledError"
Jan 09 13:30:20 crc kubenswrapper[4919]: E0109 13:30:20.677255 4919 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.144:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1889132eda6415e2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-09 13:30:20.673045986 +0000 UTC m=+0.220885436,LastTimestamp:2026-01-09 13:30:20.673045986 +0000 UTC m=+0.220885436,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.682502 4919 factory.go:55] Registering systemd factory
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.682533 4919 factory.go:221] Registration of the systemd container factory successfully
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.684601 4919 factory.go:153] Registering CRI-O factory
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.684629 4919 factory.go:221] Registration of the crio container factory successfully
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.684729 4919 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.684762 4919 factory.go:103] Registering Raw factory
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.684790 4919 manager.go:1196] Started watching for new ooms in manager
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.685776 4919 manager.go:319] Starting recovery of all containers
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.695111 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.695385 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.695469 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.695583 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.695667 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.695751 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.695826 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.695909 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.695994 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.696070 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.696144 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.696234 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.696321 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.696445 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.696549 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.696633 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.696752 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.696902 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.696985 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.697069 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.697145 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.697241 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.697365 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.697445 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.697539 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.697622 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.697706 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.697785 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.697866 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.697944 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.698007 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.701111 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.701139 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.701161 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.701180 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.701202 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.701248 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.701266 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.701286 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.701303 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.701325 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.701344 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.701364 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.701384 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.701404 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.701423 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.701446 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.701468 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.701499 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.701538 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.701559 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.701579 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.701612 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.701638 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.701663 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.701686 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.701710 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.701733 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.701757 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.701843 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.701864 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.701886 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.701907 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.701928 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.701948 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.701972 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.701994 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.702015 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.702091 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.702122 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.702144 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.702168 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.702188 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.702257 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.702282 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.702303 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.702323 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.702344 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.702364 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.702385 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.702405 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.702424 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.702443 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.702462 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.702485 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.702507 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.702530 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.702549 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.702596 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.702618 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.702637 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.702656 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.702676 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.702699 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.702761 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.702787 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.702807 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.702825 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.702846 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.702865 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.702886 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.702907 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.702928 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.702948 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.702975 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.702998 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.703020 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.703041 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.703064 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.703089 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.703114 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.703166 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.703186 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.703208 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.703252 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.703274 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.703293 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.703313 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.703333 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.703353 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.703373 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.703392 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.703413 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.703432 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.703453 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.703474 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.703493 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.703517 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.703537 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.703558 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.703578 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.703599 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.703621 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.703640 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.703672 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.703694 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.703715 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.703734 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.703751 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.703770 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08"
volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.703791 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.703809 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.703830 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.703851 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.703870 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.703889 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.703909 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.703931 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.703949 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.703972 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.703992 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" 
volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.704012 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.704036 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.704991 4919 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.705041 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.705068 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.705090 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.705109 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.705130 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.705153 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.705174 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.705194 4919 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.705239 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.705258 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.705278 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.705300 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.705321 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.705340 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.705361 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.705381 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.705400 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.705421 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.705441 4919 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.705465 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.705489 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.705511 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.705532 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.705553 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.705573 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.705592 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.705611 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.705632 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.705652 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.705675 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.705695 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.705714 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.705738 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.705758 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.705779 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.705801 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.705821 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.705844 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.705864 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.705884 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.705906 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.705926 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.705945 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.705966 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.705987 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.706008 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.706028 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.706048 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.706068 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.706090 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.706112 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.706131 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.706153 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.706173 4919 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.706190 4919 reconstruct.go:97] "Volume reconstruction finished" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.706205 4919 reconciler.go:26] "Reconciler: start to sync state" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.727677 4919 manager.go:324] Recovery completed Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.739378 4919 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.742125 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.742469 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.742525 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.743900 4919 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.744078 4919 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.744763 4919 state_mem.go:36] "Initialized new in-memory state store" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.748085 4919 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.750293 4919 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.750364 4919 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.750398 4919 kubelet.go:2335] "Starting kubelet main sync loop" Jan 09 13:30:20 crc kubenswrapper[4919]: E0109 13:30:20.750456 4919 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 09 13:30:20 crc kubenswrapper[4919]: W0109 13:30:20.751142 4919 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.144:6443: connect: connection refused Jan 09 13:30:20 crc kubenswrapper[4919]: E0109 13:30:20.751235 4919 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.144:6443: connect: connection refused" logger="UnhandledError" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.763364 4919 policy_none.go:49] "None policy: Start" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.765437 4919 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.765482 4919 state_mem.go:35] "Initializing new in-memory state store" Jan 09 13:30:20 crc kubenswrapper[4919]: E0109 13:30:20.778516 4919 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.818282 4919 manager.go:334] "Starting Device Plugin manager" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.818384 4919 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.818402 4919 server.go:79] "Starting device plugin registration server" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.819070 4919 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.819088 4919 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.819888 4919 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.819985 4919 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.819993 4919 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 09 13:30:20 crc kubenswrapper[4919]: E0109 13:30:20.828156 4919 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.851432 4919 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 09 13:30:20 crc kubenswrapper[4919]: 
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.851523 4919 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.852669 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.852705 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.852718 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.852842 4919 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.853245 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.853337 4919 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.853558 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.853579 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.853590 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.853721 4919 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.854255 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.854420 4919 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.857544 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.857576 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.857587 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.857745 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.857762 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.857772 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.858286 4919 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.859189 4919 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.859656 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.859685 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.859694 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.859844 4919 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.860637 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.860662 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.860673 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.860868 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.860888 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.860898 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.861055 4919 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.861291 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.861392 4919 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.861780 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.861808 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.861819 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.861954 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.861979 4919 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.862415 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.862496 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.862555 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.863413 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.863434 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.863442 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:20 crc kubenswrapper[4919]: E0109 13:30:20.879374 4919 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused" interval="400ms" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.908490 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.908640 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.908811 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.908929 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.909032 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 09 13:30:20 crc 
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.909272 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.909382 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.909477 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.909584 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.909751 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.909876 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.909992 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.910106 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.910232 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.910367 4919 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.919340 4919 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.920703 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.920765 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.920786 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:20 crc kubenswrapper[4919]: I0109 13:30:20.920827 4919 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 09 13:30:20 crc kubenswrapper[4919]: E0109 13:30:20.921447 4919 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.144:6443: connect: connection refused" node="crc" Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.011460 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.011518 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.011550 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.011574 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.011609 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.011636 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.011659 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.011684 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.011743 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.011788 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.011792 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.011843 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.011846 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.011724 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.011885 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.011906 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.011919 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.011979 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.011981 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.012009 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.012036 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.012061 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.012134 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.012171 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.012251 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.012277 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.012325 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.012348 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.012417 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.012489 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.122234 4919 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.123241 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.123268 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.123278 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.123305 4919 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 09 13:30:21 crc kubenswrapper[4919]: E0109 13:30:21.123543 4919 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.144:6443: connect: connection refused" node="crc" Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.179246 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.184921 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.203150 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 09 13:30:21 crc kubenswrapper[4919]: W0109 13:30:21.214702 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-d718181167aae3d66f7ead289e08ee74f367dfe49997cd4c34956e59b0708b67 WatchSource:0}: Error finding container d718181167aae3d66f7ead289e08ee74f367dfe49997cd4c34956e59b0708b67: Status 404 returned error can't find the container with id d718181167aae3d66f7ead289e08ee74f367dfe49997cd4c34956e59b0708b67 Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.223730 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.228693 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 09 13:30:21 crc kubenswrapper[4919]: W0109 13:30:21.257377 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-47b550392b2aa21fbb5cbda8964b64b2bc7c72805a787e5c6983cdfcb0389d5a WatchSource:0}: Error finding container 47b550392b2aa21fbb5cbda8964b64b2bc7c72805a787e5c6983cdfcb0389d5a: Status 404 returned error can't find the container with id 47b550392b2aa21fbb5cbda8964b64b2bc7c72805a787e5c6983cdfcb0389d5a Jan 09 13:30:21 crc kubenswrapper[4919]: W0109 13:30:21.261181 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-117e615f318854e99fbea99b3dd84ad35f325f8a91570042810f6f1251992c7d WatchSource:0}: Error finding container 117e615f318854e99fbea99b3dd84ad35f325f8a91570042810f6f1251992c7d: Status 404 returned error can't find the container with id 117e615f318854e99fbea99b3dd84ad35f325f8a91570042810f6f1251992c7d Jan 09 13:30:21 crc kubenswrapper[4919]: E0109 13:30:21.280326 4919 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused" interval="800ms" Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.523824 4919 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.525896 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.525942 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.525956 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.525986 4919 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 09 13:30:21 crc kubenswrapper[4919]: E0109 13:30:21.526547 4919 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.144:6443: connect: connection refused" node="crc" Jan 09 13:30:21 crc kubenswrapper[4919]: W0109 13:30:21.556346 4919 
Jan 09 13:30:21 crc kubenswrapper[4919]: W0109 13:30:21.556346 4919 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.144:6443: connect: connection refused Jan 09 13:30:21 crc kubenswrapper[4919]: E0109 13:30:21.556430 4919 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.144:6443: connect: connection refused" logger="UnhandledError" Jan 09 13:30:21 crc kubenswrapper[4919]: W0109 13:30:21.583563 4919 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.144:6443: connect: connection refused Jan 09 13:30:21 crc kubenswrapper[4919]: E0109 13:30:21.584694 4919 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.144:6443: connect: connection refused" logger="UnhandledError" Jan 09 13:30:21 crc kubenswrapper[4919]: W0109 13:30:21.597025 4919 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.144:6443: connect: connection refused Jan 09 13:30:21 crc kubenswrapper[4919]: E0109 13:30:21.597148 4919 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.144:6443: connect: connection refused" logger="UnhandledError" Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.677410 4919 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 18:15:58.282952623 +0000 UTC Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.677489 4919 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 124h45m36.605466642s for next certificate rotation Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.677906 4919 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.144:6443: connect: connection refused Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.756579 4919 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="e8a131a5c3b7ddf092cba3a77f0ed07915fd0d2145eae04906963ab88d015f7e" exitCode=0 Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.756648 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"e8a131a5c3b7ddf092cba3a77f0ed07915fd0d2145eae04906963ab88d015f7e"}
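The repeated reflector.go failures come from the kubelet's client-go informers: each one issues an initial LIST (the factory.go:160 call path named in the messages) and keeps retrying until the apiserver answers. A minimal sketch of that machinery; the kubeconfig path is an assumption, since the kubelet builds its client config differently:

    package main

    import (
        "time"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // assumption: a readable kubeconfig at this path
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubeconfig")
        if err != nil {
            panic(err)
        }
        factory := informers.NewSharedInformerFactory(kubernetes.NewForConfigOrDie(cfg), 10*time.Minute)
        // Requesting these informers is what produces the *v1.Node, *v1.CSIDriver
        // and *v1.RuntimeClass LIST/WATCH calls seen failing above.
        _ = factory.Core().V1().Nodes().Informer()
        _ = factory.Storage().V1().CSIDrivers().Informer()
        _ = factory.Node().V1().RuntimeClasses().Informer()
        stop := make(chan struct{})
        factory.Start(stop) // each reflector retries its LIST until the server responds
        time.Sleep(5 * time.Second) // sketch only; real code waits for cache sync instead
    }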
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"d718181167aae3d66f7ead289e08ee74f367dfe49997cd4c34956e59b0708b67"} Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.758474 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"210c615c5bcdaca954329d575b66f271921f54cd58ff6d75a90d5fc64bd11d83"} Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.758505 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"117e615f318854e99fbea99b3dd84ad35f325f8a91570042810f6f1251992c7d"} Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.759829 4919 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50" exitCode=0 Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.759877 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50"} Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.759893 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"47b550392b2aa21fbb5cbda8964b64b2bc7c72805a787e5c6983cdfcb0389d5a"} Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.760000 4919 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.760981 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.761020 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.761032 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.762910 4919 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.763038 4919 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="7f58523d9d4832ebc703441bba8fda6beee24e80b7e364faea23c0c4275cd9c2" exitCode=0 Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.763105 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"7f58523d9d4832ebc703441bba8fda6beee24e80b7e364faea23c0c4275cd9c2"} Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.763149 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d9b923cca74d7ba12388296436ab82dd9ac49153c7c1d4150f9752169dd81bda"} Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.763448 4919 kubelet_node_status.go:401] "Setting 
node annotation to enable volume controller attach/detach" Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.763933 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.763960 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.763970 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.764509 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.764539 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.764553 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.765178 4919 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="f5ef78571904b7b98f20f7eb25e83def8b64fc7e86e3c376ee2c6a00334e8667" exitCode=0 Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.765263 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"f5ef78571904b7b98f20f7eb25e83def8b64fc7e86e3c376ee2c6a00334e8667"} Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.765305 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"644e9ad50eeddaad9f0c1db02b851622cb99b91a0e010d0ddf4a34eb89f5e0bb"} Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.765432 4919 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.766473 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.766507 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:21 crc kubenswrapper[4919]: I0109 13:30:21.766521 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:21 crc kubenswrapper[4919]: W0109 13:30:21.890374 4919 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.144:6443: connect: connection refused Jan 09 13:30:21 crc kubenswrapper[4919]: E0109 13:30:21.890461 4919 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.144:6443: connect: connection refused" logger="UnhandledError" Jan 09 13:30:22 crc kubenswrapper[4919]: E0109 13:30:22.081951 4919 controller.go:145] "Failed to ensure lease exists, will 
retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused" interval="1.6s" Jan 09 13:30:22 crc kubenswrapper[4919]: I0109 13:30:22.327270 4919 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 13:30:22 crc kubenswrapper[4919]: I0109 13:30:22.330334 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:22 crc kubenswrapper[4919]: I0109 13:30:22.330380 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:22 crc kubenswrapper[4919]: I0109 13:30:22.330395 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:22 crc kubenswrapper[4919]: I0109 13:30:22.330427 4919 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 09 13:30:22 crc kubenswrapper[4919]: I0109 13:30:22.771037 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1b2d0c02e83be85aa5925b8f607809b6bbf3db3188f4e165ca42eb04fe694da6"} Jan 09 13:30:22 crc kubenswrapper[4919]: I0109 13:30:22.771090 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d3759051eb43a6806b9525498f376abc060e88ba0d7bc9274e84eaabfdaefd87"} Jan 09 13:30:22 crc kubenswrapper[4919]: I0109 13:30:22.771102 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8db1dada305e10f10cb4fe92cfaa3fc88e2e2ec3338fa31efc240af0888f60c6"} Jan 09 13:30:22 crc kubenswrapper[4919]: I0109 13:30:22.771111 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"903fbe8270e684937c51c54aeaaaaf7cfed5e77f6d76d5874210ffcf608a9afa"} Jan 09 13:30:22 crc kubenswrapper[4919]: I0109 13:30:22.773532 4919 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="53b8c9deabab605617276a16ba1a63aedfe81246b0d97f575ceb0ecea929efa7" exitCode=0 Jan 09 13:30:22 crc kubenswrapper[4919]: I0109 13:30:22.773593 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"53b8c9deabab605617276a16ba1a63aedfe81246b0d97f575ceb0ecea929efa7"} Jan 09 13:30:22 crc kubenswrapper[4919]: I0109 13:30:22.773907 4919 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 13:30:22 crc kubenswrapper[4919]: I0109 13:30:22.775183 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:22 crc kubenswrapper[4919]: I0109 13:30:22.775239 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:22 crc kubenswrapper[4919]: I0109 13:30:22.775252 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:22 crc kubenswrapper[4919]: 
Jan 09 13:30:22 crc kubenswrapper[4919]: I0109 13:30:22.777516 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"0ae0a71cfd94d80d04efad2c5671e1a6422ee373da4fc7ab38e36198e3fcad96"} Jan 09 13:30:22 crc kubenswrapper[4919]: I0109 13:30:22.777562 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d24ffabc3436ac75e2611506f1d4d40faed59e4fa4c618523275331408bb219d"} Jan 09 13:30:22 crc kubenswrapper[4919]: I0109 13:30:22.777577 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6d15e612b4abcc61c356602fa521bd156a5e2f5b1e89bbf48b2bceac8a06fbca"} Jan 09 13:30:22 crc kubenswrapper[4919]: I0109 13:30:22.777666 4919 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 13:30:22 crc kubenswrapper[4919]: I0109 13:30:22.778763 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:22 crc kubenswrapper[4919]: I0109 13:30:22.778801 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:22 crc kubenswrapper[4919]: I0109 13:30:22.778812 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:22 crc kubenswrapper[4919]: I0109 13:30:22.778990 4919 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 09 13:30:22 crc kubenswrapper[4919]: I0109 13:30:22.779907 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d6694df4fa2976e9e741fe48d2f16c0b63b29ba80500e4238fc0ff6d6dad53bb"} Jan 09 13:30:22 crc kubenswrapper[4919]: I0109 13:30:22.779946 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5dbb1933e5d51eec9dc5963fca537d46986cb88929f55683de676da7f0636133"} Jan 09 13:30:22 crc kubenswrapper[4919]: I0109 13:30:22.779961 4919 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 13:30:22 crc kubenswrapper[4919]: I0109 13:30:22.780044 4919 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 13:30:22 crc kubenswrapper[4919]: I0109 13:30:22.779967 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a58a444435270a569f4e25649cbfb81bb34700db968dcba7d85b9fdea9006bc0"} Jan 09 13:30:22 crc kubenswrapper[4919]: I0109 13:30:22.780935 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:22 crc kubenswrapper[4919]: I0109 13:30:22.780970 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:22 crc kubenswrapper[4919]: I0109 13:30:22.780983 4919 kubelet_node_status.go:724]
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:22 crc kubenswrapper[4919]: I0109 13:30:22.781180 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:22 crc kubenswrapper[4919]: I0109 13:30:22.781225 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:22 crc kubenswrapper[4919]: I0109 13:30:22.781236 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:23 crc kubenswrapper[4919]: I0109 13:30:23.786624 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"66d7857918228a0905bb39cd161f6ba21561cf49ce7980a1835e5b3cc68cef3c"} Jan 09 13:30:23 crc kubenswrapper[4919]: I0109 13:30:23.786696 4919 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 13:30:23 crc kubenswrapper[4919]: I0109 13:30:23.787655 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:23 crc kubenswrapper[4919]: I0109 13:30:23.787681 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:23 crc kubenswrapper[4919]: I0109 13:30:23.787690 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:23 crc kubenswrapper[4919]: I0109 13:30:23.799268 4919 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="30cd0abf139e3111a44e517d28e6fd1b81a96a6481f8a9941361b10bc55da501" exitCode=0 Jan 09 13:30:23 crc kubenswrapper[4919]: I0109 13:30:23.799370 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"30cd0abf139e3111a44e517d28e6fd1b81a96a6481f8a9941361b10bc55da501"} Jan 09 13:30:23 crc kubenswrapper[4919]: I0109 13:30:23.799621 4919 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 13:30:23 crc kubenswrapper[4919]: I0109 13:30:23.801156 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:23 crc kubenswrapper[4919]: I0109 13:30:23.801232 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:23 crc kubenswrapper[4919]: I0109 13:30:23.801252 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:23 crc kubenswrapper[4919]: I0109 13:30:23.802426 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"6d03488cb3bf92b2cf5ae2daac3b83d4925c14e6bbf4789a0ed00e4caf275a51"} Jan 09 13:30:23 crc kubenswrapper[4919]: I0109 13:30:23.802546 4919 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 13:30:23 crc kubenswrapper[4919]: I0109 13:30:23.802575 4919 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 13:30:23 crc kubenswrapper[4919]: I0109 13:30:23.807298 4919 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:23 crc kubenswrapper[4919]: I0109 13:30:23.807371 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:23 crc kubenswrapper[4919]: I0109 13:30:23.807394 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:23 crc kubenswrapper[4919]: I0109 13:30:23.808444 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:23 crc kubenswrapper[4919]: I0109 13:30:23.808513 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:23 crc kubenswrapper[4919]: I0109 13:30:23.808540 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:23 crc kubenswrapper[4919]: I0109 13:30:23.832768 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 09 13:30:23 crc kubenswrapper[4919]: I0109 13:30:23.832949 4919 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 13:30:23 crc kubenswrapper[4919]: I0109 13:30:23.834418 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:23 crc kubenswrapper[4919]: I0109 13:30:23.834475 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:23 crc kubenswrapper[4919]: I0109 13:30:23.834501 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:24 crc kubenswrapper[4919]: I0109 13:30:24.801035 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 13:30:24 crc kubenswrapper[4919]: I0109 13:30:24.809936 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"744c9ccecaab78f62335d29db2d18fe4e64b26c28dcd365985f11db160641b70"} Jan 09 13:30:24 crc kubenswrapper[4919]: I0109 13:30:24.810025 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5b1c517e5ba5a7c13919a030e1df61e0a4cc5d89e2b80a2464484387a713d5a6"} Jan 09 13:30:24 crc kubenswrapper[4919]: I0109 13:30:24.810042 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a70c88bf2025bf78bf359717df98bdab692e5554a2a1a4146b228d7fbf5dee42"} Jan 09 13:30:24 crc kubenswrapper[4919]: I0109 13:30:24.810125 4919 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 13:30:24 crc kubenswrapper[4919]: I0109 13:30:24.811180 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:24 crc kubenswrapper[4919]: I0109 13:30:24.811213 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
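The PLEG entries above all embed the same event tuple: pod UID, event type, and a 64-hex-character container ID. For pulling those out of a captured journal like this one, a throwaway parser is enough; the regular expression below is written against exactly this log format and nothing more general:

    package main

    import (
        "fmt"
        "regexp"
    )

    // Matches the event={"ID":...,"Type":...,"Data":...} payload of the
    // "SyncLoop (PLEG)" lines in this journal excerpt.
    var plegRe = regexp.MustCompile(`event=\{"ID":"([0-9a-f]+)","Type":"(ContainerStarted|ContainerDied)","Data":"([0-9a-f]{64})"\}`)

    func main() {
        // Sample line taken verbatim from the log below.
        line := `I0109 13:30:23.799370 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"30cd0abf139e3111a44e517d28e6fd1b81a96a6481f8a9941361b10bc55da501"}`
        if m := plegRe.FindStringSubmatch(line); m != nil {
            fmt.Printf("pod UID %s: %s %s\n", m[1], m[2], m[3])
        }
    }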
node="crc" event="NodeHasSufficientPID" Jan 09 13:30:25 crc kubenswrapper[4919]: I0109 13:30:25.149030 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 09 13:30:25 crc kubenswrapper[4919]: I0109 13:30:25.149294 4919 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 13:30:25 crc kubenswrapper[4919]: I0109 13:30:25.150978 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:25 crc kubenswrapper[4919]: I0109 13:30:25.151043 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:25 crc kubenswrapper[4919]: I0109 13:30:25.151056 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:25 crc kubenswrapper[4919]: I0109 13:30:25.820103 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e74fc6258740a4e5407f1d22189c536019faf85e5fc1c5b698938ceda3c5659f"} Jan 09 13:30:25 crc kubenswrapper[4919]: I0109 13:30:25.820172 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5cae117720dbdc97e6a913c5125978e3f4ec7f01dec42baab8b5fc74e2852db8"} Jan 09 13:30:25 crc kubenswrapper[4919]: I0109 13:30:25.820302 4919 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 13:30:25 crc kubenswrapper[4919]: I0109 13:30:25.820304 4919 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 13:30:25 crc kubenswrapper[4919]: I0109 13:30:25.822058 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:25 crc kubenswrapper[4919]: I0109 13:30:25.822109 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:25 crc kubenswrapper[4919]: I0109 13:30:25.822124 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:25 crc kubenswrapper[4919]: I0109 13:30:25.822149 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:25 crc kubenswrapper[4919]: I0109 13:30:25.822167 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:25 crc kubenswrapper[4919]: I0109 13:30:25.822173 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:26 crc kubenswrapper[4919]: I0109 13:30:26.396425 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 09 13:30:26 crc kubenswrapper[4919]: I0109 13:30:26.396634 4919 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 13:30:26 crc kubenswrapper[4919]: I0109 13:30:26.400579 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:26 crc kubenswrapper[4919]: I0109 13:30:26.400620 4919 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:26 crc kubenswrapper[4919]: I0109 13:30:26.400633 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:26 crc kubenswrapper[4919]: I0109 13:30:26.402171 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 13:30:26 crc kubenswrapper[4919]: I0109 13:30:26.822587 4919 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 13:30:26 crc kubenswrapper[4919]: I0109 13:30:26.822618 4919 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 13:30:26 crc kubenswrapper[4919]: I0109 13:30:26.824437 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:26 crc kubenswrapper[4919]: I0109 13:30:26.824517 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:26 crc kubenswrapper[4919]: I0109 13:30:26.824545 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:26 crc kubenswrapper[4919]: I0109 13:30:26.824645 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:26 crc kubenswrapper[4919]: I0109 13:30:26.824703 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:26 crc kubenswrapper[4919]: I0109 13:30:26.824727 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:27 crc kubenswrapper[4919]: I0109 13:30:27.475597 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 09 13:30:27 crc kubenswrapper[4919]: I0109 13:30:27.475844 4919 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 13:30:27 crc kubenswrapper[4919]: I0109 13:30:27.480105 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:27 crc kubenswrapper[4919]: I0109 13:30:27.480166 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:27 crc kubenswrapper[4919]: I0109 13:30:27.480186 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:27 crc kubenswrapper[4919]: I0109 13:30:27.483151 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 09 13:30:27 crc kubenswrapper[4919]: I0109 13:30:27.724728 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 13:30:27 crc kubenswrapper[4919]: I0109 13:30:27.825932 4919 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 13:30:27 crc kubenswrapper[4919]: I0109 13:30:27.826887 4919 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 13:30:27 crc kubenswrapper[4919]: I0109 13:30:27.828093 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 
13:30:27 crc kubenswrapper[4919]: I0109 13:30:27.828142 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:27 crc kubenswrapper[4919]: I0109 13:30:27.828092 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:27 crc kubenswrapper[4919]: I0109 13:30:27.828183 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:27 crc kubenswrapper[4919]: I0109 13:30:27.828161 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:27 crc kubenswrapper[4919]: I0109 13:30:27.828200 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:27 crc kubenswrapper[4919]: I0109 13:30:27.936445 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 09 13:30:28 crc kubenswrapper[4919]: I0109 13:30:28.590943 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 09 13:30:28 crc kubenswrapper[4919]: I0109 13:30:28.591201 4919 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 13:30:28 crc kubenswrapper[4919]: I0109 13:30:28.593033 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:28 crc kubenswrapper[4919]: I0109 13:30:28.593127 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:28 crc kubenswrapper[4919]: I0109 13:30:28.593155 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:28 crc kubenswrapper[4919]: I0109 13:30:28.828364 4919 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 13:30:28 crc kubenswrapper[4919]: I0109 13:30:28.829548 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:28 crc kubenswrapper[4919]: I0109 13:30:28.829709 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:28 crc kubenswrapper[4919]: I0109 13:30:28.829730 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:29 crc kubenswrapper[4919]: I0109 13:30:29.396844 4919 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 09 13:30:29 crc kubenswrapper[4919]: I0109 13:30:29.396976 4919 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 09 13:30:30 crc kubenswrapper[4919]: I0109 13:30:30.737146 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" 
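By this point the startup probes are failing on their own timeouts rather than on connection refused: the kubelet GETs each container's healthz endpoint and cancels the request after the probe's timeoutSeconds, which defaults to 1s. A rough reproduction of that probe shape against the cluster-policy-controller endpoint from the log; skipping TLS verification is a sketch-only shortcut, not what the kubelet actually does:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 1 * time.Second, // default probe timeoutSeconds
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
            },
        }
        // Startup probe endpoint of cluster-policy-controller, from the log.
        resp, err := client.Get("https://192.168.126.11:10357/healthz")
        if err != nil {
            // While the container is still initializing, this is the
            // "Client.Timeout exceeded while awaiting headers" failure above.
            fmt.Println("probe failed:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("healthz:", resp.Status)
    }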
Jan 09 13:30:30 crc kubenswrapper[4919]: I0109 13:30:30.737450 4919 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 13:30:30 crc kubenswrapper[4919]: I0109 13:30:30.738977 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:30 crc kubenswrapper[4919]: I0109 13:30:30.739046 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:30 crc kubenswrapper[4919]: I0109 13:30:30.739058 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:30 crc kubenswrapper[4919]: E0109 13:30:30.828261 4919 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 09 13:30:32 crc kubenswrapper[4919]: E0109 13:30:32.332064 4919 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Jan 09 13:30:32 crc kubenswrapper[4919]: I0109 13:30:32.678639 4919 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 09 13:30:32 crc kubenswrapper[4919]: E0109 13:30:32.781435 4919 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 09 13:30:33 crc kubenswrapper[4919]: E0109 13:30:33.682848 4919 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Jan 09 13:30:33 crc kubenswrapper[4919]: W0109 13:30:33.695594 4919 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 09 13:30:33 crc kubenswrapper[4919]: I0109 13:30:33.695792 4919 trace.go:236] Trace[1436137371]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (09-Jan-2026 13:30:23.694) (total time: 10001ms): Jan 09 13:30:33 crc kubenswrapper[4919]: Trace[1436137371]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (13:30:33.695) Jan 09 13:30:33 crc kubenswrapper[4919]: Trace[1436137371]: [10.001705855s] [10.001705855s] END Jan 09 13:30:33 crc kubenswrapper[4919]: E0109 13:30:33.695852 4919 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 09 13:30:33 crc kubenswrapper[4919]: I0109 13:30:33.932392 4919 kubelet_node_status.go:401] 
"Setting node annotation to enable volume controller attach/detach" Jan 09 13:30:33 crc kubenswrapper[4919]: I0109 13:30:33.934171 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:33 crc kubenswrapper[4919]: I0109 13:30:33.934458 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:33 crc kubenswrapper[4919]: I0109 13:30:33.934614 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:33 crc kubenswrapper[4919]: I0109 13:30:33.934776 4919 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 09 13:30:33 crc kubenswrapper[4919]: W0109 13:30:33.977733 4919 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 09 13:30:33 crc kubenswrapper[4919]: I0109 13:30:33.977878 4919 trace.go:236] Trace[1461451879]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (09-Jan-2026 13:30:23.975) (total time: 10002ms): Jan 09 13:30:33 crc kubenswrapper[4919]: Trace[1461451879]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (13:30:33.977) Jan 09 13:30:33 crc kubenswrapper[4919]: Trace[1461451879]: [10.00243312s] [10.00243312s] END Jan 09 13:30:33 crc kubenswrapper[4919]: E0109 13:30:33.977917 4919 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 09 13:30:34 crc kubenswrapper[4919]: W0109 13:30:34.197547 4919 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 09 13:30:34 crc kubenswrapper[4919]: I0109 13:30:34.197683 4919 trace.go:236] Trace[1158936599]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (09-Jan-2026 13:30:24.195) (total time: 10002ms): Jan 09 13:30:34 crc kubenswrapper[4919]: Trace[1158936599]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (13:30:34.197) Jan 09 13:30:34 crc kubenswrapper[4919]: Trace[1158936599]: [10.002128335s] [10.002128335s] END Jan 09 13:30:34 crc kubenswrapper[4919]: E0109 13:30:34.197726 4919 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 09 13:30:34 crc kubenswrapper[4919]: W0109 13:30:34.549404 4919 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 09 13:30:34 crc kubenswrapper[4919]: I0109 
13:30:34.549584 4919 trace.go:236] Trace[1904872855]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (09-Jan-2026 13:30:24.547) (total time: 10001ms): Jan 09 13:30:34 crc kubenswrapper[4919]: Trace[1904872855]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (13:30:34.549) Jan 09 13:30:34 crc kubenswrapper[4919]: Trace[1904872855]: [10.001941022s] [10.001941022s] END Jan 09 13:30:34 crc kubenswrapper[4919]: E0109 13:30:34.549617 4919 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 09 13:30:36 crc kubenswrapper[4919]: I0109 13:30:36.403162 4919 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 09 13:30:36 crc kubenswrapper[4919]: I0109 13:30:36.403298 4919 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 09 13:30:36 crc kubenswrapper[4919]: I0109 13:30:36.818705 4919 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 09 13:30:37 crc kubenswrapper[4919]: I0109 13:30:37.943761 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 09 13:30:37 crc kubenswrapper[4919]: I0109 13:30:37.944732 4919 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 13:30:37 crc kubenswrapper[4919]: I0109 13:30:37.946141 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:37 crc kubenswrapper[4919]: I0109 13:30:37.946329 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:37 crc kubenswrapper[4919]: I0109 13:30:37.946415 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:38 crc kubenswrapper[4919]: I0109 13:30:38.313546 4919 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
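The /livez probe has now walked through three distinct failure modes: connection refused (nothing listening), TLS handshake or client timeout (listening but not serving in time), and finally an HTTP 403, because the probe is anonymous and the apiserver's authorization for unauthenticated health checks is not in place yet. A small sketch that distinguishes the three against the endpoint from the log; TLS verification is again skipped purely as a sketch-only assumption:

    package main

    import (
        "crypto/tls"
        "errors"
        "fmt"
        "net"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   3 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.126.11:6443/livez")
        var opErr *net.OpError
        switch {
        case errors.As(err, &opErr):
            fmt.Println("socket not accepting yet:", err) // "connect: connection refused"
        case err != nil:
            fmt.Println("accepting but not serving in time:", err) // handshake/client timeout
        case resp.StatusCode == http.StatusForbidden:
            fmt.Println("serving, but anonymous /livez is still forbidden (403)")
        default:
            fmt.Println("livez:", resp.Status)
        }
        if resp != nil {
            resp.Body.Close()
        }
    }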
403" Jan 09 13:30:39 crc kubenswrapper[4919]: I0109 13:30:39.397636 4919 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 09 13:30:39 crc kubenswrapper[4919]: I0109 13:30:39.397715 4919 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 09 13:30:39 crc kubenswrapper[4919]: I0109 13:30:39.925240 4919 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 09 13:30:40 crc kubenswrapper[4919]: I0109 13:30:40.768137 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 09 13:30:40 crc kubenswrapper[4919]: I0109 13:30:40.768546 4919 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 13:30:40 crc kubenswrapper[4919]: I0109 13:30:40.770293 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:40 crc kubenswrapper[4919]: I0109 13:30:40.770347 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:40 crc kubenswrapper[4919]: I0109 13:30:40.770367 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:40 crc kubenswrapper[4919]: I0109 13:30:40.786450 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 09 13:30:40 crc kubenswrapper[4919]: E0109 13:30:40.828513 4919 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 09 13:30:40 crc kubenswrapper[4919]: I0109 13:30:40.859262 4919 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 13:30:40 crc kubenswrapper[4919]: I0109 13:30:40.860912 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:40 crc kubenswrapper[4919]: I0109 13:30:40.860994 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:40 crc kubenswrapper[4919]: I0109 13:30:40.861021 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:41 crc kubenswrapper[4919]: I0109 13:30:41.411432 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 13:30:41 crc kubenswrapper[4919]: I0109 13:30:41.411708 4919 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 13:30:41 crc kubenswrapper[4919]: I0109 13:30:41.413974 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:41 crc kubenswrapper[4919]: I0109 13:30:41.414030 4919 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:41 crc kubenswrapper[4919]: I0109 13:30:41.414071 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:41 crc kubenswrapper[4919]: I0109 13:30:41.418842 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 13:30:41 crc kubenswrapper[4919]: I0109 13:30:41.861945 4919 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 09 13:30:41 crc kubenswrapper[4919]: I0109 13:30:41.862042 4919 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 13:30:41 crc kubenswrapper[4919]: I0109 13:30:41.863624 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:41 crc kubenswrapper[4919]: I0109 13:30:41.863692 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:41 crc kubenswrapper[4919]: I0109 13:30:41.863705 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:43 crc kubenswrapper[4919]: I0109 13:30:43.300316 4919 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 09 13:30:43 crc kubenswrapper[4919]: I0109 13:30:43.304319 4919 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 09 13:30:43 crc kubenswrapper[4919]: E0109 13:30:43.310061 4919 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 09 13:30:43 crc kubenswrapper[4919]: I0109 13:30:43.328943 4919 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 09 13:30:43 crc kubenswrapper[4919]: I0109 13:30:43.355785 4919 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:38774->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 09 13:30:43 crc kubenswrapper[4919]: I0109 13:30:43.355899 4919 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:38774->192.168.126.11:17697: read: connection reset by peer" Jan 09 13:30:43 crc kubenswrapper[4919]: I0109 13:30:43.355817 4919 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:38776->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 09 13:30:43 crc kubenswrapper[4919]: I0109 13:30:43.356009 4919 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 
192.168.126.11:38776->192.168.126.11:17697: read: connection reset by peer" Jan 09 13:30:43 crc kubenswrapper[4919]: I0109 13:30:43.357473 4919 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 09 13:30:43 crc kubenswrapper[4919]: I0109 13:30:43.357571 4919 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 09 13:30:43 crc kubenswrapper[4919]: I0109 13:30:43.871031 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 09 13:30:43 crc kubenswrapper[4919]: I0109 13:30:43.873549 4919 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="66d7857918228a0905bb39cd161f6ba21561cf49ce7980a1835e5b3cc68cef3c" exitCode=255 Jan 09 13:30:43 crc kubenswrapper[4919]: I0109 13:30:43.873627 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"66d7857918228a0905bb39cd161f6ba21561cf49ce7980a1835e5b3cc68cef3c"} Jan 09 13:30:43 crc kubenswrapper[4919]: I0109 13:30:43.873946 4919 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 13:30:43 crc kubenswrapper[4919]: I0109 13:30:43.875460 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:43 crc kubenswrapper[4919]: I0109 13:30:43.875511 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:43 crc kubenswrapper[4919]: I0109 13:30:43.875526 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:43 crc kubenswrapper[4919]: I0109 13:30:43.876412 4919 scope.go:117] "RemoveContainer" containerID="66d7857918228a0905bb39cd161f6ba21561cf49ce7980a1835e5b3cc68cef3c" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.164290 4919 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.343259 4919 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.681017 4919 apiserver.go:52] "Watching apiserver" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.683761 4919 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.684171 4919 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"] Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.684678 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.684908 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.685974 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 13:30:44 crc kubenswrapper[4919]: E0109 13:30:44.686155 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.686731 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.687375 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 13:30:44 crc kubenswrapper[4919]: E0109 13:30:44.687500 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.687708 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 09 13:30:44 crc kubenswrapper[4919]: E0109 13:30:44.689966 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.694022 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.698108 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.698279 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.698415 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.698468 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.698525 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.698568 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.698634 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.698716 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.739185 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.758693 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.770436 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.779331 4919 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.793004 4919 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.793823 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.811920 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.815361 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.815418 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.815479 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.815505 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.815531 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.815555 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.815578 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.815605 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.815633 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.815658 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.815688 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.815711 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.815736 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.815744 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.815760 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.815787 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.815808 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.815858 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.815887 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.815911 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.815935 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.815934 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.815959 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.815999 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.816050 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.816080 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.816106 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.816135 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.816165 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.816184 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.816204 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.816060 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.816168 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.817188 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.816261 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.816423 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.816504 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.816589 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.816619 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.816794 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.816836 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.816832 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.816949 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.817039 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.817137 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.817328 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.817401 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.817602 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.817753 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.817785 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.817807 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.817807 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.817829 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.817864 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.817887 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.817915 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.817933 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.817952 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.817979 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.818000 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.818023 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.818047 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" 
(UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.818071 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.818100 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.818122 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.818144 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.818163 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.818151 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.818183 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.818303 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.818346 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.818378 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.818421 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.818447 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.822382 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.822562 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.822596 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.822633 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.822664 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.822735 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.822781 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.822819 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.822878 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.822917 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.822941 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.822954 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). 
InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.822979 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.823012 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.823037 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.823066 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.823095 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.823127 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.823151 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.823179 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.823260 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.823298 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" 
(UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.823320 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.823354 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.823382 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.823413 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.823440 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.823478 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.823508 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.823539 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.823575 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.823600 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.823628 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.823650 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.823675 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.823702 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.823727 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.823752 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.823779 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.823800 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.823826 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.823852 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.823882 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.823917 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.823945 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.823972 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.823997 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.824035 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.824066 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.824093 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.824126 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.824289 4919 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.825792 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.826319 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.826489 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.826546 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.826606 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.826666 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.830117 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.830190 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.830239 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod 
\"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.842594 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.823434 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.823702 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.824284 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.824367 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.824483 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.824619 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.824855 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.825009 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.825089 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.825034 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). 
InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.825150 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.825677 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.825707 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.825719 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.837527 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.837632 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.837847 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.838059 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: E0109 13:30:44.831752 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:30:45.33172152 +0000 UTC m=+24.879560970 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.844984 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.845070 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.845111 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.845143 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.845184 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.845240 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 
13:30:44.845269 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.845318 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.845367 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.845415 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.845459 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.845505 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.845549 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.845588 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.845630 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.845683 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 
13:30:44.845739 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.845782 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.845825 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.845882 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.845929 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.845986 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.846034 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.846074 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.846130 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.838282 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: 
"5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.839685 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.841703 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.842253 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.842795 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.844061 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.844360 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.854290 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.854407 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.854447 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.854494 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.854529 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.854561 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.854593 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.854623 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.854655 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.854682 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.854730 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.854764 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.854804 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.854834 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.854864 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.854898 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.854926 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.854961 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.854992 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.855020 4919 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.855053 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.855089 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.855117 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.855149 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.855183 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.855232 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.855257 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.855287 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.855337 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.855367 
4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.855399 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.855428 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.855459 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.855489 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.855518 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.855548 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.855598 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.855639 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.855665 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 09 13:30:44 crc kubenswrapper[4919]: 
I0109 13:30:44.855696 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.855732 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.855763 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.855797 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.855828 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.855871 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.855900 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.855936 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.855972 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.856001 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: 
\"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.856033 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.856067 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.856094 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.856126 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.857081 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.857136 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.857170 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.857203 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.857255 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.857286 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod 
\"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.857320 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.857355 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.857385 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.857416 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.857447 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.857487 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.857582 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.858050 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.858121 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 09 13:30:44 crc 
kubenswrapper[4919]: I0109 13:30:44.858192 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.858256 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.858285 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.858506 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.858791 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.859901 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.860157 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.860384 4919 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.860479 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.861005 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.861807 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.861816 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.862402 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.862431 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.862586 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.862429 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.863332 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.863805 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.864579 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.861273 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.862856 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.863016 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.864916 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.864948 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.864972 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.864997 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.865024 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.865065 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.865120 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.865936 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 09 13:30:44 crc kubenswrapper[4919]: E0109 13:30:44.867719 4919 configmap.go:193] Couldn't get configMap 
openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.867773 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.867781 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 09 13:30:44 crc kubenswrapper[4919]: E0109 13:30:44.867970 4919 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 09 13:30:44 crc kubenswrapper[4919]: E0109 13:30:44.868081 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-09 13:30:45.368051022 +0000 UTC m=+24.915890482 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.868014 4919 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: E0109 13:30:44.868304 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-09 13:30:45.368291177 +0000 UTC m=+24.916130637 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.868503 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.868627 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.868610 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.868872 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.868903 4919 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.868930 4919 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.868946 4919 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.868961 4919 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.868981 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.868997 4919 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.869013 4919 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.869073 4919 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.869093 4919 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 
13:30:44.869108 4919 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.869122 4919 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.869137 4919 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.869156 4919 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.869171 4919 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.869192 4919 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.869240 4919 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.869258 4919 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.869276 4919 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.869294 4919 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.869309 4919 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.869329 4919 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.869346 4919 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.869362 4919 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.869377 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.869398 4919 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.869415 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.869430 4919 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.869446 4919 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.869465 4919 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.869479 4919 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.869494 4919 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.869512 4919 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.869526 4919 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.869543 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.869560 4919 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.869615 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: 
\"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.869631 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.869641 4919 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.869835 4919 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.869856 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.869869 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.869939 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.869964 4919 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.870000 4919 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.870015 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.870032 4919 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.870048 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.870084 4919 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 09 
13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.870095 4919 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.870109 4919 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.870119 4919 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.869545 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.869841 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.870117 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.870335 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.871326 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.871359 4919 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.871389 4919 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.871405 4919 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.871451 4919 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.871471 4919 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.871487 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.871503 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.871651 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.871782 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.871825 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.871843 4919 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.871860 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.871881 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.871898 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.871914 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.871929 4919 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.871947 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.871961 4919 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.871976 4919 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.872385 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.872849 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.872275 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.872893 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.872948 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.873130 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.873512 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.874197 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.873559 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). 
InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.874649 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.875022 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.875318 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.875526 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.875612 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.875766 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.875783 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.876080 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.876427 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.876557 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.876610 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.876853 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.876927 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.877064 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.877306 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). 
InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: E0109 13:30:44.886790 4919 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 09 13:30:44 crc kubenswrapper[4919]: E0109 13:30:44.886823 4919 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 09 13:30:44 crc kubenswrapper[4919]: E0109 13:30:44.886838 4919 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 13:30:44 crc kubenswrapper[4919]: E0109 13:30:44.887307 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-09 13:30:45.387279413 +0000 UTC m=+24.935118863 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.890377 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.890459 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.890592 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.894627 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 09 13:30:44 crc kubenswrapper[4919]: E0109 13:30:44.895091 4919 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 09 13:30:44 crc kubenswrapper[4919]: E0109 13:30:44.895120 4919 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 09 13:30:44 crc kubenswrapper[4919]: E0109 13:30:44.895133 4919 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 13:30:44 crc kubenswrapper[4919]: E0109 13:30:44.895186 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-09 13:30:45.395166148 +0000 UTC m=+24.943005598 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.895317 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.895510 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.895685 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.895979 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.896019 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.896089 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.896957 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.898025 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.898180 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.904523 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.906050 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). 
InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.906165 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.906462 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.907617 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.908003 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.910451 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.910659 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.911298 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.911515 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.914612 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.914683 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.915011 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.915053 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.915441 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.915611 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.918808 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.919401 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.919462 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.919605 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.920536 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.923867 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"86b3f68aa173942f9d95a3832234e0bbd5ccacf63cd73b8d110708ebd0978a4c"} Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.924403 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.924598 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.927985 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.928481 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.928701 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.928904 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.929587 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.929864 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.930556 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.931479 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.933410 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.933683 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.933669 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.933742 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.934608 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.934899 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.935058 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.935416 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.935490 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.935813 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.935960 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.936002 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.936201 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.936735 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.936952 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.937376 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.938272 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.938508 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.938686 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.938739 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.938927 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.941478 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.941557 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.944503 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.944569 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.944811 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.948709 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.948770 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.950019 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.950368 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.951640 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.951739 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.952274 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.952790 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.955572 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.957315 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.957838 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.958329 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.958497 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.960420 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.960537 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.960561 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.961032 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.962577 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.963286 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.963811 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.963979 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.966428 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.966798 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.966928 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.972964 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.972964 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.973397 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.973432 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.973491 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\""
Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.973504 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\""
Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.973514 4919 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.973525 4919 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.973537 4919 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.973546 4919 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.973557 4919 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.973571 4919 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.973580 4919 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.973591 4919 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.973604 4919 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\""
Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.973615 4919 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.973624 4919 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.973633 4919 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\""
Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.973643 4919 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\""
Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.973651 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\""
Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.973662 4919 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\""
Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.973671 4919 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\""
Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.973681 4919 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\""
Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.973691 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\""
Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.973701 4919 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.973709 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\""
Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.973721 4919 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.973729 4919 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.973738 4919 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.973746 4919 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\""
Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.973757 4919 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.973765 4919 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.973775 4919 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.973807 4919 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.973816 4919 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\""
Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.973827 4919 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.973835 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\""
Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.973846 4919 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.973854 4919 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.973865 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\""
Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.973891 4919 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\""
Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.973913 4919 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.973924 4919 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\""
Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.973935 4919 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.973949 4919 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.973961 4919 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.973971 4919 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\""
Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.973979 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\""
Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.973988 4919 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\""
kubenswrapper[4919]: I0109 13:30:44.973999 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974007 4919 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974016 4919 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974024 4919 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974064 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974076 4919 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974087 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974099 4919 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974110 4919 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974120 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974130 4919 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974138 4919 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974146 4919 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc 
kubenswrapper[4919]: I0109 13:30:44.974156 4919 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974165 4919 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974174 4919 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974183 4919 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974201 4919 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974223 4919 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974234 4919 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974243 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974252 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974261 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974269 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974278 4919 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974287 4919 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974296 4919 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974305 4919 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974313 4919 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974321 4919 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974330 4919 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974339 4919 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974348 4919 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974358 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974367 4919 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974375 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974385 4919 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974393 4919 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974402 4919 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974411 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974420 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974430 4919 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974438 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974446 4919 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974427 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974456 4919 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974498 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974528 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974545 4919 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974557 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974571 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974587 4919 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974600 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974600 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974614 4919 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974703 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974719 4919 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974731 4919 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974744 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974755 4919 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974764 4919 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974776 4919 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974785 4919 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974799 4919 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974808 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974818 4919 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974828 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974867 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974878 4919 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974889 4919 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974898 4919 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974909 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974919 4919 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974931 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974942 4919 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974951 4919 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.974558 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.985250 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.997422 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:30:44 crc kubenswrapper[4919]: I0109 13:30:44.998945 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.007501 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.011978 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.033467 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.033747 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 09 13:30:45 crc kubenswrapper[4919]: W0109 13:30:45.047182 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-4dc00bbf6a30a993fde8ba7efaf90862770844c710bd416897b69b32ec1924fa WatchSource:0}: Error finding container 4dc00bbf6a30a993fde8ba7efaf90862770844c710bd416897b69b32ec1924fa: Status 404 returned error can't find the container with id 4dc00bbf6a30a993fde8ba7efaf90862770844c710bd416897b69b32ec1924fa Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.065436 4919 csr.go:261] certificate signing request csr-dj6mx is approved, waiting to be issued Jan 09 13:30:45 crc kubenswrapper[4919]: W0109 13:30:45.067971 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-4abaadb24fa0e3e4d5362a4efb0b4e705b2ffa322d7501d7595fb3b7000ef364 WatchSource:0}: Error finding container 4abaadb24fa0e3e4d5362a4efb0b4e705b2ffa322d7501d7595fb3b7000ef364: Status 404 returned error can't find the container with id 4abaadb24fa0e3e4d5362a4efb0b4e705b2ffa322d7501d7595fb3b7000ef364 Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.068296 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.076608 4919 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.076639 4919 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.076663 4919 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.076678 4919 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.087297 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.092259 4919 csr.go:257] certificate signing request csr-dj6mx is issued Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.105068 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.117804 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.379586 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.379672 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.379722 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 13:30:45 crc kubenswrapper[4919]: E0109 13:30:45.379788 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:30:46.379759933 +0000 UTC m=+25.927599383 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:30:45 crc kubenswrapper[4919]: E0109 13:30:45.379826 4919 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 09 13:30:45 crc kubenswrapper[4919]: E0109 13:30:45.379859 4919 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 09 13:30:45 crc kubenswrapper[4919]: E0109 13:30:45.379885 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-09 13:30:46.379869966 +0000 UTC m=+25.927709416 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 09 13:30:45 crc kubenswrapper[4919]: E0109 13:30:45.379967 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-09 13:30:46.379946178 +0000 UTC m=+25.927785638 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.480723 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.480784 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 13:30:45 crc kubenswrapper[4919]: E0109 13:30:45.480907 4919 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 09 13:30:45 crc kubenswrapper[4919]: E0109 13:30:45.480930 4919 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 09 13:30:45 crc kubenswrapper[4919]: E0109 13:30:45.480943 4919 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 13:30:45 crc kubenswrapper[4919]: E0109 13:30:45.480944 4919 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 09 13:30:45 crc kubenswrapper[4919]: E0109 13:30:45.480982 4919 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 09 13:30:45 crc kubenswrapper[4919]: E0109 13:30:45.480996 4919 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 13:30:45 crc kubenswrapper[4919]: E0109 13:30:45.480998 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-09 13:30:46.480983897 +0000 UTC m=+26.028823347 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 13:30:45 crc kubenswrapper[4919]: E0109 13:30:45.481055 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-09 13:30:46.481038129 +0000 UTC m=+26.028877579 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.492548 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-9z7cc"] Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.492869 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-9z7cc" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.498690 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.499097 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.499407 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.529403 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a055b487-5d63-4265-ac12-735612354e73\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903fbe8270e684937c51c54aeaaaaf7cfed5e77f6d76d5874210ffcf608a9afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3759051eb43a6806b9525498f376abc060e88ba0d7bc9274e84eaabfdaefd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8db1dada305e10f10cb4fe92cfaa3fc88e2e2ec3338fa31efc240af0888f60c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b3f68aa173942f9d95a3832234e0bbd5ccacf63cd73b8d110708ebd0978a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d7857918228a0905bb39cd161f6ba21561cf49ce7980a1835e5b3cc68cef3c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T13:30:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0109 13:30:33.398307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 13:30:33.400005 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4019686595/tls.crt::/tmp/serving-cert-4019686595/tls.key\\\\\\\"\\\\nI0109 13:30:43.320927 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 13:30:43.328283 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 13:30:43.328361 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 13:30:43.328421 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 13:30:43.328435 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 13:30:43.338653 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 13:30:43.338748 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 13:30:43.338794 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 13:30:43.338801 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 13:30:43.338808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 13:30:43.339373 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 13:30:43.343716 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2d0c02e83be85aa5925b8f607809b6bbf3db3188f4e165ca42eb04fe694da6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:45Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.544622 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:45Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.559700 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:45Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.575041 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9z7cc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1115c0ba-16d5-4e81-a4b4-07ba7f360825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkvnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9z7cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:45Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.581683 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkvnj\" (UniqueName: \"kubernetes.io/projected/1115c0ba-16d5-4e81-a4b4-07ba7f360825-kube-api-access-jkvnj\") pod \"node-resolver-9z7cc\" (UID: \"1115c0ba-16d5-4e81-a4b4-07ba7f360825\") " pod="openshift-dns/node-resolver-9z7cc" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.581724 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1115c0ba-16d5-4e81-a4b4-07ba7f360825-hosts-file\") pod \"node-resolver-9z7cc\" (UID: \"1115c0ba-16d5-4e81-a4b4-07ba7f360825\") " pod="openshift-dns/node-resolver-9z7cc" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.591092 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:45Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.614366 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:45Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.630332 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:45Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.644039 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:45Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.682563 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkvnj\" (UniqueName: \"kubernetes.io/projected/1115c0ba-16d5-4e81-a4b4-07ba7f360825-kube-api-access-jkvnj\") pod \"node-resolver-9z7cc\" (UID: \"1115c0ba-16d5-4e81-a4b4-07ba7f360825\") " pod="openshift-dns/node-resolver-9z7cc" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.682607 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1115c0ba-16d5-4e81-a4b4-07ba7f360825-hosts-file\") pod \"node-resolver-9z7cc\" (UID: \"1115c0ba-16d5-4e81-a4b4-07ba7f360825\") " pod="openshift-dns/node-resolver-9z7cc" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.682698 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1115c0ba-16d5-4e81-a4b4-07ba7f360825-hosts-file\") pod \"node-resolver-9z7cc\" (UID: \"1115c0ba-16d5-4e81-a4b4-07ba7f360825\") " pod="openshift-dns/node-resolver-9z7cc" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.705935 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkvnj\" (UniqueName: \"kubernetes.io/projected/1115c0ba-16d5-4e81-a4b4-07ba7f360825-kube-api-access-jkvnj\") pod \"node-resolver-9z7cc\" (UID: \"1115c0ba-16d5-4e81-a4b4-07ba7f360825\") " pod="openshift-dns/node-resolver-9z7cc" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.751016 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.751083 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 13:30:45 crc kubenswrapper[4919]: E0109 13:30:45.751155 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 13:30:45 crc kubenswrapper[4919]: E0109 13:30:45.751281 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.805361 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-9z7cc" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.895753 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-9m5lv"] Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.896184 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.896922 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-97zdz"] Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.897496 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-97zdz" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.902048 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.902639 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.902712 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.902933 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.903225 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-kgw8v"] Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.903703 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-kgw8v" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.906442 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.906956 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.907493 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-w74hl"] Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.908314 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.912109 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.912304 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.912440 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.912588 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.912728 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.913568 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.918021 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.918479 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.918861 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.919070 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.919295 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.919509 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.919715 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.935600 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a055b487-5d63-4265-ac12-735612354e73\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903fbe8270e684937c51c54aeaaaaf7cfed5e77f6d76d5874210ffcf608a9afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3759051eb43a6806b9525498f376abc060e88ba0d7bc9274e84eaabfdaefd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8db1dada305e10f10cb4fe92cfaa3fc88e2e2ec3338fa31efc240af0888f60c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b3f68aa173942f9d95a3832234e0bbd5ccacf63cd73b8d110708ebd0978a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d7857918228a0905bb39cd161f6ba21561cf49ce7980a1835e5b3cc68cef3c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T13:30:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0109 13:30:33.398307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 13:30:33.400005 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4019686595/tls.crt::/tmp/serving-cert-4019686595/tls.key\\\\\\\"\\\\nI0109 13:30:43.320927 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 13:30:43.328283 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 13:30:43.328361 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 13:30:43.328421 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 13:30:43.328435 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 13:30:43.338653 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 13:30:43.338748 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 13:30:43.338794 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 13:30:43.338801 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 13:30:43.338808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 13:30:43.339373 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 13:30:43.343716 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2d0c02e83be85aa5925b8f607809b6bbf3db3188f4e165ca42eb04fe694da6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:45Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.936684 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-9z7cc" event={"ID":"1115c0ba-16d5-4e81-a4b4-07ba7f360825","Type":"ContainerStarted","Data":"2428ee9dfa755d54cd9726b39515840b25415df55d8a212d547597e4d2b6f8c1"} Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.940463 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"d3d6c98ae6aa3041e47bac4782d73902dc39137b2fc8550efce479bfb9cdf37b"} Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.940490 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"4dc00bbf6a30a993fde8ba7efaf90862770844c710bd416897b69b32ec1924fa"} Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.944390 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"b3f66189f627ddedba6b2bcb45f145ef53485a9b0b7401ca5fa09363145fce81"} Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.944435 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"c247b3e6a38c79833e097898da1fe92f3dabc7dcd7b71a618506f054f4fd9c06"} Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.944448 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"ca3fcbddff893349d21eac2a50f9b8e4a5b4f5912fb56697039fa77f94ae8dd9"} Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.945953 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"4abaadb24fa0e3e4d5362a4efb0b4e705b2ffa322d7501d7595fb3b7000ef364"} Jan 09 13:30:45 crc kubenswrapper[4919]: E0109 13:30:45.976003 4919 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.983337 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:45Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.984724 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/21befbc8-9e98-4557-89af-a116cc8c484c-system-cni-dir\") pod \"multus-additional-cni-plugins-97zdz\" (UID: \"21befbc8-9e98-4557-89af-a116cc8c484c\") " pod="openshift-multus/multus-additional-cni-plugins-97zdz" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.984779 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/11e19b4a-0888-460f-bf97-5dd0ddda6e8c-host-run-multus-certs\") pod \"multus-kgw8v\" (UID: \"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\") " pod="openshift-multus/multus-kgw8v" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.984822 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/21befbc8-9e98-4557-89af-a116cc8c484c-os-release\") pod \"multus-additional-cni-plugins-97zdz\" (UID: \"21befbc8-9e98-4557-89af-a116cc8c484c\") " pod="openshift-multus/multus-additional-cni-plugins-97zdz" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.984847 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/11e19b4a-0888-460f-bf97-5dd0ddda6e8c-multus-daemon-config\") pod \"multus-kgw8v\" (UID: \"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\") " pod="openshift-multus/multus-kgw8v" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.984873 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b842de7d-a43c-4884-a3c4-c3ffa2eabc7c-mcd-auth-proxy-config\") pod \"machine-config-daemon-9m5lv\" (UID: \"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c\") " pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.989200 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/21befbc8-9e98-4557-89af-a116cc8c484c-cni-binary-copy\") pod \"multus-additional-cni-plugins-97zdz\" (UID: \"21befbc8-9e98-4557-89af-a116cc8c484c\") " pod="openshift-multus/multus-additional-cni-plugins-97zdz" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.989279 4919 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/11e19b4a-0888-460f-bf97-5dd0ddda6e8c-multus-cni-dir\") pod \"multus-kgw8v\" (UID: \"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\") " pod="openshift-multus/multus-kgw8v" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.989307 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/11e19b4a-0888-460f-bf97-5dd0ddda6e8c-os-release\") pod \"multus-kgw8v\" (UID: \"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\") " pod="openshift-multus/multus-kgw8v" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.989336 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n299m\" (UniqueName: \"kubernetes.io/projected/b842de7d-a43c-4884-a3c4-c3ffa2eabc7c-kube-api-access-n299m\") pod \"machine-config-daemon-9m5lv\" (UID: \"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c\") " pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.989402 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b842de7d-a43c-4884-a3c4-c3ffa2eabc7c-proxy-tls\") pod \"machine-config-daemon-9m5lv\" (UID: \"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c\") " pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.989431 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/11e19b4a-0888-460f-bf97-5dd0ddda6e8c-host-var-lib-kubelet\") pod \"multus-kgw8v\" (UID: \"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\") " pod="openshift-multus/multus-kgw8v" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.989450 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/11e19b4a-0888-460f-bf97-5dd0ddda6e8c-hostroot\") pod \"multus-kgw8v\" (UID: \"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\") " pod="openshift-multus/multus-kgw8v" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.989474 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/11e19b4a-0888-460f-bf97-5dd0ddda6e8c-cnibin\") pod \"multus-kgw8v\" (UID: \"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\") " pod="openshift-multus/multus-kgw8v" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.989495 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/11e19b4a-0888-460f-bf97-5dd0ddda6e8c-system-cni-dir\") pod \"multus-kgw8v\" (UID: \"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\") " pod="openshift-multus/multus-kgw8v" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.989518 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/11e19b4a-0888-460f-bf97-5dd0ddda6e8c-multus-socket-dir-parent\") pod \"multus-kgw8v\" (UID: \"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\") " pod="openshift-multus/multus-kgw8v" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.989536 4919 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/11e19b4a-0888-460f-bf97-5dd0ddda6e8c-host-var-lib-cni-multus\") pod \"multus-kgw8v\" (UID: \"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\") " pod="openshift-multus/multus-kgw8v" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.989568 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/11e19b4a-0888-460f-bf97-5dd0ddda6e8c-host-run-netns\") pod \"multus-kgw8v\" (UID: \"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\") " pod="openshift-multus/multus-kgw8v" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.989609 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhh4c\" (UniqueName: \"kubernetes.io/projected/21befbc8-9e98-4557-89af-a116cc8c484c-kube-api-access-fhh4c\") pod \"multus-additional-cni-plugins-97zdz\" (UID: \"21befbc8-9e98-4557-89af-a116cc8c484c\") " pod="openshift-multus/multus-additional-cni-plugins-97zdz" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.989646 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/11e19b4a-0888-460f-bf97-5dd0ddda6e8c-multus-conf-dir\") pod \"multus-kgw8v\" (UID: \"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\") " pod="openshift-multus/multus-kgw8v" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.989673 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b842de7d-a43c-4884-a3c4-c3ffa2eabc7c-rootfs\") pod \"machine-config-daemon-9m5lv\" (UID: \"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c\") " pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.989710 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/21befbc8-9e98-4557-89af-a116cc8c484c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-97zdz\" (UID: \"21befbc8-9e98-4557-89af-a116cc8c484c\") " pod="openshift-multus/multus-additional-cni-plugins-97zdz" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.989736 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/11e19b4a-0888-460f-bf97-5dd0ddda6e8c-cni-binary-copy\") pod \"multus-kgw8v\" (UID: \"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\") " pod="openshift-multus/multus-kgw8v" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.989754 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/11e19b4a-0888-460f-bf97-5dd0ddda6e8c-host-run-k8s-cni-cncf-io\") pod \"multus-kgw8v\" (UID: \"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\") " pod="openshift-multus/multus-kgw8v" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.989795 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/21befbc8-9e98-4557-89af-a116cc8c484c-cnibin\") pod \"multus-additional-cni-plugins-97zdz\" (UID: \"21befbc8-9e98-4557-89af-a116cc8c484c\") " 
pod="openshift-multus/multus-additional-cni-plugins-97zdz" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.989821 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/21befbc8-9e98-4557-89af-a116cc8c484c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-97zdz\" (UID: \"21befbc8-9e98-4557-89af-a116cc8c484c\") " pod="openshift-multus/multus-additional-cni-plugins-97zdz" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.989841 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/11e19b4a-0888-460f-bf97-5dd0ddda6e8c-etc-kubernetes\") pod \"multus-kgw8v\" (UID: \"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\") " pod="openshift-multus/multus-kgw8v" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.989871 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srz24\" (UniqueName: \"kubernetes.io/projected/11e19b4a-0888-460f-bf97-5dd0ddda6e8c-kube-api-access-srz24\") pod \"multus-kgw8v\" (UID: \"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\") " pod="openshift-multus/multus-kgw8v" Jan 09 13:30:45 crc kubenswrapper[4919]: I0109 13:30:45.989897 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/11e19b4a-0888-460f-bf97-5dd0ddda6e8c-host-var-lib-cni-bin\") pod \"multus-kgw8v\" (UID: \"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\") " pod="openshift-multus/multus-kgw8v" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.007733 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:46Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.026301 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9m5lv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:46Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.048427 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:46Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.069891 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:46Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.089264 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:46Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.090458 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-var-lib-openvswitch\") pod \"ovnkube-node-w74hl\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.090500 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-node-log\") pod \"ovnkube-node-w74hl\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.090545 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhh4c\" (UniqueName: \"kubernetes.io/projected/21befbc8-9e98-4557-89af-a116cc8c484c-kube-api-access-fhh4c\") pod \"multus-additional-cni-plugins-97zdz\" (UID: \"21befbc8-9e98-4557-89af-a116cc8c484c\") " pod="openshift-multus/multus-additional-cni-plugins-97zdz" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.090565 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/11e19b4a-0888-460f-bf97-5dd0ddda6e8c-multus-conf-dir\") pod \"multus-kgw8v\" (UID: \"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\") " pod="openshift-multus/multus-kgw8v" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.090583 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b842de7d-a43c-4884-a3c4-c3ffa2eabc7c-rootfs\") pod \"machine-config-daemon-9m5lv\" (UID: \"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c\") " pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.090603 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-host-run-netns\") pod \"ovnkube-node-w74hl\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:46 crc 
kubenswrapper[4919]: I0109 13:30:46.090690 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4a11a9b6-2419-4f04-b35e-ba296d70b705-ovn-node-metrics-cert\") pod \"ovnkube-node-w74hl\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.090739 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b842de7d-a43c-4884-a3c4-c3ffa2eabc7c-rootfs\") pod \"machine-config-daemon-9m5lv\" (UID: \"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c\") " pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.090880 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/11e19b4a-0888-460f-bf97-5dd0ddda6e8c-multus-conf-dir\") pod \"multus-kgw8v\" (UID: \"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\") " pod="openshift-multus/multus-kgw8v" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.090958 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/21befbc8-9e98-4557-89af-a116cc8c484c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-97zdz\" (UID: \"21befbc8-9e98-4557-89af-a116cc8c484c\") " pod="openshift-multus/multus-additional-cni-plugins-97zdz" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.091015 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/11e19b4a-0888-460f-bf97-5dd0ddda6e8c-cni-binary-copy\") pod \"multus-kgw8v\" (UID: \"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\") " pod="openshift-multus/multus-kgw8v" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.091064 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/11e19b4a-0888-460f-bf97-5dd0ddda6e8c-host-run-k8s-cni-cncf-io\") pod \"multus-kgw8v\" (UID: \"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\") " pod="openshift-multus/multus-kgw8v" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.091172 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/11e19b4a-0888-460f-bf97-5dd0ddda6e8c-host-run-k8s-cni-cncf-io\") pod \"multus-kgw8v\" (UID: \"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\") " pod="openshift-multus/multus-kgw8v" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.091218 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-w74hl\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.091268 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/21befbc8-9e98-4557-89af-a116cc8c484c-cnibin\") pod \"multus-additional-cni-plugins-97zdz\" (UID: \"21befbc8-9e98-4557-89af-a116cc8c484c\") " pod="openshift-multus/multus-additional-cni-plugins-97zdz" Jan 09 13:30:46 crc 
kubenswrapper[4919]: I0109 13:30:46.091267 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/21befbc8-9e98-4557-89af-a116cc8c484c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-97zdz\" (UID: \"21befbc8-9e98-4557-89af-a116cc8c484c\") " pod="openshift-multus/multus-additional-cni-plugins-97zdz" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.091286 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/21befbc8-9e98-4557-89af-a116cc8c484c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-97zdz\" (UID: \"21befbc8-9e98-4557-89af-a116cc8c484c\") " pod="openshift-multus/multus-additional-cni-plugins-97zdz" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.091343 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/11e19b4a-0888-460f-bf97-5dd0ddda6e8c-etc-kubernetes\") pod \"multus-kgw8v\" (UID: \"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\") " pod="openshift-multus/multus-kgw8v" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.091365 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srz24\" (UniqueName: \"kubernetes.io/projected/11e19b4a-0888-460f-bf97-5dd0ddda6e8c-kube-api-access-srz24\") pod \"multus-kgw8v\" (UID: \"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\") " pod="openshift-multus/multus-kgw8v" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.091387 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-run-systemd\") pod \"ovnkube-node-w74hl\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.091406 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/11e19b4a-0888-460f-bf97-5dd0ddda6e8c-host-var-lib-cni-bin\") pod \"multus-kgw8v\" (UID: \"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\") " pod="openshift-multus/multus-kgw8v" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.091425 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-etc-openvswitch\") pod \"ovnkube-node-w74hl\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.091440 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-log-socket\") pod \"ovnkube-node-w74hl\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.091455 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4a11a9b6-2419-4f04-b35e-ba296d70b705-ovnkube-config\") pod \"ovnkube-node-w74hl\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:46 crc 
kubenswrapper[4919]: I0109 13:30:46.091477 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/21befbc8-9e98-4557-89af-a116cc8c484c-system-cni-dir\") pod \"multus-additional-cni-plugins-97zdz\" (UID: \"21befbc8-9e98-4557-89af-a116cc8c484c\") " pod="openshift-multus/multus-additional-cni-plugins-97zdz" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.091505 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/11e19b4a-0888-460f-bf97-5dd0ddda6e8c-host-run-multus-certs\") pod \"multus-kgw8v\" (UID: \"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\") " pod="openshift-multus/multus-kgw8v" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.091535 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/11e19b4a-0888-460f-bf97-5dd0ddda6e8c-host-run-multus-certs\") pod \"multus-kgw8v\" (UID: \"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\") " pod="openshift-multus/multus-kgw8v" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.091537 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-systemd-units\") pod \"ovnkube-node-w74hl\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.091560 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-host-cni-bin\") pod \"ovnkube-node-w74hl\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.091561 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/11e19b4a-0888-460f-bf97-5dd0ddda6e8c-etc-kubernetes\") pod \"multus-kgw8v\" (UID: \"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\") " pod="openshift-multus/multus-kgw8v" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.091580 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/21befbc8-9e98-4557-89af-a116cc8c484c-os-release\") pod \"multus-additional-cni-plugins-97zdz\" (UID: \"21befbc8-9e98-4557-89af-a116cc8c484c\") " pod="openshift-multus/multus-additional-cni-plugins-97zdz" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.091603 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/11e19b4a-0888-460f-bf97-5dd0ddda6e8c-multus-daemon-config\") pod \"multus-kgw8v\" (UID: \"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\") " pod="openshift-multus/multus-kgw8v" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.091636 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b842de7d-a43c-4884-a3c4-c3ffa2eabc7c-mcd-auth-proxy-config\") pod \"machine-config-daemon-9m5lv\" (UID: \"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c\") " pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 
13:30:46.091656 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-host-cni-netd\") pod \"ovnkube-node-w74hl\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.091673 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/11e19b4a-0888-460f-bf97-5dd0ddda6e8c-os-release\") pod \"multus-kgw8v\" (UID: \"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\") " pod="openshift-multus/multus-kgw8v" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.091711 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/21befbc8-9e98-4557-89af-a116cc8c484c-cni-binary-copy\") pod \"multus-additional-cni-plugins-97zdz\" (UID: \"21befbc8-9e98-4557-89af-a116cc8c484c\") " pod="openshift-multus/multus-additional-cni-plugins-97zdz" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.091732 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/11e19b4a-0888-460f-bf97-5dd0ddda6e8c-multus-cni-dir\") pod \"multus-kgw8v\" (UID: \"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\") " pod="openshift-multus/multus-kgw8v" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.091741 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/11e19b4a-0888-460f-bf97-5dd0ddda6e8c-host-var-lib-cni-bin\") pod \"multus-kgw8v\" (UID: \"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\") " pod="openshift-multus/multus-kgw8v" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.091750 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n299m\" (UniqueName: \"kubernetes.io/projected/b842de7d-a43c-4884-a3c4-c3ffa2eabc7c-kube-api-access-n299m\") pod \"machine-config-daemon-9m5lv\" (UID: \"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c\") " pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.091775 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-host-kubelet\") pod \"ovnkube-node-w74hl\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.091805 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/11e19b4a-0888-460f-bf97-5dd0ddda6e8c-cni-binary-copy\") pod \"multus-kgw8v\" (UID: \"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\") " pod="openshift-multus/multus-kgw8v" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.091905 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/11e19b4a-0888-460f-bf97-5dd0ddda6e8c-os-release\") pod \"multus-kgw8v\" (UID: \"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\") " pod="openshift-multus/multus-kgw8v" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.091953 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/11e19b4a-0888-460f-bf97-5dd0ddda6e8c-multus-cni-dir\") pod \"multus-kgw8v\" (UID: \"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\") " pod="openshift-multus/multus-kgw8v" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.091984 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/21befbc8-9e98-4557-89af-a116cc8c484c-cnibin\") pod \"multus-additional-cni-plugins-97zdz\" (UID: \"21befbc8-9e98-4557-89af-a116cc8c484c\") " pod="openshift-multus/multus-additional-cni-plugins-97zdz" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.092027 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/21befbc8-9e98-4557-89af-a116cc8c484c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-97zdz\" (UID: \"21befbc8-9e98-4557-89af-a116cc8c484c\") " pod="openshift-multus/multus-additional-cni-plugins-97zdz" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.092361 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/21befbc8-9e98-4557-89af-a116cc8c484c-cni-binary-copy\") pod \"multus-additional-cni-plugins-97zdz\" (UID: \"21befbc8-9e98-4557-89af-a116cc8c484c\") " pod="openshift-multus/multus-additional-cni-plugins-97zdz" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.092531 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/11e19b4a-0888-460f-bf97-5dd0ddda6e8c-multus-daemon-config\") pod \"multus-kgw8v\" (UID: \"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\") " pod="openshift-multus/multus-kgw8v" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.092573 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4a11a9b6-2419-4f04-b35e-ba296d70b705-ovnkube-script-lib\") pod \"ovnkube-node-w74hl\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.092594 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/21befbc8-9e98-4557-89af-a116cc8c484c-os-release\") pod \"multus-additional-cni-plugins-97zdz\" (UID: \"21befbc8-9e98-4557-89af-a116cc8c484c\") " pod="openshift-multus/multus-additional-cni-plugins-97zdz" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.092615 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/21befbc8-9e98-4557-89af-a116cc8c484c-system-cni-dir\") pod \"multus-additional-cni-plugins-97zdz\" (UID: \"21befbc8-9e98-4557-89af-a116cc8c484c\") " pod="openshift-multus/multus-additional-cni-plugins-97zdz" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.092636 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b842de7d-a43c-4884-a3c4-c3ffa2eabc7c-proxy-tls\") pod \"machine-config-daemon-9m5lv\" (UID: \"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c\") " pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.092657 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/11e19b4a-0888-460f-bf97-5dd0ddda6e8c-host-var-lib-kubelet\") pod \"multus-kgw8v\" (UID: \"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\") " pod="openshift-multus/multus-kgw8v" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.092681 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/11e19b4a-0888-460f-bf97-5dd0ddda6e8c-host-var-lib-kubelet\") pod \"multus-kgw8v\" (UID: \"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\") " pod="openshift-multus/multus-kgw8v" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.092688 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/11e19b4a-0888-460f-bf97-5dd0ddda6e8c-hostroot\") pod \"multus-kgw8v\" (UID: \"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\") " pod="openshift-multus/multus-kgw8v" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.092712 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/11e19b4a-0888-460f-bf97-5dd0ddda6e8c-cnibin\") pod \"multus-kgw8v\" (UID: \"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\") " pod="openshift-multus/multus-kgw8v" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.092731 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-run-openvswitch\") pod \"ovnkube-node-w74hl\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.092749 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-host-run-ovn-kubernetes\") pod \"ovnkube-node-w74hl\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.092772 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6jvn\" (UniqueName: \"kubernetes.io/projected/4a11a9b6-2419-4f04-b35e-ba296d70b705-kube-api-access-h6jvn\") pod \"ovnkube-node-w74hl\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.092790 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/11e19b4a-0888-460f-bf97-5dd0ddda6e8c-system-cni-dir\") pod \"multus-kgw8v\" (UID: \"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\") " pod="openshift-multus/multus-kgw8v" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.092800 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/11e19b4a-0888-460f-bf97-5dd0ddda6e8c-cnibin\") pod \"multus-kgw8v\" (UID: \"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\") " pod="openshift-multus/multus-kgw8v" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.092814 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/11e19b4a-0888-460f-bf97-5dd0ddda6e8c-hostroot\") pod \"multus-kgw8v\" (UID: \"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\") " pod="openshift-multus/multus-kgw8v" 
Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.092808 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/11e19b4a-0888-460f-bf97-5dd0ddda6e8c-multus-socket-dir-parent\") pod \"multus-kgw8v\" (UID: \"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\") " pod="openshift-multus/multus-kgw8v" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.092838 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/11e19b4a-0888-460f-bf97-5dd0ddda6e8c-multus-socket-dir-parent\") pod \"multus-kgw8v\" (UID: \"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\") " pod="openshift-multus/multus-kgw8v" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.092869 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/11e19b4a-0888-460f-bf97-5dd0ddda6e8c-host-var-lib-cni-multus\") pod \"multus-kgw8v\" (UID: \"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\") " pod="openshift-multus/multus-kgw8v" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.092883 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/11e19b4a-0888-460f-bf97-5dd0ddda6e8c-system-cni-dir\") pod \"multus-kgw8v\" (UID: \"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\") " pod="openshift-multus/multus-kgw8v" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.092894 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-host-slash\") pod \"ovnkube-node-w74hl\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.092915 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/11e19b4a-0888-460f-bf97-5dd0ddda6e8c-host-var-lib-cni-multus\") pod \"multus-kgw8v\" (UID: \"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\") " pod="openshift-multus/multus-kgw8v" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.092958 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b842de7d-a43c-4884-a3c4-c3ffa2eabc7c-mcd-auth-proxy-config\") pod \"machine-config-daemon-9m5lv\" (UID: \"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c\") " pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.092964 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/11e19b4a-0888-460f-bf97-5dd0ddda6e8c-host-run-netns\") pod \"multus-kgw8v\" (UID: \"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\") " pod="openshift-multus/multus-kgw8v" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.093016 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-run-ovn\") pod \"ovnkube-node-w74hl\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.093037 4919 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4a11a9b6-2419-4f04-b35e-ba296d70b705-env-overrides\") pod \"ovnkube-node-w74hl\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.092987 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/11e19b4a-0888-460f-bf97-5dd0ddda6e8c-host-run-netns\") pod \"multus-kgw8v\" (UID: \"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\") " pod="openshift-multus/multus-kgw8v" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.093104 4919 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-09 13:25:45 +0000 UTC, rotation deadline is 2026-09-22 11:02:27.78957881 +0000 UTC Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.093140 4919 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6141h31m41.696440175s for next certificate rotation Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.098583 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b842de7d-a43c-4884-a3c4-c3ffa2eabc7c-proxy-tls\") pod \"machine-config-daemon-9m5lv\" (UID: \"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c\") " pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.106233 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srz24\" (UniqueName: \"kubernetes.io/projected/11e19b4a-0888-460f-bf97-5dd0ddda6e8c-kube-api-access-srz24\") pod \"multus-kgw8v\" (UID: \"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\") " pod="openshift-multus/multus-kgw8v" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.110918 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhh4c\" (UniqueName: \"kubernetes.io/projected/21befbc8-9e98-4557-89af-a116cc8c484c-kube-api-access-fhh4c\") pod \"multus-additional-cni-plugins-97zdz\" (UID: \"21befbc8-9e98-4557-89af-a116cc8c484c\") " pod="openshift-multus/multus-additional-cni-plugins-97zdz" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.114385 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n299m\" (UniqueName: \"kubernetes.io/projected/b842de7d-a43c-4884-a3c4-c3ffa2eabc7c-kube-api-access-n299m\") pod \"machine-config-daemon-9m5lv\" (UID: \"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c\") " pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.120112 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:46Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.130894 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9z7cc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1115c0ba-16d5-4e81-a4b4-07ba7f360825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkvnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9z7cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:46Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.145860 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:46Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.164366 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f66189f627ddedba6b2bcb45f145ef53485a9b0b7401ca5fa09363145fce81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c247b3e6a38c79833e097898da1fe92f3dabc7dcd7b71a618506f054f4fd9c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:46Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.187715 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a11a9b6-2419-4f04-b35e-ba296d70b705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w74hl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:46Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.194290 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-host-slash\") pod \"ovnkube-node-w74hl\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:46 crc 
kubenswrapper[4919]: I0109 13:30:46.194360 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-run-openvswitch\") pod \"ovnkube-node-w74hl\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.194391 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-host-run-ovn-kubernetes\") pod \"ovnkube-node-w74hl\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.194420 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6jvn\" (UniqueName: \"kubernetes.io/projected/4a11a9b6-2419-4f04-b35e-ba296d70b705-kube-api-access-h6jvn\") pod \"ovnkube-node-w74hl\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.194454 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-run-ovn\") pod \"ovnkube-node-w74hl\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.194478 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4a11a9b6-2419-4f04-b35e-ba296d70b705-env-overrides\") pod \"ovnkube-node-w74hl\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.194492 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-run-openvswitch\") pod \"ovnkube-node-w74hl\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.194505 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-host-slash\") pod \"ovnkube-node-w74hl\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.194564 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-run-ovn\") pod \"ovnkube-node-w74hl\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.194569 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-var-lib-openvswitch\") pod \"ovnkube-node-w74hl\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.194579 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-host-run-ovn-kubernetes\") pod \"ovnkube-node-w74hl\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.194508 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-var-lib-openvswitch\") pod \"ovnkube-node-w74hl\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.194782 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-node-log\") pod \"ovnkube-node-w74hl\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.194860 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-host-run-netns\") pod \"ovnkube-node-w74hl\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.194904 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4a11a9b6-2419-4f04-b35e-ba296d70b705-ovn-node-metrics-cert\") pod \"ovnkube-node-w74hl\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.194933 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-host-run-netns\") pod \"ovnkube-node-w74hl\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.194965 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-w74hl\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.194998 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-run-systemd\") pod \"ovnkube-node-w74hl\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.195032 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-etc-openvswitch\") pod \"ovnkube-node-w74hl\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.195075 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-log-socket\") pod \"ovnkube-node-w74hl\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.195091 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-w74hl\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.195110 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4a11a9b6-2419-4f04-b35e-ba296d70b705-ovnkube-config\") pod \"ovnkube-node-w74hl\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.195135 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-etc-openvswitch\") pod \"ovnkube-node-w74hl\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.195152 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-systemd-units\") pod \"ovnkube-node-w74hl\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.195170 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-run-systemd\") pod \"ovnkube-node-w74hl\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.195180 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-host-cni-bin\") pod \"ovnkube-node-w74hl\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.195274 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-host-cni-netd\") pod \"ovnkube-node-w74hl\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.195320 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-host-kubelet\") pod \"ovnkube-node-w74hl\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.195349 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4a11a9b6-2419-4f04-b35e-ba296d70b705-ovnkube-script-lib\") 
pod \"ovnkube-node-w74hl\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.195394 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-log-socket\") pod \"ovnkube-node-w74hl\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.195353 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4a11a9b6-2419-4f04-b35e-ba296d70b705-env-overrides\") pod \"ovnkube-node-w74hl\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.195464 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-host-cni-netd\") pod \"ovnkube-node-w74hl\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.195475 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-systemd-units\") pod \"ovnkube-node-w74hl\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.195496 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-host-kubelet\") pod \"ovnkube-node-w74hl\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.195519 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-host-cni-bin\") pod \"ovnkube-node-w74hl\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.195723 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-node-log\") pod \"ovnkube-node-w74hl\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.195988 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4a11a9b6-2419-4f04-b35e-ba296d70b705-ovnkube-config\") pod \"ovnkube-node-w74hl\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.196038 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4a11a9b6-2419-4f04-b35e-ba296d70b705-ovnkube-script-lib\") pod \"ovnkube-node-w74hl\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.207003 4919 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:46Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.219116 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.223681 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d6c98ae6aa3041e47bac4782d73902dc39137b2fc8550efce479bfb9cdf37b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:46Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.239923 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:46Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.248162 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-97zdz" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.251512 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9z7cc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1115c0ba-16d5-4e81-a4b4-07ba7f360825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkvnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9z7cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:46Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.255574 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-kgw8v" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.272380 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-97zdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21befbc8-9e98-4557-89af-a116cc8c484c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-97zdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:46Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.284740 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4a11a9b6-2419-4f04-b35e-ba296d70b705-ovn-node-metrics-cert\") pod \"ovnkube-node-w74hl\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.285960 
4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6jvn\" (UniqueName: \"kubernetes.io/projected/4a11a9b6-2419-4f04-b35e-ba296d70b705-kube-api-access-h6jvn\") pod \"ovnkube-node-w74hl\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.293376 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgw8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srz24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgw8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:46Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.316125 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a055b487-5d63-4265-ac12-735612354e73\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903fbe8270e684937c51c54aeaaaaf7cfed5e77f6d76d5874210ffcf608a9afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3759051eb43a6806b9525498f376abc060e88ba0d7bc9274e84eaabfdaefd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\
\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8db1dada305e10f10cb4fe92cfaa3fc88e2e2ec3338fa31efc240af0888f60c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b3f68aa173942f9d95a3832234e0bbd5ccacf63cd73b8d110708ebd0978a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d7857918228a0905bb39cd161f6ba21561cf49ce7980a1835e5b3cc68cef3c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T13:30:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0109 13:30:33.398307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 13:30:33.400005 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4019686595/tls.crt::/tmp/serving-cert-4019686595/tls.key\\\\\\\"\\\\nI0109 13:30:43.320927 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 13:30:43.328283 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 13:30:43.328361 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 13:30:43.328421 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 13:30:43.328435 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 13:30:43.338653 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 13:30:43.338748 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 13:30:43.338794 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 13:30:43.338801 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 13:30:43.338808 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 13:30:43.339373 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 13:30:43.343716 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2d0c02e83be85aa5925b8f607809b6bbf3db3188f4e165ca42eb04fe694da6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:46Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.342171 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:46Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.365786 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9m5lv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:46Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.396982 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 13:30:46 crc kubenswrapper[4919]: E0109 13:30:46.397151 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:30:48.397125554 +0000 UTC m=+27.944965004 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.397198 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.397320 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 13:30:46 crc kubenswrapper[4919]: E0109 13:30:46.397461 4919 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 09 13:30:46 crc kubenswrapper[4919]: E0109 13:30:46.397495 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-09 13:30:48.397488413 +0000 UTC m=+27.945327853 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 09 13:30:46 crc kubenswrapper[4919]: E0109 13:30:46.397822 4919 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 09 13:30:46 crc kubenswrapper[4919]: E0109 13:30:46.397846 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-09 13:30:48.397839481 +0000 UTC m=+27.945678931 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.409501 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.414378 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.423718 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.428966 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d6c98ae6aa3041e47bac4782d73902dc39137b2fc8550efce479bfb9cdf37b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:46Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.445186 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:46Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.462049 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9z7cc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1115c0ba-16d5-4e81-a4b4-07ba7f360825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkvnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9z7cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:46Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.482926 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-97zdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21befbc8-9e98-4557-89af-a116cc8c484c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-97zdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:46Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.498747 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 13:30:46 crc 
kubenswrapper[4919]: I0109 13:30:46.498820 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 13:30:46 crc kubenswrapper[4919]: E0109 13:30:46.498978 4919 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 09 13:30:46 crc kubenswrapper[4919]: E0109 13:30:46.499000 4919 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 09 13:30:46 crc kubenswrapper[4919]: E0109 13:30:46.499015 4919 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 13:30:46 crc kubenswrapper[4919]: E0109 13:30:46.499073 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-09 13:30:48.499055705 +0000 UTC m=+28.046895155 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 13:30:46 crc kubenswrapper[4919]: E0109 13:30:46.499483 4919 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 09 13:30:46 crc kubenswrapper[4919]: E0109 13:30:46.499573 4919 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 09 13:30:46 crc kubenswrapper[4919]: E0109 13:30:46.499589 4919 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 13:30:46 crc kubenswrapper[4919]: E0109 13:30:46.499623 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-09 13:30:48.499612038 +0000 UTC m=+28.047451498 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.499851 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-9m5lv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:46Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.510996 4919 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.512778 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.512909 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.512921 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.513264 4919 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.517888 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgw8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srz24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgw8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:46Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.524354 4919 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.524659 4919 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.525812 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.525853 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.525864 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:46 crc 
kubenswrapper[4919]: I0109 13:30:46.525884 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.525896 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:46Z","lastTransitionTime":"2026-01-09T13:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.538711 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a055b487-5d63-4265-ac12-735612354e73\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903fbe8270e684937c51c54aeaaaaf7cfed5e77f6d76d5874210ffcf608a9afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3759051eb43a6806b9525498f376abc060e88ba0d7bc9274e84eaabfdaefd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8db1dada305e10f10cb4fe92cfaa3fc88e2e2ec3338fa31efc240af0888f60c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b3f68aa173942f9d95a3832234e0bbd5ccacf63cd73b8d110708ebd0978a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d7857918228a0905bb39cd161f6ba21561cf49ce7980a1835e5b3cc68cef3c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T13:30:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0109 13:30:33.398307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 13:30:33.400005 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4019686595/tls.crt::/tmp/serving-cert-4019686595/tls.key\\\\\\\"\\\\nI0109 13:30:43.320927 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 13:30:43.328283 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 13:30:43.328361 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 13:30:43.328421 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 13:30:43.328435 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 13:30:43.338653 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 13:30:43.338748 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 13:30:43.338794 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 13:30:43.338801 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 13:30:43.338808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 13:30:43.339373 1 genericapiserver.go:533] MuxAndDiscoveryComplete has 
all endpoints registered and discovery information is complete\\\\nF0109 13:30:43.343716 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2d0c02e83be85aa5925b8f607809b6bbf3db3188f4e165ca42eb04fe694da6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:46Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:46 crc kubenswrapper[4919]: E0109 13:30:46.546962 4919 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a043b745-924a-464c-80aa-f4df877f55bf\\\",\\\"systemUUID\\\":\\\"4cea77be-9aeb-4181-a0b4-b60e5a362fd9\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:46Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.552960 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:46Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.555118 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.555141 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.555150 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.555166 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.555177 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:46Z","lastTransitionTime":"2026-01-09T13:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.560314 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:46 crc kubenswrapper[4919]: E0109 13:30:46.569101 4919 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a043b745-924a-464c-80aa-f4df877f55bf\\\",\\\"systemUUID\\\":\\\"4cea77be-9aeb-4181-a0b4-b60e5a362fd9\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:46Z is after 
2025-08-24T17:21:41Z" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.572800 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.572821 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.572830 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.572845 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.572854 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:46Z","lastTransitionTime":"2026-01-09T13:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:46 crc kubenswrapper[4919]: W0109 13:30:46.580520 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a11a9b6_2419_4f04_b35e_ba296d70b705.slice/crio-dc66ccfb1667d3e0c668f7bdf2a6d268828f7f4ca9f23f61f44b5e91066afa4c WatchSource:0}: Error finding container dc66ccfb1667d3e0c668f7bdf2a6d268828f7f4ca9f23f61f44b5e91066afa4c: Status 404 returned error can't find the container with id dc66ccfb1667d3e0c668f7bdf2a6d268828f7f4ca9f23f61f44b5e91066afa4c Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.589326 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a11a9b6-2419-4f04-b35e-ba296d70b705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w74hl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:46Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:46 crc kubenswrapper[4919]: E0109 13:30:46.590286 4919 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a043b745-924a-464c-80aa-f4df877f55bf\\\",\\\"systemUUID\\\":\\\"4cea77be-9aeb-4181-a0b4-b60e5a362fd9\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:46Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.602088 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.602135 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.602165 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.602187 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.602202 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:46Z","lastTransitionTime":"2026-01-09T13:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.612381 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:46Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:46 crc kubenswrapper[4919]: E0109 13:30:46.622877 4919 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a043b745-924a-464c-80aa-f4df877f55bf\\\",\\\"systemUUID\\\":\\\"4
cea77be-9aeb-4181-a0b4-b60e5a362fd9\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:46Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.628179 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.628241 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.628254 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.628271 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.628282 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:46Z","lastTransitionTime":"2026-01-09T13:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.632028 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f66189f627ddedba6b2bcb45f145ef53485a9b0b7401ca5fa09363145fce81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c247b3e6a38c79833e097898da1fe92f3dabc7dcd7b71a618506f054f4fd9c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:46Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:46 crc kubenswrapper[4919]: E0109 13:30:46.641230 4919 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a043b745-924a-464c-80aa-f4df877f55bf\\\",\\\"systemUUID\\\":\\\"4cea77be-9aeb-4181-a0b4-b60e5a362fd9\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:46Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:46 crc kubenswrapper[4919]: E0109 13:30:46.641353 4919 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.643362 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.643411 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.643424 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.643462 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.643482 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:46Z","lastTransitionTime":"2026-01-09T13:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.648845 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:46Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.668583 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d6c98ae6aa3041e47bac4782d73902dc39137b2fc8550efce479bfb9cdf37b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:46Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.726470 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:46Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.741879 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9z7cc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1115c0ba-16d5-4e81-a4b4-07ba7f360825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkvnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9z7cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:46Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.746502 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.746528 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.746536 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.746550 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.746561 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:46Z","lastTransitionTime":"2026-01-09T13:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.753725 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 13:30:46 crc kubenswrapper[4919]: E0109 13:30:46.753845 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.756065 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.756660 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.757628 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.758256 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.758916 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.760702 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.761516 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.762874 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.763797 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.764842 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.765539 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.766344 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.767304 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.767809 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.769422 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.769975 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.771283 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.771683 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.772376 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.773458 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.774043 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.774858 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.776591 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.777228 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.778255 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.778858 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.780185 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.780685 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" 
path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.782792 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.783442 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.783905 4919 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.784004 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.786068 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.786693 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.787095 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.788611 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.789653 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.790198 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.792487 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.793897 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.796448 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.797185 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" 
path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.798309 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.798935 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.799951 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.800588 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.801585 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.802821 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.805677 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.806592 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.807255 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.808467 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.809158 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.809939 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-97zdz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21befbc8-9e98-4557-89af-a116cc8c484c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-97zdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:46Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.810243 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.830956 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9m5lv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:46Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.849663 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.849727 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.849741 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.849761 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.849796 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:46Z","lastTransitionTime":"2026-01-09T13:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.852939 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgw8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srz24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\
\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgw8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:46Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.871803 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a055b487-5d63-4265-ac12-735612354e73\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903fbe8270e684937c51c54aeaaaaf7cfed5e77f6d76d5874210ffcf608a9afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3759051eb43a6806b9525498f376abc060e88ba0d7bc9274e84eaabfdaefd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8db1dada305e10f10cb4fe92cfaa3fc88e2e2ec3338fa31efc240af0888f60c6\\\",
\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b3f68aa173942f9d95a3832234e0bbd5ccacf63cd73b8d110708ebd0978a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d7857918228a0905bb39cd161f6ba21561cf49ce7980a1835e5b3cc68cef3c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T13:30:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0109 13:30:33.398307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 13:30:33.400005 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4019686595/tls.crt::/tmp/serving-cert-4019686595/tls.key\\\\\\\"\\\\nI0109 13:30:43.320927 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 13:30:43.328283 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 13:30:43.328361 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 13:30:43.328421 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 13:30:43.328435 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 13:30:43.338653 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 13:30:43.338748 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 13:30:43.338794 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 13:30:43.338801 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 13:30:43.338808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 13:30:43.339373 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 13:30:43.343716 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2d0c02e83be85aa5925b8f607809b6bbf3db3188f4e165ca42eb04fe694da6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:46Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.887106 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:46Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.917391 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a11a9b6-2419-4f04-b35e-ba296d70b705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w74hl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:46Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.946322 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa212ada-bfb5-4aeb-94bb-acf1f0afe319\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a58a444435270a569f4e25649cbfb81bb34700db968dcba7d85b9fdea9006bc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://210c615c5bcdaca954329d575b66f271921f54cd58ff6d75a90d5fc64bd11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dbb1933e5d51eec9dc5963fca537d46986cb88929f55683de676da7f0636133\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6694df4fa2976e9e741fe48d2f16c0b63b29ba80500e4238fc0ff6d6dad53bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:46Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.950518 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" event={"ID":"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c","Type":"ContainerStarted","Data":"81e6730e2d4c2375d62687a3919410fd7440dc63ec931c008941ab1882e625c8"} Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.950585 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" event={"ID":"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c","Type":"ContainerStarted","Data":"5b529d70339efbd46e30e992ceb5afe2db97bdde02f50798c9eb61dfa23d7f7e"} Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.950598 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" event={"ID":"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c","Type":"ContainerStarted","Data":"0f0ad7077c0e9a259bdc00b14de02dcc07ed1e1ba78ecc669bcd270627d30b3c"} Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.951586 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.951618 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.951634 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.951652 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.951664 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:46Z","lastTransitionTime":"2026-01-09T13:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.952708 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-9z7cc" event={"ID":"1115c0ba-16d5-4e81-a4b4-07ba7f360825","Type":"ContainerStarted","Data":"6522bc54695ade8637b22dbf9074c82430ce4c8deb8d5d9631933a1d49d5b4cd"} Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.954248 4919 generic.go:334] "Generic (PLEG): container finished" podID="4a11a9b6-2419-4f04-b35e-ba296d70b705" containerID="4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f" exitCode=0 Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.954301 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" event={"ID":"4a11a9b6-2419-4f04-b35e-ba296d70b705","Type":"ContainerDied","Data":"4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f"} Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.954346 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" event={"ID":"4a11a9b6-2419-4f04-b35e-ba296d70b705","Type":"ContainerStarted","Data":"dc66ccfb1667d3e0c668f7bdf2a6d268828f7f4ca9f23f61f44b5e91066afa4c"} Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.956164 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kgw8v" event={"ID":"11e19b4a-0888-460f-bf97-5dd0ddda6e8c","Type":"ContainerStarted","Data":"3890ccf2af2b3f8c6648e7b36c716d7b9ef45d6821c5c7162c455641399184a6"} Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.956192 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kgw8v" event={"ID":"11e19b4a-0888-460f-bf97-5dd0ddda6e8c","Type":"ContainerStarted","Data":"638f1e9f1a0e6a1efb7f9afc3bac08438f95bf5749ac4ee869695f116cdb602b"} Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.957802 4919 generic.go:334] "Generic (PLEG): container finished" podID="21befbc8-9e98-4557-89af-a116cc8c484c" containerID="06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf" exitCode=0 Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.957902 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-97zdz" event={"ID":"21befbc8-9e98-4557-89af-a116cc8c484c","Type":"ContainerDied","Data":"06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf"} Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.957937 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-97zdz" event={"ID":"21befbc8-9e98-4557-89af-a116cc8c484c","Type":"ContainerStarted","Data":"95b58d9d16c1b091d0be696376799ce8c348092d87bfafe08f46448fc70de08c"} Jan 09 13:30:46 crc kubenswrapper[4919]: E0109 13:30:46.966499 4919 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.971924 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:46Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.985594 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f66189f627ddedba6b2bcb45f145ef53485a9b0b7401ca5fa09363145fce81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c247b3e6a38c79833e097898da1fe92f3dabc7dcd7b71a618506f054f4fd9c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:46Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:46 crc kubenswrapper[4919]: I0109 13:30:46.998238 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:46Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.011167 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9z7cc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1115c0ba-16d5-4e81-a4b4-07ba7f360825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6522bc54695ade8637b22dbf9074c82430ce4c8deb8d5d9631933a1d49d5b4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkvnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9z7cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:47Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.025480 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-97zdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21befbc8-9e98-4557-89af-a116cc8c484c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\
\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"po
dIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-97zdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:47Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.037886 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d6c98ae6aa3041e47bac4782d73902dc39137b2fc8550efce479bfb9cdf37b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:47Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.053085 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:47Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.066301 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.066344 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.066354 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.066375 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.066389 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:47Z","lastTransitionTime":"2026-01-09T13:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.067913 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:47Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.081036 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6730e2d4c2375d62687a3919410fd7440dc63ec931c008941ab1882e625c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b529d70339efbd46e30e992ceb5afe2db97bdde02f50798c9eb61dfa23d7f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9m5lv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:47Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.099433 4919 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-kgw8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3890ccf2af2b3f8c6648e7b36c716d7b9ef45d6821c5c7162c455641399184a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srz24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-kgw8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:47Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.119437 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a055b487-5d63-4265-ac12-735612354e73\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903fbe8270e684937c51c54aeaaaaf7cfed5e77f6d76d5874210ffcf608a9afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3759051eb43a6806b9525498f376abc060e88ba0d7bc9274e84eaabfdaefd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8db1dada305e10f10cb4fe92cfaa3fc88e2e2ec3338fa31efc240af0888f60c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc27
6e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b3f68aa173942f9d95a3832234e0bbd5ccacf63cd73b8d110708ebd0978a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d7857918228a0905bb39cd161f6ba21561cf49ce7980a1835e5b3cc68cef3c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T13:30:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0109 13:30:33.398307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 13:30:33.400005 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4019686595/tls.crt::/tmp/serving-cert-4019686595/tls.key\\\\\\\"\\\\nI0109 13:30:43.320927 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 13:30:43.328283 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 13:30:43.328361 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 13:30:43.328421 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 13:30:43.328435 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 13:30:43.338653 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 13:30:43.338748 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 13:30:43.338794 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 13:30:43.338801 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 13:30:43.338808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 13:30:43.339373 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 13:30:43.343716 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2d0c02e83be85aa5925b8f607809b6bbf3db3188f4e165ca42eb04fe694da6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:47Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.132928 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:47Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.146963 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f66189f627ddedba6b2bcb45f145ef53485a9b0b7401ca5fa09363145fce81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c247b3e6a38c79833e097898da1fe92f3dabc7dcd7b71a618506f054f4fd9c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:47Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.165962 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a11a9b6-2419-4f04-b35e-ba296d70b705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w74hl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:47Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.169528 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.169559 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.169570 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.169586 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.169599 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:47Z","lastTransitionTime":"2026-01-09T13:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.181526 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa212ada-bfb5-4aeb-94bb-acf1f0afe319\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a58a444435270a569f4e25649cbfb81bb34700db968dcba7d85b9fdea9006bc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://210c615c5bcdaca954329d575b66f271921f54cd58ff6d75a90d5fc64bd11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dbb1933e5d51eec9dc5963fca537d46986cb88929f55683de676da7f0636133\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6694df4fa2976e9e741fe48d2f16c0b63b29ba80500e4238fc0ff6d6dad53bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:47Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.201553 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:47Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.272188 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.272253 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.272268 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.272286 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.272296 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:47Z","lastTransitionTime":"2026-01-09T13:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.374889 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.374925 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.374934 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.374952 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.374961 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:47Z","lastTransitionTime":"2026-01-09T13:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.478226 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.478270 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.478289 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.478309 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.478325 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:47Z","lastTransitionTime":"2026-01-09T13:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.582080 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.582126 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.582136 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.582156 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.582168 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:47Z","lastTransitionTime":"2026-01-09T13:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.684712 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.684763 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.684774 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.684796 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.684810 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:47Z","lastTransitionTime":"2026-01-09T13:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.751580 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.751635 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 13:30:47 crc kubenswrapper[4919]: E0109 13:30:47.752230 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 13:30:47 crc kubenswrapper[4919]: E0109 13:30:47.752355 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.788619 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.788666 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.788678 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.788695 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.788707 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:47Z","lastTransitionTime":"2026-01-09T13:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.891988 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.892021 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.892031 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.892046 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.892056 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:47Z","lastTransitionTime":"2026-01-09T13:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.967430 4919 generic.go:334] "Generic (PLEG): container finished" podID="21befbc8-9e98-4557-89af-a116cc8c484c" containerID="660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260" exitCode=0 Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.967483 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-97zdz" event={"ID":"21befbc8-9e98-4557-89af-a116cc8c484c","Type":"ContainerDied","Data":"660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260"} Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.971608 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"5c494f87fb3dc8455ed5de9b4073f1b69d9d96c3b2a4d260cb5d57b9df1e825e"} Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.977995 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" event={"ID":"4a11a9b6-2419-4f04-b35e-ba296d70b705","Type":"ContainerStarted","Data":"ac368c1c0799b3fe1830e01a310247ed45949ab217cf83d98b487a12e482157e"} Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.978068 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" event={"ID":"4a11a9b6-2419-4f04-b35e-ba296d70b705","Type":"ContainerStarted","Data":"eecea89afe25c537de66df075217db267c79f45aecda1f91183fad854aa80c17"} Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.978087 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" event={"ID":"4a11a9b6-2419-4f04-b35e-ba296d70b705","Type":"ContainerStarted","Data":"4f97e00dbff0bc7856400366616b2dfea1022f2dc151e8f1d7cbc5d377639b05"} Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.978102 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" event={"ID":"4a11a9b6-2419-4f04-b35e-ba296d70b705","Type":"ContainerStarted","Data":"95d6b21d85be97a07f83c996deca344aba3e2bd0b249d1ce02f15db105649bcc"} Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.978119 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" event={"ID":"4a11a9b6-2419-4f04-b35e-ba296d70b705","Type":"ContainerStarted","Data":"15885cccea72f90cf3317dca9a65041611aa4b8de069779e15d43ad39112f1d4"} Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.983095 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9z7cc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1115c0ba-16d5-4e81-a4b4-07ba7f360825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6522bc54695ade8637b22dbf9074c82430ce4c8deb8d5d9631933a1d49d5b4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkvnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9z7cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:47Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.995247 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.995304 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.995320 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:47 crc kubenswrapper[4919]: I0109 13:30:47.995342 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:47 
crc kubenswrapper[4919]: I0109 13:30:47.995363 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:47Z","lastTransitionTime":"2026-01-09T13:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.002521 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-97zdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21befbc8-9e98-4557-89af-a116cc8c484c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-97zdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:48Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.021766 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d6c98ae6aa3041e47bac4782d73902dc39137b2fc8550efce479bfb9cdf37b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:48Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.037140 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:48Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.051550 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:48Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.066775 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6730e2d4c2375d62687a3919410fd7440dc63ec931c008941ab1882e625c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b529d70339efbd46e30e992ceb5afe2db97bdde02f50798c9eb61dfa23d7f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9m5lv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:48Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.083857 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgw8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3890ccf2af2b3f8c6648e7b36c716d7b9ef45d6821c5c7162c455641399184a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-c
ni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srz24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgw8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:48Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.098034 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.098087 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.098097 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.098117 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.098130 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:48Z","lastTransitionTime":"2026-01-09T13:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.103174 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a055b487-5d63-4265-ac12-735612354e73\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903fbe8270e684937c51c54aeaaaaf7cfed5e77f6d76d5874210ffcf608a9afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3759051eb43a6806b9525498f376abc060e88ba0d7bc9274e84eaabfdaefd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8db1dada305e10f10cb4fe92cfaa3fc88e2e2ec3338fa31efc240af0888f60c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b3f68aa173942f9d95a3832234e0bbd5ccacf63cd73b8d110708ebd0978a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d7857918228a0905bb39cd161f6ba21561cf49ce7980a1835e5b3cc68cef3c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T13:30:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0109 13:30:33.398307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 13:30:33.400005 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4019686595/tls.crt::/tmp/serving-cert-4019686595/tls.key\\\\\\\"\\\\nI0109 13:30:43.320927 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 13:30:43.328283 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 13:30:43.328361 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 13:30:43.328421 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 13:30:43.328435 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 13:30:43.338653 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 13:30:43.338748 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 13:30:43.338794 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 13:30:43.338801 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 13:30:43.338808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 13:30:43.339373 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 13:30:43.343716 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2d0c02e83be85aa5925b8f607809b6bbf3db3188f4e165ca42eb04fe694da6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:48Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.120160 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:48Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.137459 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f66189f627ddedba6b2bcb45f145ef53485a9b0b7401ca5fa09363145fce81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c247b3e6a38c79833e097898da1fe92f3dabc7dcd7b71a618506f054f4fd9c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:48Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.160735 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a11a9b6-2419-4f04-b35e-ba296d70b705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w74hl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:48Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.175964 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa212ada-bfb5-4aeb-94bb-acf1f0afe319\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a58a444435270a569f4e25649cbfb81bb34700db968dcba7d85b9fdea9006bc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://210c615c5bcdaca954329d575b66f271921f54cd58ff6d75a90d5fc64bd11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de25971
26bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dbb1933e5d51eec9dc5963fca537d46986cb88929f55683de676da7f0636133\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6694df4fa2976e9e741fe48d2f16c0b63b29ba80500e4238fc0ff6d6dad53bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:48Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.192593 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:48Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.200841 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.200902 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.200931 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.200959 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.200977 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:48Z","lastTransitionTime":"2026-01-09T13:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.207055 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c494f87fb3dc8455ed5de9b4073f1b69d9d96c3b2a4d260cb5d57b9df1e825e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:48Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.224933 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6730e2d4c2375d62687a3919410fd7440dc63ec931c008941ab1882e625c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b529d70339efbd46e30e992ceb5afe2db97bdde02f50798c9eb61dfa23d7f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9m5lv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:48Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.249603 4919 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-kgw8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3890ccf2af2b3f8c6648e7b36c716d7b9ef45d6821c5c7162c455641399184a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srz24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-kgw8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:48Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.265478 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a055b487-5d63-4265-ac12-735612354e73\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903fbe8270e684937c51c54aeaaaaf7cfed5e77f6d76d5874210ffcf608a9afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3759051eb43a6806b9525498f376abc060e88ba0d7bc9274e84eaabfdaefd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8db1dada305e10f10cb4fe92cfaa3fc88e2e2ec3338fa31efc240af0888f60c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc27
6e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b3f68aa173942f9d95a3832234e0bbd5ccacf63cd73b8d110708ebd0978a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d7857918228a0905bb39cd161f6ba21561cf49ce7980a1835e5b3cc68cef3c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T13:30:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0109 13:30:33.398307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 13:30:33.400005 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4019686595/tls.crt::/tmp/serving-cert-4019686595/tls.key\\\\\\\"\\\\nI0109 13:30:43.320927 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 13:30:43.328283 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 13:30:43.328361 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 13:30:43.328421 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 13:30:43.328435 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 13:30:43.338653 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 13:30:43.338748 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 13:30:43.338794 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 13:30:43.338801 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 13:30:43.338808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 13:30:43.339373 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 13:30:43.343716 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2d0c02e83be85aa5925b8f607809b6bbf3db3188f4e165ca42eb04fe694da6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:48Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.284012 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:48Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.300676 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f66189f627ddedba6b2bcb45f145ef53485a9b0b7401ca5fa09363145fce81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c247b3e6a38c79833e097898da1fe92f3dabc7dcd7b71a618506f054f4fd9c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:48Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.304406 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.304429 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.304440 4919 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.304460 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.304477 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:48Z","lastTransitionTime":"2026-01-09T13:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.320571 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a11a9b6-2419-4f04-b35e-ba296d70b705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w74hl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:48Z 
is after 2025-08-24T17:21:41Z" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.335270 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa212ada-bfb5-4aeb-94bb-acf1f0afe319\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a58a444435270a569f4e25649cbfb81bb34700db968dcba7d85b9fdea9006bc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://210c615c5bcdaca954329d575b66f271921f54cd58ff6d75a90d5fc64bd11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dbb1933e5d51eec9dc5963fca537d46986cb88929f55683de676da7f0636133\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6694df4fa2976e9e741fe48d2f16c0b63b29ba80500e4238fc0ff6d6dad53bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:48Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.349122 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:48Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.361970 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9z7cc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1115c0ba-16d5-4e81-a4b4-07ba7f360825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6522bc54695ade8637b22dbf9074c82430ce4c8deb8d5d9631933a1d49d5b4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkvnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9z7cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-09T13:30:48Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.396011 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-97zdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21befbc8-9e98-4557-89af-a116cc8c484c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-97zdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:48Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.407521 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.407575 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.407587 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.407607 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.407619 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:48Z","lastTransitionTime":"2026-01-09T13:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.421066 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d6c98ae6aa3041e47bac4782d73902dc39137b2fc8550efce479bfb9cdf37b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:48Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.423584 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.423752 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 13:30:48 crc kubenswrapper[4919]: E0109 13:30:48.423819 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-09 13:30:52.423796967 +0000 UTC m=+31.971636417 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:30:48 crc kubenswrapper[4919]: E0109 13:30:48.423860 4919 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 09 13:30:48 crc kubenswrapper[4919]: E0109 13:30:48.423916 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-09 13:30:52.423905639 +0000 UTC m=+31.971745089 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.424036 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 13:30:48 crc kubenswrapper[4919]: E0109 13:30:48.424285 4919 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 09 13:30:48 crc kubenswrapper[4919]: E0109 13:30:48.424434 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-09 13:30:52.424400321 +0000 UTC m=+31.972239781 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.439752 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:48Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.510773 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.510880 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.510895 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.510916 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.510929 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:48Z","lastTransitionTime":"2026-01-09T13:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.525449 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.525642 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 13:30:48 crc kubenswrapper[4919]: E0109 13:30:48.525695 4919 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 09 13:30:48 crc kubenswrapper[4919]: E0109 13:30:48.525865 4919 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 09 13:30:48 crc kubenswrapper[4919]: E0109 13:30:48.525961 4919 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 13:30:48 crc kubenswrapper[4919]: E0109 13:30:48.526095 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-09 13:30:52.526076515 +0000 UTC m=+32.073915975 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 13:30:48 crc kubenswrapper[4919]: E0109 13:30:48.525866 4919 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 09 13:30:48 crc kubenswrapper[4919]: E0109 13:30:48.526338 4919 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 09 13:30:48 crc kubenswrapper[4919]: E0109 13:30:48.526417 4919 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 13:30:48 crc kubenswrapper[4919]: E0109 13:30:48.526528 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-09 13:30:52.526515926 +0000 UTC m=+32.074355396 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.613426 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.613472 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.613484 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.613504 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.613517 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:48Z","lastTransitionTime":"2026-01-09T13:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.716757 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.716806 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.716822 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.716842 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.716858 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:48Z","lastTransitionTime":"2026-01-09T13:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.751540 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 13:30:48 crc kubenswrapper[4919]: E0109 13:30:48.751712 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.820551 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.820599 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.820615 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.820634 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.820648 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:48Z","lastTransitionTime":"2026-01-09T13:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.923048 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.923096 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.923108 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.923124 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.923136 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:48Z","lastTransitionTime":"2026-01-09T13:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.986372 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" event={"ID":"4a11a9b6-2419-4f04-b35e-ba296d70b705","Type":"ContainerStarted","Data":"ec868806819a987b7e96dbe674677a084fd3c8bbb3bdae6dd584d62df43b78f0"} Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.989320 4919 generic.go:334] "Generic (PLEG): container finished" podID="21befbc8-9e98-4557-89af-a116cc8c484c" containerID="cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4" exitCode=0 Jan 09 13:30:48 crc kubenswrapper[4919]: I0109 13:30:48.989351 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-97zdz" event={"ID":"21befbc8-9e98-4557-89af-a116cc8c484c","Type":"ContainerDied","Data":"cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4"} Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.009731 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a055b487-5d63-4265-ac12-735612354e73\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903fbe8270e684937c51c54aeaaaaf7cfed5e77f6d76d5874210ffcf608a9afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3759051eb43a6806b9525498f376abc060e88ba0d7bc9274e84eaabfdaefd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8db1dada305e10f10cb4fe92cfaa3fc88e2e2ec3338fa31efc240af0888f60c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b3f68aa173942f9d95a3832234e0bbd5ccacf63cd73b8d110708ebd0978a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d7857918228a0905bb39cd161f6ba21561cf49ce7980a1835e5b3cc68cef3c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T13:30:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0109 13:30:33.398307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 13:30:33.400005 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4019686595/tls.crt::/tmp/serving-cert-4019686595/tls.key\\\\\\\"\\\\nI0109 13:30:43.320927 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 13:30:43.328283 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 13:30:43.328361 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 13:30:43.328421 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 13:30:43.328435 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 13:30:43.338653 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 13:30:43.338748 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 13:30:43.338794 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 13:30:43.338801 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 13:30:43.338808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 13:30:43.339373 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 13:30:43.343716 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2d0c02e83be85aa5925b8f607809b6bbf3db3188f4e165ca42eb04fe694da6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:49Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.026317 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.026366 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.026382 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.026403 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.026417 4919 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:49Z","lastTransitionTime":"2026-01-09T13:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.031147 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c494f87fb3dc8455ed5de9b4073f1b69d9d96c3b2a4d260cb5d57b9df1e825e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:49Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.038097 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-9bzs4"] Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.038511 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-9bzs4" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.040492 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.040612 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.040738 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.041645 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.049305 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6730e2d4c2375d62687a3919410fd7440dc63ec931c008941ab1882e625c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b529d70339efbd46e30e992ceb5afe2db97bdde02f50798c9eb61dfa23d7f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"
volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9m5lv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:49Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.070164 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgw8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3890ccf2af2b3f8c6648e7b36c716d7b9ef45d6821c5c7162c455641399184a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\
"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srz24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgw8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:49Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.087265 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa212ada-bfb5-4aeb-94bb-acf1f0afe319\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a58a444435270a569f4e25649cbfb81bb34700db968dcba7d85b9fdea9006bc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://210c615c5bcdaca954329d575b66f271921f54cd58ff6d75a90d5fc64bd11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dbb1933e5d51eec9dc5963fca537d46986cb88929f55683de676da7f0636133\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6694df4fa2976e9e741fe48d2f16c0b63b29ba80500e4238fc0ff6d6dad53bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:49Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.100088 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:49Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.113170 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f66189f627ddedba6b2bcb45f145ef53485a9b0b7401ca5fa09363145fce81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c247b3e6a38c79833e097898da1fe92f3dabc7dcd7b71a618506f054f4fd9c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:49Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.132019 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a11a9b6-2419-4f04-b35e-ba296d70b705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w74hl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:49Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.132416 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fd1e83cb-5a48-4331-b403-d7a07e8aa67f-host\") pod \"node-ca-9bzs4\" (UID: \"fd1e83cb-5a48-4331-b403-d7a07e8aa67f\") " pod="openshift-image-registry/node-ca-9bzs4" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.132517 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/fd1e83cb-5a48-4331-b403-d7a07e8aa67f-serviceca\") pod \"node-ca-9bzs4\" (UID: \"fd1e83cb-5a48-4331-b403-d7a07e8aa67f\") " pod="openshift-image-registry/node-ca-9bzs4" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.132557 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd6n5\" (UniqueName: \"kubernetes.io/projected/fd1e83cb-5a48-4331-b403-d7a07e8aa67f-kube-api-access-kd6n5\") pod \"node-ca-9bzs4\" (UID: \"fd1e83cb-5a48-4331-b403-d7a07e8aa67f\") " pod="openshift-image-registry/node-ca-9bzs4" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.132429 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.133350 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.133363 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.133380 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.133392 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:49Z","lastTransitionTime":"2026-01-09T13:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.145858 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:49Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.160740 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:49Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.172417 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9z7cc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1115c0ba-16d5-4e81-a4b4-07ba7f360825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6522bc54695ade8637b22dbf9074c82430ce4c8deb8d5d9631933a1d49d5b4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkvnj\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9z7cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:49Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.189113 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-97zdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21befbc8-9e98-4557-89af-a116cc8c484c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-97zdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:49Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.203550 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d6c98ae6aa3041e47bac4782d73902dc39137b2fc8550efce479bfb9cdf37b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:49Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.218550 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c494f87fb3dc8455ed5de9b4073f1b69d9d96c3b2a4d260cb5d57b9df1e825e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:49Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.232928 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/fd1e83cb-5a48-4331-b403-d7a07e8aa67f-serviceca\") pod \"node-ca-9bzs4\" (UID: \"fd1e83cb-5a48-4331-b403-d7a07e8aa67f\") " pod="openshift-image-registry/node-ca-9bzs4" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.232998 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kd6n5\" (UniqueName: \"kubernetes.io/projected/fd1e83cb-5a48-4331-b403-d7a07e8aa67f-kube-api-access-kd6n5\") pod \"node-ca-9bzs4\" (UID: \"fd1e83cb-5a48-4331-b403-d7a07e8aa67f\") " pod="openshift-image-registry/node-ca-9bzs4" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.233057 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fd1e83cb-5a48-4331-b403-d7a07e8aa67f-host\") pod \"node-ca-9bzs4\" (UID: \"fd1e83cb-5a48-4331-b403-d7a07e8aa67f\") " pod="openshift-image-registry/node-ca-9bzs4" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.233126 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fd1e83cb-5a48-4331-b403-d7a07e8aa67f-host\") pod \"node-ca-9bzs4\" (UID: \"fd1e83cb-5a48-4331-b403-d7a07e8aa67f\") " pod="openshift-image-registry/node-ca-9bzs4" Jan 09 13:30:49 
crc kubenswrapper[4919]: I0109 13:30:49.234090 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/fd1e83cb-5a48-4331-b403-d7a07e8aa67f-serviceca\") pod \"node-ca-9bzs4\" (UID: \"fd1e83cb-5a48-4331-b403-d7a07e8aa67f\") " pod="openshift-image-registry/node-ca-9bzs4" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.235932 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.235971 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.235984 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.236009 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.236023 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:49Z","lastTransitionTime":"2026-01-09T13:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.243800 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6730e2d4c2375d62687a3919410fd7440dc63ec931c008941ab1882e625c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b529d70339efbd46e30e992ceb5afe2db97bdde02f50798c9eb61dfa23d7f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9m5lv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:49Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.259686 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgw8v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3890ccf2af2b3f8c6648e7b36c716d7b9ef45d6821c5c7162c455641399184a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srz24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgw8v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:49Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.260257 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kd6n5\" (UniqueName: \"kubernetes.io/projected/fd1e83cb-5a48-4331-b403-d7a07e8aa67f-kube-api-access-kd6n5\") pod \"node-ca-9bzs4\" (UID: \"fd1e83cb-5a48-4331-b403-d7a07e8aa67f\") " pod="openshift-image-registry/node-ca-9bzs4" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.290516 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a055b487-5d63-4265-ac12-735612354e73\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903fbe8270e684937c51c54aeaaaaf7cfed5e77f6d76d5874210ffcf608a9afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3759051eb43a6806b9525498f376abc060e88ba0d7bc9274e84eaabfdaefd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8db1dada305e10f10cb4fe92cfaa3fc88e2e2ec3338fa31efc240af0888f60c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b3f68aa173942f9d95a3832234e0bbd5ccacf63cd73b8d110708ebd0978a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d7857918228a0905bb39cd161f6ba21561cf49ce7980a1835e5b3cc68cef3c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T13:30:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0109 13:30:33.398307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 13:30:33.400005 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4019686595/tls.crt::/tmp/serving-cert-4019686595/tls.key\\\\\\\"\\\\nI0109 13:30:43.320927 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 13:30:43.328283 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 13:30:43.328361 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 13:30:43.328421 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 13:30:43.328435 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 13:30:43.338653 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 13:30:43.338748 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 13:30:43.338794 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 13:30:43.338801 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 13:30:43.338808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 13:30:43.339373 1 genericapiserver.go:533] MuxAndDiscoveryComplete 
has all endpoints registered and discovery information is complete\\\\nF0109 13:30:43.343716 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2d0c02e83be85aa5925b8f607809b6bbf3db3188f4e165ca42eb04fe694da6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:49Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.303942 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:49Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.317724 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f66189f627ddedba6b2bcb45f145ef53485a9b0b7401ca5fa09363145fce81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c247b3e6a38c79833e097898da1fe92f3dabc7dcd7b71a618506f054f4fd9c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:49Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.338567 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.338616 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.338629 4919 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.338650 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.338662 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:49Z","lastTransitionTime":"2026-01-09T13:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.343562 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a11a9b6-2419-4f04-b35e-ba296d70b705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w74hl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:49Z 
is after 2025-08-24T17:21:41Z" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.353914 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bzs4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd1e83cb-5a48-4331-b403-d7a07e8aa67f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kd6n5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bzs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:49Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.355080 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-9bzs4" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.366414 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa212ada-bfb5-4aeb-94bb-acf1f0afe319\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a58a444435270a569f4e25649cbfb81bb34700db968dcba7d85b9fdea9006bc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://210c615c5bcdaca954329d575b66f271921f54cd58ff6d75a90d5fc64bd11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dbb1933e5d51eec9dc5963fca537d46986cb88929f55683de676da7f0636133\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\
"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6694df4fa2976e9e741fe48d2f16c0b63b29ba80500e4238fc0ff6d6dad53bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:49Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:49 crc kubenswrapper[4919]: W0109 13:30:49.369949 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd1e83cb_5a48_4331_b403_d7a07e8aa67f.slice/crio-6efa2a9aa289b0a6b4779e11e5bf42fa76bbeae15394c436ea758ae7201b60f2 WatchSource:0}: Error finding container 6efa2a9aa289b0a6b4779e11e5bf42fa76bbeae15394c436ea758ae7201b60f2: Status 404 returned error can't find the container with id 6efa2a9aa289b0a6b4779e11e5bf42fa76bbeae15394c436ea758ae7201b60f2 Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.382665 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:49Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.397518 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9z7cc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1115c0ba-16d5-4e81-a4b4-07ba7f360825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6522bc54695ade8637b22dbf9074c82430ce4c8deb8d5d9631933a1d49d5b4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkvnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9z7cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:49Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.423382 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-97zdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21befbc8-9e98-4557-89af-a116cc8c484c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-97zdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:49Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.439325 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d6c98ae6aa3041e47bac4782d73902dc39137b2fc8550efce479bfb9cdf37b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:49Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.441562 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.441611 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:49 crc kubenswrapper[4919]: 
I0109 13:30:49.441623 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.441644 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.441660 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:49Z","lastTransitionTime":"2026-01-09T13:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.454992 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:49Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.543830 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.543881 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.543895 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.543920 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.543938 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:49Z","lastTransitionTime":"2026-01-09T13:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.646643 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.646711 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.646726 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.646751 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.646768 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:49Z","lastTransitionTime":"2026-01-09T13:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.749972 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.750024 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.750040 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.750063 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.750079 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:49Z","lastTransitionTime":"2026-01-09T13:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.750642 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.750718 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 13:30:49 crc kubenswrapper[4919]: E0109 13:30:49.750754 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 13:30:49 crc kubenswrapper[4919]: E0109 13:30:49.750922 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.853395 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.853450 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.853467 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.853487 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.853500 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:49Z","lastTransitionTime":"2026-01-09T13:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.956498 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.956567 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.956581 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.956605 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.956621 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:49Z","lastTransitionTime":"2026-01-09T13:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.997728 4919 generic.go:334] "Generic (PLEG): container finished" podID="21befbc8-9e98-4557-89af-a116cc8c484c" containerID="8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4" exitCode=0 Jan 09 13:30:49 crc kubenswrapper[4919]: I0109 13:30:49.997820 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-97zdz" event={"ID":"21befbc8-9e98-4557-89af-a116cc8c484c","Type":"ContainerDied","Data":"8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4"} Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.001189 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-9bzs4" event={"ID":"fd1e83cb-5a48-4331-b403-d7a07e8aa67f","Type":"ContainerStarted","Data":"389ab6b02eef188a2713e27d93c70a64a1ab4ebcfe58634cf5266197b0375bca"} Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.001273 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-9bzs4" event={"ID":"fd1e83cb-5a48-4331-b403-d7a07e8aa67f","Type":"ContainerStarted","Data":"6efa2a9aa289b0a6b4779e11e5bf42fa76bbeae15394c436ea758ae7201b60f2"} Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.019630 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa212ada-bfb5-4aeb-94bb-acf1f0afe319\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a58a444435270a569f4e25649cbfb81bb34700db968dcba7d85b9fdea9006bc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://210c615c5bcdaca954329d575b66f271921f54cd58ff6d75a90d5fc64bd11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dbb1933e5d51eec9dc5963fca537d46986cb88929f55683de676da7f0636133\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6694df4fa2976e9e741fe48d2f16c0b63b29ba80500e4238fc0ff6d6dad53bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:50Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.043996 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:50Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.059988 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.060048 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.060068 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.060099 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.060120 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:50Z","lastTransitionTime":"2026-01-09T13:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.060505 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f66189f627ddedba6b2bcb45f145ef53485a9b0b7401ca5fa09363145fce81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c247b3e6a38c79833e097898da1fe92f3dabc7dcd7b71a618506f054f4fd9c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:50Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.083905 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a11a9b6-2419-4f04-b35e-ba296d70b705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w74hl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:50Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.100610 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bzs4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd1e83cb-5a48-4331-b403-d7a07e8aa67f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kd6n5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bzs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:50Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.115584 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:50Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.138197 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d6c98ae6aa3041e47bac4782d73902dc39137b2fc8550efce479bfb9cdf37b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:50Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.156865 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:50Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.163439 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.163507 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.163530 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.163559 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.163578 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:50Z","lastTransitionTime":"2026-01-09T13:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.174750 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9z7cc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1115c0ba-16d5-4e81-a4b4-07ba7f360825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6522bc54695ade8637b22dbf9074c82430ce4c8deb8d5d9631933a1d49d5b4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkvnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9z7cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:50Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.193982 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-97zdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21befbc8-9e98-4557-89af-a116cc8c484c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-97zdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:50Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.216290 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a055b487-5d63-4265-ac12-735612354e73\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903fbe8270e684937c51c54aeaaaaf7cfed5e77f6d76d5874210ffcf608a9afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3759051eb43a6806b9525498f376abc060e88ba0d7bc9274e84eaabfdaefd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8db1dada305e10f10cb4fe92cfaa3fc88e2e2ec3338fa31efc240af0888f60c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b3f68aa173942f9d95a3832234e0bbd5ccacf63cd73b8d110708ebd0978a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d7857918228a0905bb39cd161f6ba21561cf49ce7980a1835e5b3cc68cef3c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T13:30:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0109 13:30:33.398307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 13:30:33.400005 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4019686595/tls.crt::/tmp/serving-cert-4019686595/tls.key\\\\\\\"\\\\nI0109 13:30:43.320927 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 13:30:43.328283 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 13:30:43.328361 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 13:30:43.328421 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 13:30:43.328435 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 13:30:43.338653 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 13:30:43.338748 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 13:30:43.338794 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 13:30:43.338801 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 13:30:43.338808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 13:30:43.339373 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 13:30:43.343716 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2d0c02e83be85aa5925b8f607809b6bbf3db3188f4e165ca42eb04fe694da6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:50Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.236230 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c494f87fb3dc8455ed5de9b4073f1b69d9d96c3b2a4d260cb5d57b9df1e825e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:50Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.252018 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6730e2d4c2375d62687a3919410fd7440dc63ec931c008941ab1882e625c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b529d70339efbd46e30e992ceb5afe2db97bdde02f50798c9eb61dfa23d7f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9m5lv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:50Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.267653 4919 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.267707 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.267721 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.267746 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.267757 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:50Z","lastTransitionTime":"2026-01-09T13:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.269917 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgw8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3890ccf2af2b3f8c6648e7b36c716d7b9ef45d6821c5c7162c455641399184a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srz24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgw8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:50Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.336600 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:50Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.360155 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:50Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.370939 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.370992 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.371004 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.371025 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.371039 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:50Z","lastTransitionTime":"2026-01-09T13:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.386200 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9z7cc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1115c0ba-16d5-4e81-a4b4-07ba7f360825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6522bc54695ade8637b22dbf9074c82430ce4c8deb8d5d9631933a1d49d5b4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkvnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9z7cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:50Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.406253 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-97zdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21befbc8-9e98-4557-89af-a116cc8c484c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-97zdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:50Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.421790 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d6c98ae6aa3041e47bac4782d73902dc39137b2fc8550efce479bfb9cdf37b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:50Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.434822 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a055b487-5d63-4265-ac12-735612354e73\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903fbe8270e684937c51c54aeaaaaf7cfed5e77f6d76d5874210ffcf608a9afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3759051eb43a6806b9525498f376abc060e88ba0d7bc9274e84eaabfdaefd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8db1dada305e10f10cb4fe92cfaa3fc88e2e2ec3338fa31efc240af0888f60c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b3f68aa173942f9d95a3832234e0bbd5ccacf63cd73b8d110708ebd0978a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d7857918228a0905bb39cd161f6ba21561cf49ce7980a1835e5b3cc68cef3c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T13:30:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0109 13:30:33.398307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 13:30:33.400005 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4019686595/tls.crt::/tmp/serving-cert-4019686595/tls.key\\\\\\\"\\\\nI0109 13:30:43.320927 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 13:30:43.328283 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 13:30:43.328361 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 13:30:43.328421 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 13:30:43.328435 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 13:30:43.338653 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 13:30:43.338748 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 13:30:43.338794 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 13:30:43.338801 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 13:30:43.338808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 13:30:43.339373 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 13:30:43.343716 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2d0c02e83be85aa5925b8f607809b6bbf3db3188f4e165ca42eb04fe694da6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:50Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.447695 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c494f87fb3dc8455ed5de9b4073f1b69d9d96c3b2a4d260cb5d57b9df1e825e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:50Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.460684 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6730e2d4c2375d62687a3919410fd7440dc63ec931c008941ab1882e625c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b529d70339efbd46e30e992ceb5afe2db97bdde02f50798c9eb61dfa23d7f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9m5lv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:50Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.474014 4919 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.474074 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.474085 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.474107 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.474124 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:50Z","lastTransitionTime":"2026-01-09T13:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.474958 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgw8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3890ccf2af2b3f8c6648e7b36c716d7b9ef45d6821c5c7162c455641399184a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srz24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgw8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:50Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.490569 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa212ada-bfb5-4aeb-94bb-acf1f0afe319\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a58a444435270a569f4e25649cbfb81bb34700db968dcba7d85b9fdea9006bc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://210c615c5bcdaca954329d575b66f271921f54cd58ff6d75a90d5fc64bd11d83\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dbb1933e5d51eec9dc5963fca537d46986cb88929f55683de676da7f0636133\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6694df4fa2976e9e741fe48d2f16c0b63b29ba80500e4238fc0ff6d6dad53bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:50Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.505114 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:50Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.517994 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f66189f627ddedba6b2bcb45f145ef53485a9b0b7401ca5fa09363145fce81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c247b3e6a38c79833e097898da1fe92f3dabc7dcd7b71a618506f054f4fd9c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:50Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.536553 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a11a9b6-2419-4f04-b35e-ba296d70b705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w74hl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:50Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.548564 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bzs4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd1e83cb-5a48-4331-b403-d7a07e8aa67f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://389ab6b02eef188a2713e27d93c70a64a1ab4ebcfe58634cf5266197b0375bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kd6n5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\
\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bzs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:50Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.577355 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.577406 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.577416 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.577436 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.577449 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:50Z","lastTransitionTime":"2026-01-09T13:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.604706 4919 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.680481 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.680518 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.680528 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.680544 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.680557 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:50Z","lastTransitionTime":"2026-01-09T13:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.751591 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 13:30:50 crc kubenswrapper[4919]: E0109 13:30:50.751786 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.766891 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a055b487-5d63-4265-ac12-735612354e73\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903fbe8270e684937c51c54aeaaaaf7cfed5e77f6d76d5874210ffcf608a9afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3759051eb43a6806b9525498f376abc060e88ba0d7bc9274e84eaabfdaefd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8db1dada305e10f10cb4fe92cfaa3fc88e2e2ec3338fa31efc240af0888f60c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7
462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b3f68aa173942f9d95a3832234e0bbd5ccacf63cd73b8d110708ebd0978a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d7857918228a0905bb39cd161f6ba21561cf49ce7980a1835e5b3cc68cef3c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T13:30:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0109 13:30:33.398307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 13:30:33.400005 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4019686595/tls.crt::/tmp/serving-cert-4019686595/tls.key\\\\\\\"\\\\nI0109 13:30:43.320927 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 13:30:43.328283 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 13:30:43.328361 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 13:30:43.328421 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 13:30:43.328435 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 13:30:43.338653 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 13:30:43.338748 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 13:30:43.338794 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 13:30:43.338801 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 13:30:43.338808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 13:30:43.339373 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 13:30:43.343716 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2d0c02e83be85aa5925b8f607809b6bbf3db3188f4e165ca42eb04fe694da6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:50Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.778499 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c494f87fb3dc8455ed5de9b4073f1b69d9d96c3b2a4d260cb5d57b9df1e825e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:50Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.783692 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.783756 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.783790 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.783823 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.783841 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:50Z","lastTransitionTime":"2026-01-09T13:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.794611 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6730e2d4c2375d62687a3919410fd7440dc63ec931c008941ab1882e625c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b529d70339efbd46e30e992ceb5afe2db97bdde02f50798c9eb61dfa23d7f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9m5lv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:50Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.810518 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgw8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3890ccf2af2b3f8c6648e7b36c716d7b9ef45d6821c5c7162c455641399184a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srz24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgw8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:50Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.834037 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa212ada-bfb5-4aeb-94bb-acf1f0afe319\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a58a444435270a569f4e25649cbfb81bb34700db968dcba7d85b9fdea9006bc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://210c615c5bcdaca954329d575b66f271921f54cd58ff6d75a90d5fc64bd11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dbb1933e5d51eec9dc5963fca537d46986cb88929f55683de676da7f0636133\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha
256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6694df4fa2976e9e741fe48d2f16c0b63b29ba80500e4238fc0ff6d6dad53bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:50Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.855562 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:50Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.872706 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f66189f627ddedba6b2bcb45f145ef53485a9b0b7401ca5fa09363145fce81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c247b3e6a38c79833e097898da1fe92f3dabc7dcd7b71a618506f054f4fd9c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:50Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.888247 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.888282 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.888291 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.888308 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.888319 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:50Z","lastTransitionTime":"2026-01-09T13:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.902330 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a11a9b6-2419-4f04-b35e-ba296d70b705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0
c4e99cb864969f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w74hl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:50Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.915831 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bzs4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd1e83cb-5a48-4331-b403-d7a07e8aa67f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://389ab6b02eef188a2713e27d93c70a64a1ab4ebcfe58634cf5266197b0375bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kd6n5\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bzs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:50Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.929574 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:50Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.946001 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:50Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.964514 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9z7cc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1115c0ba-16d5-4e81-a4b4-07ba7f360825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6522bc54695ade8637b22dbf9074c82430ce4c8deb8d5d9631933a1d49d5b4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkvnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9z7cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:50Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.989476 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-97zdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21befbc8-9e98-4557-89af-a116cc8c484c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-relea
se\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":
{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-97zdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:50Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.990803 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.990859 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.990878 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.990904 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 09 13:30:50 crc kubenswrapper[4919]: I0109 13:30:50.990924 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:50Z","lastTransitionTime":"2026-01-09T13:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.008247 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d6c98ae6aa3041e47bac4782d73902dc39137b2fc8550efce479bfb9cdf37b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:51Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.011200 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" event={"ID":"4a11a9b6-2419-4f04-b35e-ba296d70b705","Type":"ContainerStarted","Data":"1618a6491a924e7cd28340bea333585a7c7a634ca063183f21574ed23df002cc"} Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.018315 4919 generic.go:334] "Generic (PLEG): container finished" podID="21befbc8-9e98-4557-89af-a116cc8c484c" containerID="458aeaca27fca436a56c5bb1f4180fc11cc5d27c53d33cd5b00f17e0e6f99322" exitCode=0 Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.018392 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-additional-cni-plugins-97zdz" event={"ID":"21befbc8-9e98-4557-89af-a116cc8c484c","Type":"ContainerDied","Data":"458aeaca27fca436a56c5bb1f4180fc11cc5d27c53d33cd5b00f17e0e6f99322"} Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.054552 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a11a9b6-2419-4f04-b35e-ba296d70b705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773
257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/
log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\
\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w74hl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:51Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.069561 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bzs4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd1e83cb-5a48-4331-b403-d7a07e8aa67f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://389ab6b02eef188a2713e27d93c70a64a1ab4ebcfe58634cf5266197b0375bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kd6n5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bzs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:51Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.087831 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa212ada-bfb5-4aeb-94bb-acf1f0afe319\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a58a444435270a569f4e25649cbfb81bb34700db968dcba7d85b9fdea9006bc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://210c615c5bcdaca954329d575b66f271921f54cd58ff6d75a90d5fc64bd11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dbb1933e5d51eec9dc5963fca537d46986cb88929f55683de676da7f0636133\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6694df4fa2976e9e741fe48d2f16c0b63b29ba80500e4238fc0ff6d6dad53bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:51Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.104533 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.104580 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.104590 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.104607 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.104619 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:51Z","lastTransitionTime":"2026-01-09T13:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.104758 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:51Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.123146 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f66189f627ddedba6b2bcb45f145ef53485a9b0b7401ca5fa09363145fce81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c247b3e6a38c79833e097898da1fe92f3dabc7dcd7b71a618506f054f4fd9c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:51Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.141946 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:51Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.159577 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d6c98ae6aa3041e47bac4782d73902dc39137b2fc8550efce479bfb9cdf37b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:51Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.177671 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:51Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.195642 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9z7cc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1115c0ba-16d5-4e81-a4b4-07ba7f360825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6522bc54695ade8637b22dbf9074c82430ce4c8deb8d5d9631933a1d49d5b4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkvnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9z7cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:51Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.207958 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.208014 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.208032 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.208058 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.208071 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:51Z","lastTransitionTime":"2026-01-09T13:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.216369 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-97zdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21befbc8-9e98-4557-89af-a116cc8c484c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://458aeaca27fca436a56c5bb1f4180fc11cc5d27c53d33cd5b00f17e0e6f99322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458aeaca27fca436a56c5bb1f4180fc11cc5d27c53d33cd5b00f17e0e6f99322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-97zdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:51Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.235840 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6730e2d4c2375d62687a3919410fd7440dc63ec931c008941ab1882e625c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b529d70339efbd46e30e992ceb5afe2db97bdde02f50798c9eb61dfa23d7f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9m5lv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:51Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.251152 4919 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-kgw8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3890ccf2af2b3f8c6648e7b36c716d7b9ef45d6821c5c7162c455641399184a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srz24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-kgw8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:51Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.277176 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a055b487-5d63-4265-ac12-735612354e73\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903fbe8270e684937c51c54aeaaaaf7cfed5e77f6d76d5874210ffcf608a9afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3759051eb43a6806b9525498f376abc060e88ba0d7bc9274e84eaabfdaefd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8db1dada305e10f10cb4fe92cfaa3fc88e2e2ec3338fa31efc240af0888f60c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc27
6e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b3f68aa173942f9d95a3832234e0bbd5ccacf63cd73b8d110708ebd0978a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d7857918228a0905bb39cd161f6ba21561cf49ce7980a1835e5b3cc68cef3c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T13:30:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0109 13:30:33.398307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 13:30:33.400005 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4019686595/tls.crt::/tmp/serving-cert-4019686595/tls.key\\\\\\\"\\\\nI0109 13:30:43.320927 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 13:30:43.328283 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 13:30:43.328361 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 13:30:43.328421 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 13:30:43.328435 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 13:30:43.338653 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 13:30:43.338748 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 13:30:43.338794 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 13:30:43.338801 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 13:30:43.338808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 13:30:43.339373 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 13:30:43.343716 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2d0c02e83be85aa5925b8f607809b6bbf3db3188f4e165ca42eb04fe694da6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:51Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.310986 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.311055 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.311069 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.311092 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.311103 4919 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:51Z","lastTransitionTime":"2026-01-09T13:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.316848 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c494f87fb3dc8455ed5de9b4073f1b69d9d96c3b2a4d260cb5d57b9df1e825e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:51Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.414593 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.415111 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.415126 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.415149 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.415164 4919 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:51Z","lastTransitionTime":"2026-01-09T13:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.518354 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.518408 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.518418 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.518439 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.518451 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:51Z","lastTransitionTime":"2026-01-09T13:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.622496 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.622567 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.622588 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.622618 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.622640 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:51Z","lastTransitionTime":"2026-01-09T13:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.726921 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.726971 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.726984 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.727003 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.727018 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:51Z","lastTransitionTime":"2026-01-09T13:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.751779 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 13:30:51 crc kubenswrapper[4919]: E0109 13:30:51.751989 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.752135 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 13:30:51 crc kubenswrapper[4919]: E0109 13:30:51.752338 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.830388 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.830450 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.830463 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.830487 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.830504 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:51Z","lastTransitionTime":"2026-01-09T13:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.934099 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.934194 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.934270 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.934312 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:51 crc kubenswrapper[4919]: I0109 13:30:51.934344 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:51Z","lastTransitionTime":"2026-01-09T13:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.028666 4919 generic.go:334] "Generic (PLEG): container finished" podID="21befbc8-9e98-4557-89af-a116cc8c484c" containerID="d3ea2572c59e697886d3666518ed78dcbe1bde1dd6fc6255e3846d1b34f26cdb" exitCode=0 Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.028757 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-97zdz" event={"ID":"21befbc8-9e98-4557-89af-a116cc8c484c","Type":"ContainerDied","Data":"d3ea2572c59e697886d3666518ed78dcbe1bde1dd6fc6255e3846d1b34f26cdb"} Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.037563 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.037626 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.037652 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.037684 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.037735 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:52Z","lastTransitionTime":"2026-01-09T13:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.058373 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:52Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.080312 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d6c98ae6aa3041e47bac4782d73902dc39137b2fc8550efce479bfb9cdf37b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:52Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.108400 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:52Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.125734 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9z7cc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1115c0ba-16d5-4e81-a4b4-07ba7f360825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6522bc54695ade8637b22dbf9074c82430ce4c8deb8d5d9631933a1d49d5b4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkvnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9z7cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:52Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.142228 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.142281 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.142292 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.142311 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.142323 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:52Z","lastTransitionTime":"2026-01-09T13:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.145171 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-97zdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21befbc8-9e98-4557-89af-a116cc8c484c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]},{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},
\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://458aeaca27fca436a56c5bb1f4180fc11cc5d27c53d33cd5b00f17e0e6f99322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458aeaca27fca436a56c5bb1f4180fc11cc5d27c53d33cd5b00f17e0e6f99322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3ea2572c59e697886d3666518ed78dcbe1bde1dd6fc6255e3846d1b34f26cdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ea2572c59e697886d3666518ed78dcbe1bde1dd6fc6255e3846d1b34f26cdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-97zdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:52Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.161920 
4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a055b487-5d63-4265-ac12-735612354e73\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903fbe8270e684937c51c54aeaaaaf7cfed5e77f6d76d5874210ffcf608a9afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3759051eb43a6806b9525498f376abc060e88ba0d7bc9274e84eaabfdaefd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8db1dada305e10f10cb4fe92cfaa3fc88e2e2ec3338fa31efc240af0888f60c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:
30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b3f68aa173942f9d95a3832234e0bbd5ccacf63cd73b8d110708ebd0978a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d7857918228a0905bb39cd161f6ba21561cf49ce7980a1835e5b3cc68cef3c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T13:30:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0109 13:30:33.398307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 13:30:33.400005 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4019686595/tls.crt::/tmp/serving-cert-4019686595/tls.key\\\\\\\"\\\\nI0109 13:30:43.320927 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 13:30:43.328283 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 13:30:43.328361 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 13:30:43.328421 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 13:30:43.328435 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 13:30:43.338653 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 13:30:43.338748 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 13:30:43.338794 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 13:30:43.338801 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 13:30:43.338808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 13:30:43.339373 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 13:30:43.343716 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2d0c02e83be85aa5925b8f607809b6bbf3db3188f4e165ca42eb04fe694da6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:52Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.174948 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c494f87fb3dc8455ed5de9b4073f1b69d9d96c3b2a4d260cb5d57b9df1e825e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:52Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.186163 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6730e2d4c2375d62687a3919410fd7440dc63ec931c008941ab1882e625c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b529d70339efbd46e30e992ceb5afe2db97bdde02f50798c9eb61dfa23d7f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9m5lv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:52Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.203735 4919 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-kgw8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3890ccf2af2b3f8c6648e7b36c716d7b9ef45d6821c5c7162c455641399184a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srz24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-kgw8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:52Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.218509 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa212ada-bfb5-4aeb-94bb-acf1f0afe319\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a58a444435270a569f4e25649cbfb81bb34700db968dcba7d85b9fdea9006bc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://210c615c5bcdaca954329d575b66f271921f54cd58ff6d75a90d5fc64bd11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dbb1933e5d51eec9dc5963fca537d46986cb88929f55683de676da7f0636133\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6694df4fa2976e9e741fe48d2f16c0b63b29ba80500e4238fc0ff6d6dad53bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:52Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.236116 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:52Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.245483 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.245545 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.245565 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.245592 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.245610 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:52Z","lastTransitionTime":"2026-01-09T13:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.258675 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f66189f627ddedba6b2bcb45f145ef53485a9b0b7401ca5fa09363145fce81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c247b3e6a38c79833e097898da1fe92f3dabc7dcd7b71a618506f054f4fd9c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:52Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.282168 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a11a9b6-2419-4f04-b35e-ba296d70b705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w74hl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:52Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.299335 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bzs4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd1e83cb-5a48-4331-b403-d7a07e8aa67f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://389ab6b02eef188a2713e27d93c70a64a1ab4ebcfe58634cf5266197b0375bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kd6n5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\
\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bzs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:52Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.348076 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.348138 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.348148 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.348167 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.348178 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:52Z","lastTransitionTime":"2026-01-09T13:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.451194 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.451256 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.451268 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.451290 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.451305 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:52Z","lastTransitionTime":"2026-01-09T13:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.469102 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 13:30:52 crc kubenswrapper[4919]: E0109 13:30:52.469279 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-09 13:31:00.469250107 +0000 UTC m=+40.017089567 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.469673 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.469773 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 13:30:52 crc kubenswrapper[4919]: E0109 13:30:52.469833 4919 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 09 13:30:52 crc kubenswrapper[4919]: E0109 13:30:52.469879 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-09 13:31:00.469869382 +0000 UTC m=+40.017708842 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 09 13:30:52 crc kubenswrapper[4919]: E0109 13:30:52.469943 4919 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 09 13:30:52 crc kubenswrapper[4919]: E0109 13:30:52.469991 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-09 13:31:00.469978044 +0000 UTC m=+40.017817504 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.554838 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.554903 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.554921 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.554950 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.554968 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:52Z","lastTransitionTime":"2026-01-09T13:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.570554 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.570683 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 13:30:52 crc kubenswrapper[4919]: E0109 13:30:52.570831 4919 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 09 13:30:52 crc kubenswrapper[4919]: E0109 13:30:52.570881 4919 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 09 13:30:52 crc kubenswrapper[4919]: E0109 13:30:52.570903 4919 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 13:30:52 crc kubenswrapper[4919]: E0109 13:30:52.571002 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-01-09 13:31:00.570971343 +0000 UTC m=+40.118810833 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 13:30:52 crc kubenswrapper[4919]: E0109 13:30:52.571005 4919 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 09 13:30:52 crc kubenswrapper[4919]: E0109 13:30:52.571068 4919 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 09 13:30:52 crc kubenswrapper[4919]: E0109 13:30:52.571100 4919 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 13:30:52 crc kubenswrapper[4919]: E0109 13:30:52.571291 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-09 13:31:00.571185408 +0000 UTC m=+40.119025038 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.658437 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.658522 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.658541 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.658571 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.658592 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:52Z","lastTransitionTime":"2026-01-09T13:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.751917 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 13:30:52 crc kubenswrapper[4919]: E0109 13:30:52.752139 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.760941 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.761005 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.761024 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.761050 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.761068 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:52Z","lastTransitionTime":"2026-01-09T13:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.864820 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.864891 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.864920 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.864953 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.864974 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:52Z","lastTransitionTime":"2026-01-09T13:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.972286 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.972342 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.972365 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.972389 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:52 crc kubenswrapper[4919]: I0109 13:30:52.972402 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:52Z","lastTransitionTime":"2026-01-09T13:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:53 crc kubenswrapper[4919]: I0109 13:30:53.085628 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:53 crc kubenswrapper[4919]: I0109 13:30:53.086007 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:53 crc kubenswrapper[4919]: I0109 13:30:53.086207 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:53 crc kubenswrapper[4919]: I0109 13:30:53.086479 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:53 crc kubenswrapper[4919]: I0109 13:30:53.086679 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:53Z","lastTransitionTime":"2026-01-09T13:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:53 crc kubenswrapper[4919]: I0109 13:30:53.189876 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:53 crc kubenswrapper[4919]: I0109 13:30:53.189925 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:53 crc kubenswrapper[4919]: I0109 13:30:53.189940 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:53 crc kubenswrapper[4919]: I0109 13:30:53.189987 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:53 crc kubenswrapper[4919]: I0109 13:30:53.190004 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:53Z","lastTransitionTime":"2026-01-09T13:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:30:53 crc kubenswrapper[4919]: I0109 13:30:53.292934 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:53 crc kubenswrapper[4919]: I0109 13:30:53.292991 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:53 crc kubenswrapper[4919]: I0109 13:30:53.293003 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:53 crc kubenswrapper[4919]: I0109 13:30:53.293027 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:53 crc kubenswrapper[4919]: I0109 13:30:53.293042 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:53Z","lastTransitionTime":"2026-01-09T13:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:53 crc kubenswrapper[4919]: I0109 13:30:53.395829 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:53 crc kubenswrapper[4919]: I0109 13:30:53.395876 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:53 crc kubenswrapper[4919]: I0109 13:30:53.395889 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:53 crc kubenswrapper[4919]: I0109 13:30:53.395907 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:53 crc kubenswrapper[4919]: I0109 13:30:53.395920 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:53Z","lastTransitionTime":"2026-01-09T13:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:53 crc kubenswrapper[4919]: I0109 13:30:53.500508 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:53 crc kubenswrapper[4919]: I0109 13:30:53.500580 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:53 crc kubenswrapper[4919]: I0109 13:30:53.500601 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:53 crc kubenswrapper[4919]: I0109 13:30:53.500633 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:53 crc kubenswrapper[4919]: I0109 13:30:53.500653 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:53Z","lastTransitionTime":"2026-01-09T13:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:30:53 crc kubenswrapper[4919]: I0109 13:30:53.603744 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:53 crc kubenswrapper[4919]: I0109 13:30:53.603784 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:53 crc kubenswrapper[4919]: I0109 13:30:53.603793 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:53 crc kubenswrapper[4919]: I0109 13:30:53.603815 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:53 crc kubenswrapper[4919]: I0109 13:30:53.603824 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:53Z","lastTransitionTime":"2026-01-09T13:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:53 crc kubenswrapper[4919]: I0109 13:30:53.706825 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:53 crc kubenswrapper[4919]: I0109 13:30:53.706867 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:53 crc kubenswrapper[4919]: I0109 13:30:53.706877 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:53 crc kubenswrapper[4919]: I0109 13:30:53.706895 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:53 crc kubenswrapper[4919]: I0109 13:30:53.706907 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:53Z","lastTransitionTime":"2026-01-09T13:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:53 crc kubenswrapper[4919]: I0109 13:30:53.751021 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 13:30:53 crc kubenswrapper[4919]: I0109 13:30:53.751081 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 13:30:53 crc kubenswrapper[4919]: E0109 13:30:53.751170 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 13:30:53 crc kubenswrapper[4919]: E0109 13:30:53.751325 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 13:30:53 crc kubenswrapper[4919]: I0109 13:30:53.809472 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:53 crc kubenswrapper[4919]: I0109 13:30:53.809524 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:53 crc kubenswrapper[4919]: I0109 13:30:53.809536 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:53 crc kubenswrapper[4919]: I0109 13:30:53.809556 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:53 crc kubenswrapper[4919]: I0109 13:30:53.809570 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:53Z","lastTransitionTime":"2026-01-09T13:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:53 crc kubenswrapper[4919]: I0109 13:30:53.912534 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:53 crc kubenswrapper[4919]: I0109 13:30:53.912613 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:53 crc kubenswrapper[4919]: I0109 13:30:53.912632 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:53 crc kubenswrapper[4919]: I0109 13:30:53.912669 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:53 crc kubenswrapper[4919]: I0109 13:30:53.912690 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:53Z","lastTransitionTime":"2026-01-09T13:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.015929 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.016016 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.016044 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.016077 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.016102 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:54Z","lastTransitionTime":"2026-01-09T13:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.041609 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" event={"ID":"4a11a9b6-2419-4f04-b35e-ba296d70b705","Type":"ContainerStarted","Data":"e66a4c2e4be4b07e8aeb532d126abdb54dd09601f18951bc49d72e862002564c"} Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.042289 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.042416 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.048099 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-97zdz" event={"ID":"21befbc8-9e98-4557-89af-a116cc8c484c","Type":"ContainerStarted","Data":"e336d165af12149ba05542f8cebe8a16c6fd66d3317d27c880b8969e7666691d"} Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.066162 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9z7cc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1115c0ba-16d5-4e81-a4b4-07ba7f360825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6522bc54695ade8637b22dbf9074c82430ce4c8deb8d5d9631933a1d49d5b4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkvnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9z7cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:54Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.120633 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.120676 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.120694 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.120717 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.120731 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:54Z","lastTransitionTime":"2026-01-09T13:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.120883 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.122113 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.124038 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-97zdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21befbc8-9e98-4557-89af-a116cc8c484c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://458aeaca27fca436a56c5bb1f4180fc11cc5d27c53d33cd5b00f17e0e6f99322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458aeaca27fca436a56c5bb1f4180fc11cc5d27c53d33cd5b00f17e0e6f99322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3ea2572c59e697886d3666518ed78dcbe1bde1dd6fc6255e3846d1b34f26cdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ea2572c59e697886d3666518ed78dcbe1bde1dd6fc6255e3846d1b34f26cdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-97zdz\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:54Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.143852 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d6c98ae6aa3041e47bac4782d73902dc39137b2fc8550efce479bfb9cdf37b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:54Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.162994 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:54Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.179330 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c494f87fb3dc8455ed5de9b4073f1b69d9d96c3b2a4d260cb5d57b9df1e825e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:54Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.195677 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6730e2d4c2375d62687a3919410fd7440dc63ec931c008941ab1882e625c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b529d70339efbd46e30e992ceb5afe2db97bdde02f50798c9eb61dfa23d7f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9m5lv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:54Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.216638 4919 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-kgw8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3890ccf2af2b3f8c6648e7b36c716d7b9ef45d6821c5c7162c455641399184a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srz24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-kgw8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:54Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.224249 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.224473 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.224651 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.224875 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.225067 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:54Z","lastTransitionTime":"2026-01-09T13:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.240042 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a055b487-5d63-4265-ac12-735612354e73\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903fbe8270e684937c51c54aeaaaaf7cfed5e77f6d76d5874210ffcf608a9afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3759051eb43a6806b9525498f376abc060e88ba0d7bc9274e84eaabfdaefd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8db1dada305e10f10cb4fe92cfaa3fc88e2e2ec3338fa31efc240af0888f60c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b3f68aa173942f9d95a3832234e0bbd5ccacf63cd73b8d110708ebd0978a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d7857918228a0905bb39cd161f6ba21561cf49ce7980a1835e5b3cc68cef3c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T13:30:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0109 13:30:33.398307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 13:30:33.400005 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4019686595/tls.crt::/tmp/serving-cert-4019686595/tls.key\\\\\\\"\\\\nI0109 13:30:43.320927 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 13:30:43.328283 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 13:30:43.328361 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 13:30:43.328421 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 13:30:43.328435 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 13:30:43.338653 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 13:30:43.338748 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 13:30:43.338794 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 13:30:43.338801 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 13:30:43.338808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 13:30:43.339373 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 13:30:43.343716 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2d0c02e83be85aa5925b8f607809b6bbf3db3188f4e165ca42eb04fe694da6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:54Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.259340 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:54Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.274845 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f66189f627ddedba6b2bcb45f145ef53485a9b0b7401ca5fa09363145fce81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c247b3e6a38c79833e097898da1fe92f3dabc7dcd7b71a618506f054f4fd9c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:54Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.297075 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a11a9b6-2419-4f04-b35e-ba296d70b705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f97e00dbff0bc7856400366616b2dfea1022f2dc151e8f1d7cbc5d377639b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eecea89afe25c537de66df075217db267c79f45aecda1f91183fad854aa80c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec868806819a987b7e96dbe674677a084fd3c8bbb3bdae6dd584d62df43b78f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac368c1c0799b3fe1830e01a310247ed45949ab217cf83d98b487a12e482157e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d6b21d85be97a07f83c996deca344aba3e2bd0b249d1ce02f15db105649bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15885cccea72f90cf3317dca9a65041611aa4b8de069779e15d43ad39112f1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e66a4c2e4be4b07e8aeb532d126abdb54dd09601f18951bc49d72e862002564c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1618a6491a924e7cd28340bea333585a7c7a634ca063183f21574ed23df002cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w74hl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:54Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.309921 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bzs4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd1e83cb-5a48-4331-b403-d7a07e8aa67f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://389ab6b02eef188a2713e27d93c70a64a1ab4ebcfe58634cf5266197b0375bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kd6n5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bzs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:54Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.327864 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa212ada-bfb5-4aeb-94bb-acf1f0afe319\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a58a444435270a569f4e25649cbfb81bb34700db968dcba7d85b9fdea9006bc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://210c615c5bcdaca954329d575b66f271921f54cd58ff6d75a90d5fc64bd11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dbb1933e5d51eec9dc5963fca537d46986cb88929f55683de676da7f0636133\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6694df4fa2976e9e741fe48d2f16c0b63b29ba80500e4238fc0ff6d6dad53bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:54Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.328877 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.328951 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.328970 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.328996 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.329015 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:54Z","lastTransitionTime":"2026-01-09T13:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.345090 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:54Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.372977 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d6c98ae6aa3041e47bac4782d73902dc39137b2fc8550efce479bfb9cdf37b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:54Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.388422 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:54Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.403670 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9z7cc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1115c0ba-16d5-4e81-a4b4-07ba7f360825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6522bc54695ade8637b22dbf9074c82430ce4c8deb8d5d9631933a1d49d5b4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkvnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9z7cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:54Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.427701 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-97zdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21befbc8-9e98-4557-89af-a116cc8c484c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e336d165af12149ba05542f8cebe8a16c6fd66d3317d27c880b8969e7666691d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://458aeaca27fca436a56c5bb1f4180fc11cc5d27c53d33cd5b00f17e0e6f99322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458aeaca27fca436a56c5bb1f4180fc11cc5d27c53d33cd5b00f17e0e6f99322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3ea2572c59e697886d3666518ed78dcbe1bde1dd6fc6255e3846d1b34f26cdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ea2572c59e697886d3666518ed78dcbe1bde1dd6fc6255e3846d1b34f26cdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-97zdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:54Z is after 
2025-08-24T17:21:41Z" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.432255 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.432317 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.432339 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.432369 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.432390 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:54Z","lastTransitionTime":"2026-01-09T13:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.443990 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6730e2d4c2375d62687a3919410fd7440dc63ec931c008941ab1882e625c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b529d70339efbd46e30e992ceb5afe2db97bdde02f50798c9eb61dfa23d7f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e
95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9m5lv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:54Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.470800 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgw8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3890ccf2af2b3f8c6648e7b36c716d7b9ef45d6821c5c7162c455641399184a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multu
s-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srz24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgw8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:54Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.492849 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a055b487-5d63-4265-ac12-735612354e73\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903fbe8270e684937c51c54aeaaaaf7cfed5e77f6d76d5874210ffcf608a9afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3759051eb43a6806b9525498f376abc060e88ba0d7bc9274e84eaabfdaefd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8db1dada305e10f10cb4fe92cfaa3fc88e2e2ec3338fa31efc240af0888f60c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b3f68aa173942f9d95a3832234e0bbd5ccacf63cd73b8d110708ebd0978a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d7857918228a0905bb39cd161f6ba21561cf49ce7980a1835e5b3cc68cef3c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T13:30:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0109 13:30:33.398307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 13:30:33.400005 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4019686595/tls.crt::/tmp/serving-cert-4019686595/tls.key\\\\\\\"\\\\nI0109 13:30:43.320927 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 13:30:43.328283 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 13:30:43.328361 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 13:30:43.328421 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 13:30:43.328435 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 13:30:43.338653 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 13:30:43.338748 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 13:30:43.338794 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 13:30:43.338801 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 13:30:43.338808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 13:30:43.339373 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 13:30:43.343716 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2d0c02e83be85aa5925b8f607809b6bbf3db3188f4e165ca42eb04fe694da6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:54Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.511486 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c494f87fb3dc8455ed5de9b4073f1b69d9d96c3b2a4d260cb5d57b9df1e825e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:54Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.535776 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.535852 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.535871 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.535902 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.535923 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:54Z","lastTransitionTime":"2026-01-09T13:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.540002 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a11a9b6-2419-4f04-b35e-ba296d70b705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f97e00dbff0bc7856400366616b2dfea1022f2dc151e8f1d7cbc5d377639b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eecea89afe25c537de66df075217db267c79f45aecda1f91183fad854aa80c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://ec868806819a987b7e96dbe674677a084fd3c8bbb3bdae6dd584d62df43b78f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac368c1c0799b3fe1830e01a310247ed45949ab217cf83d98b487a12e482157e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d6b21d85be97a07f83c996deca344aba3e2bd0b249d1ce02f15db105649bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15885cccea72f90cf3317dca9a65041611aa4b8de069779e15d43ad39112f1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e66a4c2e4be4b07e8aeb532d126abdb54dd09601f18951bc49d72e862002564c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1618a6491a924e7cd28340bea333585a7c7a634ca063183f21574ed23df002cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w74hl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:54Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.560342 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bzs4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd1e83cb-5a48-4331-b403-d7a07e8aa67f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://389ab6b02eef188a2713e27d93c70a64a1ab4ebcfe58634cf5266197b0375bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kd6n5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bzs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:54Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.578529 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa212ada-bfb5-4aeb-94bb-acf1f0afe319\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a58a444435270a569f4e25649cbfb81bb34700db968dcba7d85b9fdea9006bc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://210c615c5bcdaca954329d575b66f271921f54cd58ff6d75a90d5fc64bd11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dbb1933e5d51eec9dc5963fca537d46986cb88929f55683de676da7f0636133\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6694df4fa2976e9e741fe48d2f16c0b63b29ba80500e4238fc0ff6d6dad53bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:54Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.595162 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:54Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.616709 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f66189f627ddedba6b2bcb45f145ef53485a9b0b7401ca5fa09363145fce81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c247b3e6a38c79833e097898da1fe92f3dabc7dcd7b71a618506f054f4fd9c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:54Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.635356 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:54Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.638715 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.638780 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.638801 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.638829 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.638848 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:54Z","lastTransitionTime":"2026-01-09T13:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.742832 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.742900 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.742921 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.742947 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.742962 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:54Z","lastTransitionTime":"2026-01-09T13:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.751344 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 13:30:54 crc kubenswrapper[4919]: E0109 13:30:54.751584 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.807535 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.825322 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:54Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.841725 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d6c98ae6aa3041e47bac4782d73902dc39137b2fc8550efce479bfb9cdf37b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:54Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.845816 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.845861 4919 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.845875 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.845893 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.845911 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:54Z","lastTransitionTime":"2026-01-09T13:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.856190 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:54Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.866994 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9z7cc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1115c0ba-16d5-4e81-a4b4-07ba7f360825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6522bc54695ade8637b22dbf9074c82430ce4c8deb8d5d9631933a1d49d5b4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkvnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9z7cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:54Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.888132 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-97zdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21befbc8-9e98-4557-89af-a116cc8c484c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e336d165af12149ba05542f8cebe8a16c6fd66d3317d27c880b8969e7666691d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://458aeaca27fca436a56c5bb1f4180fc11cc5d27c53d33cd5b00f17e0e6f99322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458aeaca27fca436a56c5bb1f4180fc11cc5d27c53d33cd5b00f17e0e6f99322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3ea2572c59e697886d3666518ed78dcbe1bde1dd6fc6255e3846d1b34f26cdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ea2572c59e697886d3666518ed78dcbe1bde1dd6fc6255e3846d1b34f26cdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-97zdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:54Z is after 
2025-08-24T17:21:41Z" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.904531 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a055b487-5d63-4265-ac12-735612354e73\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903fbe8270e684937c51c54aeaaaaf7cfed5e77f6d76d5874210ffcf608a9afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3759051eb43a6806b9525498f376abc060e88ba0d7bc9274e84eaabfdaefd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8db1dada305e10f10cb4fe92cfaa3fc88e2e2ec3338fa31efc240af0888f60c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b3f68aa173942f9d95a3832234e0bbd5ccacf63cd73b8d110708ebd0978a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d7857918228a0905bb39cd161f6ba21561cf49ce7980a1835e5b3cc68cef3c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T13:30:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0109 13:30:33.398307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 13:30:33.400005 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4019686595/tls.crt::/tmp/serving-cert-4019686595/tls.key\\\\\\\"\\\\nI0109 13:30:43.320927 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 13:30:43.328283 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 13:30:43.328361 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 13:30:43.328421 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 13:30:43.328435 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 13:30:43.338653 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 13:30:43.338748 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 13:30:43.338794 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 13:30:43.338801 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 13:30:43.338808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 13:30:43.339373 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 13:30:43.343716 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2d0c02e83be85aa5925b8f607809b6bbf3db3188f4e165ca42eb04fe694da6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:54Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.918750 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c494f87fb3dc8455ed5de9b4073f1b69d9d96c3b2a4d260cb5d57b9df1e825e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:54Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.932502 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6730e2d4c2375d62687a3919410fd7440dc63ec931c008941ab1882e625c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b529d70339efbd46e30e992ceb5afe2db97bdde02f50798c9eb61dfa23d7f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9m5lv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:54Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.948423 4919 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.948487 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.948508 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.948536 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.948557 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:54Z","lastTransitionTime":"2026-01-09T13:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.948887 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgw8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3890ccf2af2b3f8c6648e7b36c716d7b9ef45d6821c5c7162c455641399184a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srz24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgw8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:54Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.967523 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa212ada-bfb5-4aeb-94bb-acf1f0afe319\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a58a444435270a569f4e25649cbfb81bb34700db968dcba7d85b9fdea9006bc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://210c615c5bcdaca954329d575b66f271921f54cd58ff6d75a90d5fc64bd11d83\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dbb1933e5d51eec9dc5963fca537d46986cb88929f55683de676da7f0636133\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6694df4fa2976e9e741fe48d2f16c0b63b29ba80500e4238fc0ff6d6dad53bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:54Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:54 crc kubenswrapper[4919]: I0109 13:30:54.991917 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:54Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:55 crc kubenswrapper[4919]: I0109 13:30:55.022480 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f66189f627ddedba6b2bcb45f145ef53485a9b0b7401ca5fa09363145fce81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c247b3e6a38c79833e097898da1fe92f3dabc7dcd7b71a618506f054f4fd9c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:55Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:55 crc kubenswrapper[4919]: I0109 13:30:55.043570 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a11a9b6-2419-4f04-b35e-ba296d70b705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f97e00dbff0bc7856400366616b2dfea1022f2dc151e8f1d7cbc5d377639b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eecea89afe25c537de66df075217db267c79f45aecda1f91183fad854aa80c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec868806819a987b7e96dbe674677a084fd3c8bbb3bdae6dd584d62df43b78f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac368c1c0799b3fe1830e01a310247ed45949ab217cf83d98b487a12e482157e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d6b21d85be97a07f83c996deca344aba3e2bd0b249d1ce02f15db105649bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15885cccea72f90cf3317dca9a65041611aa4b8de069779e15d43ad39112f1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e66a4c2e4be4b07e8aeb532d126abdb54dd09601f18951bc49d72e862002564c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1618a6491a924e7cd28340bea333585a7c7a634ca063183f21574ed23df002cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w74hl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:55Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:55 crc kubenswrapper[4919]: I0109 13:30:55.050576 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:55 crc kubenswrapper[4919]: I0109 13:30:55.050628 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:55 crc kubenswrapper[4919]: I0109 13:30:55.050646 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:55 crc kubenswrapper[4919]: I0109 13:30:55.050669 4919 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 09 13:30:55 crc kubenswrapper[4919]: I0109 13:30:55.050687 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:55Z","lastTransitionTime":"2026-01-09T13:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:55 crc kubenswrapper[4919]: I0109 13:30:55.051044 4919 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 09 13:30:55 crc kubenswrapper[4919]: I0109 13:30:55.056579 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bzs4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd1e83cb-5a48-4331-b403-d7a07e8aa67f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://389ab6b02eef188a2713e27d93c70a64a1ab4ebcfe58634cf5266197b0375bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kd6n5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bzs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:55Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:55 crc kubenswrapper[4919]: I0109 13:30:55.155630 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 09 13:30:55 crc kubenswrapper[4919]: I0109 13:30:55.155679 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:55 crc kubenswrapper[4919]: I0109 13:30:55.155694 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:55 crc kubenswrapper[4919]: I0109 13:30:55.155720 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:55 crc kubenswrapper[4919]: I0109 13:30:55.155766 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:55Z","lastTransitionTime":"2026-01-09T13:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:55 crc kubenswrapper[4919]: I0109 13:30:55.259705 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:55 crc kubenswrapper[4919]: I0109 13:30:55.259771 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:55 crc kubenswrapper[4919]: I0109 13:30:55.259788 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:55 crc kubenswrapper[4919]: I0109 13:30:55.259815 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:55 crc kubenswrapper[4919]: I0109 13:30:55.259834 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:55Z","lastTransitionTime":"2026-01-09T13:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:55 crc kubenswrapper[4919]: I0109 13:30:55.363339 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:55 crc kubenswrapper[4919]: I0109 13:30:55.363410 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:55 crc kubenswrapper[4919]: I0109 13:30:55.363430 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:55 crc kubenswrapper[4919]: I0109 13:30:55.363459 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:55 crc kubenswrapper[4919]: I0109 13:30:55.363480 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:55Z","lastTransitionTime":"2026-01-09T13:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:30:55 crc kubenswrapper[4919]: I0109 13:30:55.466472 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:55 crc kubenswrapper[4919]: I0109 13:30:55.466532 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:55 crc kubenswrapper[4919]: I0109 13:30:55.466550 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:55 crc kubenswrapper[4919]: I0109 13:30:55.466575 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:55 crc kubenswrapper[4919]: I0109 13:30:55.466588 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:55Z","lastTransitionTime":"2026-01-09T13:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:55 crc kubenswrapper[4919]: I0109 13:30:55.569916 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:55 crc kubenswrapper[4919]: I0109 13:30:55.570431 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:55 crc kubenswrapper[4919]: I0109 13:30:55.570587 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:55 crc kubenswrapper[4919]: I0109 13:30:55.570731 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:55 crc kubenswrapper[4919]: I0109 13:30:55.570851 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:55Z","lastTransitionTime":"2026-01-09T13:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:55 crc kubenswrapper[4919]: I0109 13:30:55.674607 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:55 crc kubenswrapper[4919]: I0109 13:30:55.674664 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:55 crc kubenswrapper[4919]: I0109 13:30:55.674682 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:55 crc kubenswrapper[4919]: I0109 13:30:55.674708 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:55 crc kubenswrapper[4919]: I0109 13:30:55.674732 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:55Z","lastTransitionTime":"2026-01-09T13:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
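The Ready condition being recorded over and over above has a single root-cause string: no CNI configuration file in /etc/kubernetes/cni/net.d/. A quick way to confirm that state from the node is to list the directory for the config types CNI loads; the sketch below does that in Go, assuming the .conf/.conflist/.json extension set that libcni accepts (an assumption about the loader, not something stated in this log).

```go
// cnicheck.go - minimal sketch: report whether the CNI conf dir named in the
// kubelet error actually contains any loadable network configuration.
// The extension list mirrors what CNI's libcni accepts; treat it as an
// assumption, not a statement about this cluster's exact plugin setup.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // path quoted in the kubelet log above
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Printf("cannot read %s: %v\n", dir, err)
		return
	}
	found := 0
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("found CNI config:", filepath.Join(dir, e.Name()))
			found++
		}
	}
	if found == 0 {
		fmt.Println("no CNI configuration file found - matches the NetworkPluginNotReady condition")
	}
}
```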
Jan 09 13:30:55 crc kubenswrapper[4919]: I0109 13:30:55.751391 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 13:30:55 crc kubenswrapper[4919]: I0109 13:30:55.751495 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 13:30:55 crc kubenswrapper[4919]: E0109 13:30:55.751656 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 13:30:55 crc kubenswrapper[4919]: E0109 13:30:55.751828 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 13:30:55 crc kubenswrapper[4919]: I0109 13:30:55.777677 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:55 crc kubenswrapper[4919]: I0109 13:30:55.777724 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:55 crc kubenswrapper[4919]: I0109 13:30:55.777742 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:55 crc kubenswrapper[4919]: I0109 13:30:55.777767 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:55 crc kubenswrapper[4919]: I0109 13:30:55.777785 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:55Z","lastTransitionTime":"2026-01-09T13:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
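The two pod_workers.go entries above show the other half of the loop: while the runtime reports NetworkReady=false, the kubelet refuses to sync any pod that needs the cluster network, so no sandbox ever gets created. The sketch below is a hedged paraphrase of that gate, not kubelet source; the shouldSkipSync helper and the hostNetwork exemption for ovnkube-node are illustrative assumptions.

```go
// podgate.go - hedged paraphrase of the sync gate visible in the log: pods that
// need the cluster network are skipped while the runtime reports NetworkReady=false,
// while host-network pods (hypothetically modeled here) are exempt.
package main

import (
	"errors"
	"fmt"
)

type pod struct {
	name        string
	hostNetwork bool
}

var errNetworkNotReady = errors.New("network is not ready: container runtime network not ready: NetworkReady=false")

// shouldSkipSync is a hypothetical stand-in for the kubelet's check, not its code.
func shouldSkipSync(p pod, networkReady bool) error {
	if !networkReady && !p.hostNetwork {
		return errNetworkNotReady
	}
	return nil
}

func main() {
	pods := []pod{
		{"openshift-network-console/networking-console-plugin-85b44fc459-gdk6g", false},
		{"openshift-ovn-kubernetes/ovnkube-node-w74hl", true}, // assumed host-network
	}
	for _, p := range pods {
		if err := shouldSkipSync(p, false); err != nil {
			fmt.Printf("Error syncing pod, skipping: %v pod=%q\n", err, p.name)
			continue
		}
		fmt.Printf("pod %q allowed to sync\n", p.name)
	}
}
```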
Jan 09 13:30:55 crc kubenswrapper[4919]: I0109 13:30:55.881327 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:55 crc kubenswrapper[4919]: I0109 13:30:55.881678 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:55 crc kubenswrapper[4919]: I0109 13:30:55.881814 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:55 crc kubenswrapper[4919]: I0109 13:30:55.882000 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:55 crc kubenswrapper[4919]: I0109 13:30:55.882116 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:55Z","lastTransitionTime":"2026-01-09T13:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:55 crc kubenswrapper[4919]: I0109 13:30:55.986958 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:55 crc kubenswrapper[4919]: I0109 13:30:55.987011 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:55 crc kubenswrapper[4919]: I0109 13:30:55.987036 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:55 crc kubenswrapper[4919]: I0109 13:30:55.987072 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:55 crc kubenswrapper[4919]: I0109 13:30:55.987090 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:55Z","lastTransitionTime":"2026-01-09T13:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
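Each setters.go:603 entry embeds the same Ready condition object. For anyone scripting against a capture like this, the condition decodes with plain JSON tooling; the struct below is a small sketch whose field names are copied from the entries above, and the payload is one of them verbatim.

```go
// condition.go - decode the Ready condition object that setters.go:603 logs.
// The struct mirrors the JSON fields exactly as they appear in the entries above.
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Condition payload copied verbatim from one of the log entries above.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:55Z","lastTransitionTime":"2026-01-09T13:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s=%s reason=%s (since %s)\n", c.Type, c.Status, c.Reason, c.LastTransitionTime)
}
```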
Jan 09 13:30:56 crc kubenswrapper[4919]: I0109 13:30:56.055712 4919 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 09 13:30:56 crc kubenswrapper[4919]: I0109 13:30:56.090378 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:56 crc kubenswrapper[4919]: I0109 13:30:56.090429 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:56 crc kubenswrapper[4919]: I0109 13:30:56.090449 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:56 crc kubenswrapper[4919]: I0109 13:30:56.090476 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:56 crc kubenswrapper[4919]: I0109 13:30:56.090497 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:56Z","lastTransitionTime":"2026-01-09T13:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:56 crc kubenswrapper[4919]: I0109 13:30:56.193270 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:56 crc kubenswrapper[4919]: I0109 13:30:56.193623 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:56 crc kubenswrapper[4919]: I0109 13:30:56.193743 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:56 crc kubenswrapper[4919]: I0109 13:30:56.193865 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:56 crc kubenswrapper[4919]: I0109 13:30:56.193977 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:56Z","lastTransitionTime":"2026-01-09T13:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:30:56 crc kubenswrapper[4919]: I0109 13:30:56.298236 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:56 crc kubenswrapper[4919]: I0109 13:30:56.298294 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:56 crc kubenswrapper[4919]: I0109 13:30:56.298308 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:56 crc kubenswrapper[4919]: I0109 13:30:56.298329 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:56 crc kubenswrapper[4919]: I0109 13:30:56.298348 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:56Z","lastTransitionTime":"2026-01-09T13:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:56 crc kubenswrapper[4919]: I0109 13:30:56.401867 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:56 crc kubenswrapper[4919]: I0109 13:30:56.401938 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:56 crc kubenswrapper[4919]: I0109 13:30:56.401956 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:56 crc kubenswrapper[4919]: I0109 13:30:56.401987 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:56 crc kubenswrapper[4919]: I0109 13:30:56.402006 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:56Z","lastTransitionTime":"2026-01-09T13:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:56 crc kubenswrapper[4919]: I0109 13:30:56.505111 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:56 crc kubenswrapper[4919]: I0109 13:30:56.505193 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:56 crc kubenswrapper[4919]: I0109 13:30:56.505242 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:56 crc kubenswrapper[4919]: I0109 13:30:56.505273 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:56 crc kubenswrapper[4919]: I0109 13:30:56.505294 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:56Z","lastTransitionTime":"2026-01-09T13:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:30:56 crc kubenswrapper[4919]: I0109 13:30:56.609132 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:56 crc kubenswrapper[4919]: I0109 13:30:56.609181 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:56 crc kubenswrapper[4919]: I0109 13:30:56.609193 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:56 crc kubenswrapper[4919]: I0109 13:30:56.609226 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:56 crc kubenswrapper[4919]: I0109 13:30:56.609243 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:56Z","lastTransitionTime":"2026-01-09T13:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:56 crc kubenswrapper[4919]: I0109 13:30:56.712397 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:56 crc kubenswrapper[4919]: I0109 13:30:56.712445 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:56 crc kubenswrapper[4919]: I0109 13:30:56.712467 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:56 crc kubenswrapper[4919]: I0109 13:30:56.712487 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:56 crc kubenswrapper[4919]: I0109 13:30:56.712498 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:56Z","lastTransitionTime":"2026-01-09T13:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:56 crc kubenswrapper[4919]: I0109 13:30:56.751367 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 13:30:56 crc kubenswrapper[4919]: E0109 13:30:56.751607 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 13:30:56 crc kubenswrapper[4919]: I0109 13:30:56.815816 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:56 crc kubenswrapper[4919]: I0109 13:30:56.815858 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:56 crc kubenswrapper[4919]: I0109 13:30:56.815871 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:56 crc kubenswrapper[4919]: I0109 13:30:56.815890 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:56 crc kubenswrapper[4919]: I0109 13:30:56.815905 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:56Z","lastTransitionTime":"2026-01-09T13:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:56 crc kubenswrapper[4919]: I0109 13:30:56.919256 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:56 crc kubenswrapper[4919]: I0109 13:30:56.919337 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:56 crc kubenswrapper[4919]: I0109 13:30:56.919357 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:56 crc kubenswrapper[4919]: I0109 13:30:56.919385 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:56 crc kubenswrapper[4919]: I0109 13:30:56.919408 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:56Z","lastTransitionTime":"2026-01-09T13:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.023132 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.023305 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.023328 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.023356 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.023374 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:57Z","lastTransitionTime":"2026-01-09T13:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.038015 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.038060 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.038073 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.038097 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.038115 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:57Z","lastTransitionTime":"2026-01-09T13:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
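From here the log alternates between the NotReady heartbeats and status patches that all die on the same x509 failure: the serving certificate behind https://127.0.0.1:9743 expired at 2025-08-24T17:21:41Z, while the node clock reads 2026-01-09. A minimal probe that reproduces the verdict without a full webhook call is sketched below; it assumes the network-node-identity listener is reachable from the node when run. The same validity window can also be read with openssl s_client, but the point here is just the NotAfter comparison the TLS stack performs.

```go
// certprobe.go - minimal sketch: read the serving certificate from the webhook
// endpoint named in the errors and compare its validity window to the clock,
// reproducing the "certificate has expired" verdict without a full request.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// Endpoint taken from the failed Post in the log entries.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // we only want to inspect the cert, not trust it
	})
	if err != nil {
		log.Fatalf("dial webhook: %v", err)
	}
	defer conn.Close()

	now := time.Now()
	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%v notBefore=%s notAfter=%s\n",
			cert.Subject, cert.NotBefore.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
		if now.After(cert.NotAfter) {
			fmt.Println("-> expired: current time is after notAfter, matching the x509 error above")
		}
	}
}
```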
Jan 09 13:30:57 crc kubenswrapper[4919]: E0109 13:30:57.061824 4919 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:30:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:30:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:30:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:30:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a043b745-924a-464c-80aa-f4df877f55bf\\\",\\\"systemUUID\\\":\\\"4cea77be-9aeb-4181-a0b4-b60e5a362fd9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:57Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.063052 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w74hl_4a11a9b6-2419-4f04-b35e-ba296d70b705/ovnkube-controller/0.log" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.066697 4919 generic.go:334] "Generic (PLEG): 
container finished" podID="4a11a9b6-2419-4f04-b35e-ba296d70b705" containerID="e66a4c2e4be4b07e8aeb532d126abdb54dd09601f18951bc49d72e862002564c" exitCode=1 Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.066753 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" event={"ID":"4a11a9b6-2419-4f04-b35e-ba296d70b705","Type":"ContainerDied","Data":"e66a4c2e4be4b07e8aeb532d126abdb54dd09601f18951bc49d72e862002564c"} Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.067669 4919 scope.go:117] "RemoveContainer" containerID="e66a4c2e4be4b07e8aeb532d126abdb54dd09601f18951bc49d72e862002564c" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.068772 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.068840 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.068860 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.068886 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.068906 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:57Z","lastTransitionTime":"2026-01-09T13:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.091832 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:57Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:57 crc kubenswrapper[4919]: E0109 13:30:57.097064 4919 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:30:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:30:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:30:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:30:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a043b745-924a-464c-80aa-f4df877f55bf\\\",\\\"systemUUID\\\":\\\"4cea77be-9aeb-4181-a0b4-b60e5a362fd9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:57Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.109025 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.109082 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.109101 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.109132 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.109154 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:57Z","lastTransitionTime":"2026-01-09T13:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.119025 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9z7cc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1115c0ba-16d5-4e81-a4b4-07ba7f360825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6522bc54695ade8637b22dbf9074c82430ce4c8deb8d5d9631933a1d49d5b4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkvnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9z7cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:57Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:57 crc 
kubenswrapper[4919]: E0109 13:30:57.132398 4919 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:30:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:30:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:30:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:30:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider 
started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d
34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a043b745-924a-464c-80aa-f4df877f55bf\\\",\\\"systemUUID\\\":\\\"4cea77be-9aeb-4181-a0b4-b60e5a362fd9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:57Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.137901 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.138075 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:57 
crc kubenswrapper[4919]: I0109 13:30:57.138242 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.138375 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.138496 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:57Z","lastTransitionTime":"2026-01-09T13:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.146035 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-97zdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21befbc8-9e98-4557-89af-a116cc8c484c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e336d165af12149ba05542f8cebe8a16c6fd66d3317d27c880b8969e7666691d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c58
2ba3ec3bd3b5087620e318cf8482bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"image\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://458aeaca27fca436a56c5bb1f4180fc11cc5d27c53d33cd5b00f17e0e6f99322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458aeaca27fca436a56c5bb1f4180fc11cc5d27c53d33cd5b00f17e0e6f99322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3ea2572c59e697886d3666518ed78dcbe1bde1dd6fc6255e3846d1b34f26cdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ea2572c59e697886d3666518ed78dcbe1bde1dd6fc6255e3846d1b34f26cdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-97zdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:57Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:57 crc kubenswrapper[4919]: E0109 13:30:57.160629 4919 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:30:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:30:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:30:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:30:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a043b745-924a-464c-80aa-f4df877f55bf\\\",\\\"systemUUID\\\":\\\"4cea77be-9aeb-4181-a0b4-b60e5a362fd9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:57Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.167120 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.167193 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.167242 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.167278 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.167299 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:57Z","lastTransitionTime":"2026-01-09T13:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.169310 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d6c98ae6aa3041e47bac4782d73902dc39137b2fc8550efce479bfb9cdf37b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:57Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.190415 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:57Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:57 crc kubenswrapper[4919]: E0109 13:30:57.193315 4919 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:30:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:30:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:57Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:30:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:30:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a043b745-924a-464c-80aa-f4df877f55bf\\\",\\\"systemUUID\\\":\\\"4
cea77be-9aeb-4181-a0b4-b60e5a362fd9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:57Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:57 crc kubenswrapper[4919]: E0109 13:30:57.193673 4919 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.196352 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.196410 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.196430 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.196456 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.196478 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:57Z","lastTransitionTime":"2026-01-09T13:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.209916 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c494f87fb3dc8455ed5de9b4073f1b69d9d96c3b2a4d260cb5d57b9df1e825e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:57Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.224399 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6730e2d4c2375d62687a3919410fd7440dc63ec931c008941ab1882e625c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b529d70339efbd46e30e992ceb5afe2db97bdde02f50798c9eb61dfa23d7f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9m5lv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:57Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.245839 4919 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-kgw8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3890ccf2af2b3f8c6648e7b36c716d7b9ef45d6821c5c7162c455641399184a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srz24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-kgw8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:57Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.273886 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a055b487-5d63-4265-ac12-735612354e73\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903fbe8270e684937c51c54aeaaaaf7cfed5e77f6d76d5874210ffcf608a9afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3759051eb43a6806b9525498f376abc060e88ba0d7bc9274e84eaabfdaefd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8db1dada305e10f10cb4fe92cfaa3fc88e2e2ec3338fa31efc240af0888f60c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-api
server-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b3f68aa173942f9d95a3832234e0bbd5ccacf63cd73b8d110708ebd0978a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d7857918228a0905bb39cd161f6ba21561cf49ce7980a1835e5b3cc68cef3c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T13:30:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0109 13:30:33.398307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 13:30:33.400005 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4019686595/tls.crt::/tmp/serving-cert-4019686595/tls.key\\\\\\\"\\\\nI0109 13:30:43.320927 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 13:30:43.328283 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 13:30:43.328361 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 13:30:43.328421 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 13:30:43.328435 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 13:30:43.338653 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 13:30:43.338748 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 13:30:43.338794 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 13:30:43.338801 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 13:30:43.338808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 13:30:43.339373 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 13:30:43.343716 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2d0c02e83be85aa5925b8f607809b6bbf3db3188f4e165ca42eb04fe694da6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:57Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.300663 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:57Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.300925 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.300982 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.300994 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.301016 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.301033 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:57Z","lastTransitionTime":"2026-01-09T13:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.320699 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f66189f627ddedba6b2bcb45f145ef53485a9b0b7401ca5fa09363145fce81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c247b3e6a38c79833e097898da1fe92f3dabc7dcd7b71a618506f054f4fd9c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:57Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.348383 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a11a9b6-2419-4f04-b35e-ba296d70b705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f97e00dbff0bc7856400366616b2dfea1022f2dc151e8f1d7cbc5d377639b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eecea89afe25c537de66df075217db267c79f45aecda1f91183fad854aa80c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec868806819a987b7e96dbe674677a084fd3c8bbb3bdae6dd584d62df43b78f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac368c1c0799b3fe1830e01a310247ed45949ab217cf83d98b487a12e482157e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d6b21d85be97a07f83c996deca344aba3e2bd0b249d1ce02f15db105649bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15885cccea72f90cf3317dca9a65041611aa4b8de069779e15d43ad39112f1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e66a4c2e4be4b07e8aeb532d126abdb54dd09601f18951bc49d72e862002564c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e66a4c2e4be4b07e8aeb532d126abdb54dd09601f18951bc49d72e862002564c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T13:30:56Z\\\",\\\"message\\\":\\\"s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0109 13:30:55.459271 6269 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 13:30:55.459309 6269 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 13:30:55.459337 6269 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0109 13:30:55.459391 6269 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0109 13:30:55.459439 6269 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0109 13:30:55.459481 6269 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0109 13:30:55.459505 6269 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0109 13:30:55.459917 6269 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 13:30:55.460045 6269 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 13:30:55.459456 6269 factory.go:656] Stopping watch factory\\\\nI0109 13:30:55.460370 6269 reflector.go:311] Stopping reflector *v1.Pod (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1618a6491a924e7cd28340bea333585a7c7a634ca063183f21574ed23df002cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w74hl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:57Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.364354 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bzs4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd1e83cb-5a48-4331-b403-d7a07e8aa67f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://389ab6b02eef188a2713e27d93c70a64a1ab4ebcfe58634cf5266197b0375bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kd6n5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\
"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bzs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:57Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.384060 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa212ada-bfb5-4aeb-94bb-acf1f0afe319\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a58a444435270a569f4e25649cbfb81bb34700db968dcba7d85b9fdea9006bc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://210c615c5bcdaca954329d575b66f271921f54cd58ff6d75a90d5fc64bd11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dbb1933e5d51eec9dc5963fca537d46986cb88929f55683de676da7f0636133\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha2
56:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6694df4fa2976e9e741fe48d2f16c0b63b29ba80500e4238fc0ff6d6dad53bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:57Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.404645 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.404708 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.404723 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.404746 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.404761 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:57Z","lastTransitionTime":"2026-01-09T13:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.510438 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.510497 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.510531 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.510559 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.510579 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:57Z","lastTransitionTime":"2026-01-09T13:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.614704 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.615268 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.615296 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.615337 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.615360 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:57Z","lastTransitionTime":"2026-01-09T13:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.718675 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.719265 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.719522 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.719681 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.719868 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:57Z","lastTransitionTime":"2026-01-09T13:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.751567 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.751583 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 13:30:57 crc kubenswrapper[4919]: E0109 13:30:57.751781 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 13:30:57 crc kubenswrapper[4919]: E0109 13:30:57.752035 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.823887 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.823934 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.823947 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.823973 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.823988 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:57Z","lastTransitionTime":"2026-01-09T13:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.927190 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.927504 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.927583 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.927666 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:57 crc kubenswrapper[4919]: I0109 13:30:57.927749 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:57Z","lastTransitionTime":"2026-01-09T13:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.031329 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.031378 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.031389 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.031406 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.031417 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:58Z","lastTransitionTime":"2026-01-09T13:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.075534 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w74hl_4a11a9b6-2419-4f04-b35e-ba296d70b705/ovnkube-controller/0.log" Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.087923 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" event={"ID":"4a11a9b6-2419-4f04-b35e-ba296d70b705","Type":"ContainerStarted","Data":"6084dd2d1f52091583766f6a3a12a9852a93e945dc92c0c76c5132192e182b19"} Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.088367 4919 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.112551 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f66189f627ddedba6b2bcb45f145ef53485a9b0b7401ca5fa09363145fce81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c247b3e6a38c79833e097898da1fe92f3dabc7dcd7b71a618506f054f4fd9c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:58Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.135278 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.135579 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.135669 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.135758 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.135839 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:58Z","lastTransitionTime":"2026-01-09T13:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.143744 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a11a9b6-2419-4f04-b35e-ba296d70b705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f97e00dbff0bc7856400366616b2dfea1022f2dc151e8f1d7cbc5d377639b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eecea89afe25c537de66df075217db267c79f45aecda1f91183fad854aa80c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec868806819a987b7e96dbe674677a084fd3c8bbb3bdae6dd584d62df43b78f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac368c1c0799b3fe1830e01a310247ed45949ab217cf83d98b487a12e482157e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d6b21d85be97a07f83c996deca344aba3e2bd0b249d1ce02f15db105649bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15885cccea72f90cf3317dca9a65041611aa4b8de069779e15d43ad39112f1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6084dd2d1f52091583766f6a3a12a9852a93e945
dc92c0c76c5132192e182b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e66a4c2e4be4b07e8aeb532d126abdb54dd09601f18951bc49d72e862002564c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T13:30:56Z\\\",\\\"message\\\":\\\"s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0109 13:30:55.459271 6269 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 13:30:55.459309 6269 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 13:30:55.459337 6269 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0109 13:30:55.459391 6269 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0109 13:30:55.459439 6269 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0109 13:30:55.459481 6269 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0109 13:30:55.459505 6269 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0109 13:30:55.459917 6269 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 13:30:55.460045 6269 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 13:30:55.459456 6269 factory.go:656] Stopping watch factory\\\\nI0109 13:30:55.460370 6269 reflector.go:311] Stopping reflector *v1.Pod (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1618a6491a924e7cd28340bea333585a7c7a634ca063183f21574ed23df002cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initConta
inerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w74hl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:58Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.169198 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bzs4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd1e83cb-5a48-4331-b403-d7a07e8aa67f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://389ab6b02eef188a2713e27d93c70a64a1ab4ebcfe58634cf5266197b0375bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kd6n5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bzs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:58Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.189780 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa212ada-bfb5-4aeb-94bb-acf1f0afe319\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a58a444435270a569f4e25649cbfb81bb34700db968dcba7d85b9fdea9006bc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://210c615c5bcdaca954329d575b66f271921f54cd58ff6d75a90d5fc64bd11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dbb1933e5d51eec9dc5963fca537d46986cb88929f55683de676da7f0636133\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6694df4fa2976e9e741fe48d2f16c0b63b29ba80500e4238fc0ff6d6dad53bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:58Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.217024 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:58Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.234775 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:58Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.238965 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.239015 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.239028 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.239058 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.239072 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:58Z","lastTransitionTime":"2026-01-09T13:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.259883 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-97zdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21befbc8-9e98-4557-89af-a116cc8c484c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e336d165af12149ba05542f8cebe8a16c6fd66d3317d27c880b8969e7666691d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://458aeaca27fca436a56c5bb1f4180fc11cc5d27c53d33cd5b00f17e0e6f99322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458aeaca27fca436a56c5bb1f4180fc11cc5d27c53d33cd5b00f17e0e6f99322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3ea2572c59e697886d3666518ed78dcbe1bde1dd6fc6255e3846d1b34f26cdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ea2572c59e697886d3666518ed78dcbe1bde1dd6fc6255e3846d1b34f26cdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-97zdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:58Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.274095 4919 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d6c98ae6aa3041e47bac4782d73902dc39137b2fc8550efce479bfb9cdf37b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:58Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.293072 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:58Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.305053 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9z7cc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1115c0ba-16d5-4e81-a4b4-07ba7f360825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6522bc54695ade8637b22dbf9074c82430ce4c8deb8d5d9631933a1d49d5b4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkvnj\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9z7cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:58Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.326733 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c494f87fb3dc8455ed5de9b4073f1b69d9d96c3b2a4d260cb5d57b9df1e825e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:58Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.342063 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.342118 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.342133 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.342159 4919 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.342176 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:58Z","lastTransitionTime":"2026-01-09T13:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.350027 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6730e2d4c2375d62687a3919410fd7440dc63ec931c008941ab1882e625c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b529d70339efbd46e30e992ceb5afe2db97bdde02f50798c9eb61dfa23d7f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9m5lv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:58Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.367990 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgw8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3890ccf2af2b3f8c6648e7b36c716d7b9ef45d6821c5c7162c455641399184a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf
-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srz24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgw8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:58Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.386675 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a055b487-5d63-4265-ac12-735612354e73\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903fbe8270e684937c51c54aeaaaaf7cfed5e77f6d76d5874210ffcf608a9afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3759051eb43a6806b9525498f376abc060e88ba0d7bc9274e84eaabfdaefd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8db1dada305e10f10cb4fe92cfaa3fc88e2e2ec3338fa31efc240af0888f60c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b3f68aa173942f9d95a3832234e0bbd5ccacf63cd73b8d110708ebd0978a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d7857918228a0905bb39cd161f6ba21561cf49ce7980a1835e5b3cc68cef3c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T13:30:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0109 13:30:33.398307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 13:30:33.400005 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4019686595/tls.crt::/tmp/serving-cert-4019686595/tls.key\\\\\\\"\\\\nI0109 13:30:43.320927 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 13:30:43.328283 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 13:30:43.328361 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 13:30:43.328421 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 13:30:43.328435 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 13:30:43.338653 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 13:30:43.338748 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 13:30:43.338794 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 13:30:43.338801 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 13:30:43.338808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 13:30:43.339373 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 13:30:43.343716 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2d0c02e83be85aa5925b8f607809b6bbf3db3188f4e165ca42eb04fe694da6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:58Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.445190 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.445279 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.445298 4919 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.445325 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.445347 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:58Z","lastTransitionTime":"2026-01-09T13:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.549385 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.549436 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.549452 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.549478 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.549495 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:58Z","lastTransitionTime":"2026-01-09T13:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.652810 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.652954 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.652968 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.652990 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.653003 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:58Z","lastTransitionTime":"2026-01-09T13:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.751407 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 13:30:58 crc kubenswrapper[4919]: E0109 13:30:58.752274 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.756026 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.756083 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.756097 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.756120 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.756133 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:58Z","lastTransitionTime":"2026-01-09T13:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.859423 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.859496 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.859516 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.859548 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.859573 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:58Z","lastTransitionTime":"2026-01-09T13:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.963655 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.963728 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.963751 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.963783 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:58 crc kubenswrapper[4919]: I0109 13:30:58.963806 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:58Z","lastTransitionTime":"2026-01-09T13:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.033790 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9s49l"] Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.034797 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9s49l" Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.038950 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.041159 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.060736 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa212ada-bfb5-4aeb-94bb-acf1f0afe319\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a58a444435270a569f4e25649cbfb81bb34700db968dcba7d85b9fdea9006bc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://210c615c5bcdaca954329d575b66f271921f54cd58ff6d75a90d5fc64bd11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dbb1933e5d51eec9dc5963fca537d46986cb88929f55683de676da7f0636133\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6694df4fa2976e9e741fe48d2f16c0b63b29ba80500e4238fc0ff6d6dad53bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:59Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.067184 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.067270 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.067287 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.067314 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.067337 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:59Z","lastTransitionTime":"2026-01-09T13:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.073387 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7a361336-2125-49a9-8332-eb66286dcdb2-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-9s49l\" (UID: \"7a361336-2125-49a9-8332-eb66286dcdb2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9s49l" Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.073622 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7a361336-2125-49a9-8332-eb66286dcdb2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-9s49l\" (UID: \"7a361336-2125-49a9-8332-eb66286dcdb2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9s49l" Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.073820 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slm6g\" (UniqueName: \"kubernetes.io/projected/7a361336-2125-49a9-8332-eb66286dcdb2-kube-api-access-slm6g\") pod \"ovnkube-control-plane-749d76644c-9s49l\" (UID: \"7a361336-2125-49a9-8332-eb66286dcdb2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9s49l" Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.074119 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7a361336-2125-49a9-8332-eb66286dcdb2-env-overrides\") pod \"ovnkube-control-plane-749d76644c-9s49l\" (UID: \"7a361336-2125-49a9-8332-eb66286dcdb2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9s49l" Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.084884 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:59Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.113389 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f66189f627ddedba6b2bcb45f145ef53485a9b0b7401ca5fa09363145fce81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c247b3e6a38c79833e097898da1fe92f3dabc7dcd7b71a618506f054f4fd9c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:59Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.140696 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a11a9b6-2419-4f04-b35e-ba296d70b705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f97e00dbff0bc7856400366616b2dfea1022f2dc151e8f1d7cbc5d377639b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eecea89afe25c537de66df075217db267c79f45aecda1f91183fad854aa80c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec868806819a987b7e96dbe674677a084fd3c8bbb3bdae6dd584d62df43b78f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac368c1c0799b3fe1830e01a310247ed45949ab217cf83d98b487a12e482157e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d6b21d85be97a07f83c996deca344aba3e2bd0b249d1ce02f15db105649bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15885cccea72f90cf3317dca9a65041611aa4b8de069779e15d43ad39112f1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6084dd2d1f52091583766f6a3a12a9852a93e945
dc92c0c76c5132192e182b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e66a4c2e4be4b07e8aeb532d126abdb54dd09601f18951bc49d72e862002564c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T13:30:56Z\\\",\\\"message\\\":\\\"s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0109 13:30:55.459271 6269 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 13:30:55.459309 6269 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 13:30:55.459337 6269 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0109 13:30:55.459391 6269 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0109 13:30:55.459439 6269 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0109 13:30:55.459481 6269 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0109 13:30:55.459505 6269 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0109 13:30:55.459917 6269 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 13:30:55.460045 6269 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 13:30:55.459456 6269 factory.go:656] Stopping watch factory\\\\nI0109 13:30:55.460370 6269 reflector.go:311] Stopping reflector *v1.Pod (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1618a6491a924e7cd28340bea333585a7c7a634ca063183f21574ed23df002cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initConta
inerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w74hl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:59Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.161428 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bzs4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd1e83cb-5a48-4331-b403-d7a07e8aa67f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://389ab6b02eef188a2713e27d93c70a64a1ab4ebcfe58634cf5266197b0375bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kd6n5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bzs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:59Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.170521 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.170594 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.170614 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.170646 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.170667 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:59Z","lastTransitionTime":"2026-01-09T13:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.175199 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slm6g\" (UniqueName: \"kubernetes.io/projected/7a361336-2125-49a9-8332-eb66286dcdb2-kube-api-access-slm6g\") pod \"ovnkube-control-plane-749d76644c-9s49l\" (UID: \"7a361336-2125-49a9-8332-eb66286dcdb2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9s49l" Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.175325 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7a361336-2125-49a9-8332-eb66286dcdb2-env-overrides\") pod \"ovnkube-control-plane-749d76644c-9s49l\" (UID: \"7a361336-2125-49a9-8332-eb66286dcdb2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9s49l" Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.175377 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7a361336-2125-49a9-8332-eb66286dcdb2-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-9s49l\" (UID: \"7a361336-2125-49a9-8332-eb66286dcdb2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9s49l" Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.175418 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7a361336-2125-49a9-8332-eb66286dcdb2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-9s49l\" (UID: \"7a361336-2125-49a9-8332-eb66286dcdb2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9s49l" Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.176567 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7a361336-2125-49a9-8332-eb66286dcdb2-env-overrides\") pod \"ovnkube-control-plane-749d76644c-9s49l\" (UID: \"7a361336-2125-49a9-8332-eb66286dcdb2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9s49l" Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.176861 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7a361336-2125-49a9-8332-eb66286dcdb2-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-9s49l\" (UID: \"7a361336-2125-49a9-8332-eb66286dcdb2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9s49l" Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.179893 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9s49l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a361336-2125-49a9-8332-eb66286dcdb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-slm6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-slm6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9s49l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:59Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.184537 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/7a361336-2125-49a9-8332-eb66286dcdb2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-9s49l\" (UID: \"7a361336-2125-49a9-8332-eb66286dcdb2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9s49l" Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.204733 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:59Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.207690 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slm6g\" (UniqueName: \"kubernetes.io/projected/7a361336-2125-49a9-8332-eb66286dcdb2-kube-api-access-slm6g\") pod \"ovnkube-control-plane-749d76644c-9s49l\" (UID: \"7a361336-2125-49a9-8332-eb66286dcdb2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9s49l" Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.229897 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d6c98ae6aa3041e47bac4782d73902dc39137b2fc8550efce479bfb9cdf37b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:59Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.251782 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:59Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.271580 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9z7cc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1115c0ba-16d5-4e81-a4b4-07ba7f360825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6522bc54695ade8637b22dbf9074c82430ce4c8deb8d5d9631933a1d49d5b4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkvnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9z7cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:59Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.274274 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.274345 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.274371 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.274398 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.274420 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:59Z","lastTransitionTime":"2026-01-09T13:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.296865 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-97zdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21befbc8-9e98-4557-89af-a116cc8c484c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e336d165af12149ba05542f8cebe8a16c6fd66d3317d27c880b8969e7666691d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d6874
82c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\"
,\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://458aeaca27fca436a56c5bb1f4180fc11cc5d27c53d33cd5b00f17e0e6f99322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458aeaca27fca436a56c5bb1f4180fc11cc5d27c53d33cd5b00f17e0e6f99322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3ea2572c59e697886d3666518ed78dcbe1bde1dd6fc6255e3846d1b34f26cdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\
\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ea2572c59e697886d3666518ed78dcbe1bde1dd6fc6255e3846d1b34f26cdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-97zdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:59Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.321533 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a055b487-5d63-4265-ac12-735612354e73\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903fbe8270e684937c51c54aeaaaaf7cfed5e77f6d76d5874210ffcf608a9afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3759051eb43a6806b9525498f376abc060e88ba0d7bc9274e84eaabfdaefd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4
f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8db1dada305e10f10cb4fe92cfaa3fc88e2e2ec3338fa31efc240af0888f60c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b3f68aa173942f9d95a3832234e0bbd5ccacf63cd73b8d110708ebd0978a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d7857918228a0905bb39cd161f6ba21561cf49ce7980a1835e5b3cc68cef3c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T13:30:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0109 13:30:33.398307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 13:30:33.400005 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4019686595/tls.crt::/tmp/serving-cert-4019686595/tls.key\\\\\\\"\\\\nI0109 13:30:43.320927 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 13:30:43.328283 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 13:30:43.328361 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 13:30:43.328421 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 13:30:43.328435 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 13:30:43.338653 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 13:30:43.338748 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 13:30:43.338794 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 
13:30:43.338801 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 13:30:43.338808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 13:30:43.339373 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 13:30:43.343716 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2d0c02e83be85aa5925b8f607809b6bbf3db3188f4e165ca42eb04fe694da6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:59Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.340638 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c494f87fb3dc8455ed5de9b4073f1b69d9d96c3b2a4d260cb5d57b9df1e825e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:59Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.356631 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9s49l" Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.361792 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6730e2d4c2375d62687a3919410fd7440dc63ec931c008941ab1882e625c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b529d70339efbd46e30e992ceb5afe2db97bdde02f50798c9eb61dfa23d7f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9m5lv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:59Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.378163 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.378351 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.378689 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.378834 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.378920 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:59Z","lastTransitionTime":"2026-01-09T13:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:59 crc kubenswrapper[4919]: W0109 13:30:59.378851 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a361336_2125_49a9_8332_eb66286dcdb2.slice/crio-cba109beefef2cfa920417ef4e04294bd388e3bb60619ca87db54e0dbc29114e WatchSource:0}: Error finding container cba109beefef2cfa920417ef4e04294bd388e3bb60619ca87db54e0dbc29114e: Status 404 returned error can't find the container with id cba109beefef2cfa920417ef4e04294bd388e3bb60619ca87db54e0dbc29114e Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.385869 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgw8v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3890ccf2af2b3f8c6648e7b36c716d7b9ef45d6821c5c7162c455641399184a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srz24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgw8v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:59Z is after 2025-08-24T17:21:41Z" Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.483934 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.484014 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.484037 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.484069 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.484089 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:59Z","lastTransitionTime":"2026-01-09T13:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.588090 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.588166 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.588190 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.588266 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.588293 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:59Z","lastTransitionTime":"2026-01-09T13:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.692740 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.692840 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.692863 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.692926 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.692949 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:59Z","lastTransitionTime":"2026-01-09T13:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.751531 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.751578 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 09 13:30:59 crc kubenswrapper[4919]: E0109 13:30:59.751775 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 09 13:30:59 crc kubenswrapper[4919]: E0109 13:30:59.751939 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.800589 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.800668 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.800690 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.800724 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.800749 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:59Z","lastTransitionTime":"2026-01-09T13:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.905852 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.905925 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.905946 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.905976 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:30:59 crc kubenswrapper[4919]: I0109 13:30:59.905997 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:30:59Z","lastTransitionTime":"2026-01-09T13:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.009875 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.009941 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.009962 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.009995 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.010015 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:00Z","lastTransitionTime":"2026-01-09T13:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.100610 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w74hl_4a11a9b6-2419-4f04-b35e-ba296d70b705/ovnkube-controller/1.log"
Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.101831 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w74hl_4a11a9b6-2419-4f04-b35e-ba296d70b705/ovnkube-controller/0.log"
Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.106290 4919 generic.go:334] "Generic (PLEG): container finished" podID="4a11a9b6-2419-4f04-b35e-ba296d70b705" containerID="6084dd2d1f52091583766f6a3a12a9852a93e945dc92c0c76c5132192e182b19" exitCode=1
Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.106380 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" event={"ID":"4a11a9b6-2419-4f04-b35e-ba296d70b705","Type":"ContainerDied","Data":"6084dd2d1f52091583766f6a3a12a9852a93e945dc92c0c76c5132192e182b19"}
Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.106427 4919 scope.go:117] "RemoveContainer" containerID="e66a4c2e4be4b07e8aeb532d126abdb54dd09601f18951bc49d72e862002564c"
Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.107282 4919 scope.go:117] "RemoveContainer" containerID="6084dd2d1f52091583766f6a3a12a9852a93e945dc92c0c76c5132192e182b19"
Jan 09 13:31:00 crc kubenswrapper[4919]: E0109 13:31:00.107573 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-w74hl_openshift-ovn-kubernetes(4a11a9b6-2419-4f04-b35e-ba296d70b705)\"" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" podUID="4a11a9b6-2419-4f04-b35e-ba296d70b705"
Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.108136 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9s49l" event={"ID":"7a361336-2125-49a9-8332-eb66286dcdb2","Type":"ContainerStarted","Data":"cba109beefef2cfa920417ef4e04294bd388e3bb60619ca87db54e0dbc29114e"}
Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.112792 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.112856 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.112880 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.112916 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.112942 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:00Z","lastTransitionTime":"2026-01-09T13:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.135443 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a055b487-5d63-4265-ac12-735612354e73\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903fbe8270e684937c51c54aeaaaaf7cfed5e77f6d76d5874210ffcf608a9afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3759051eb43a6806b9525498f376abc060e88ba0d7bc9274e84eaabfdaefd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8db1dada305e10f10cb4fe92cfaa3fc88e2e2ec3338fa31efc240af0888f60c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b3f68aa173942f9d95a3832234e0bbd5ccacf63cd73b8d110708ebd0978a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d7857918228a0905bb39cd161f6ba21561cf49ce7980a1835e5b3cc68cef3c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T13:30:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0109 13:30:33.398307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 13:30:33.400005 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4019686595/tls.crt::/tmp/serving-cert-4019686595/tls.key\\\\\\\"\\\\nI0109 13:30:43.320927 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 13:30:43.328283 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 13:30:43.328361 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 13:30:43.328421 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 13:30:43.328435 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 13:30:43.338653 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 13:30:43.338748 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 13:30:43.338794 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 13:30:43.338801 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 13:30:43.338808 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 13:30:43.339373 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 13:30:43.343716 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2d0c02e83be85aa5925b8f607809b6bbf3db3188f4e165ca42eb04fe694da6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:00Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.154293 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c494f87fb3dc8455ed5de9b4073f1b69d9d96c3b2a4d260cb5d57b9df1e825e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:00Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.172740 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6730e2d4c2375d62687a3919410fd7440dc63ec931c008941ab1882e625c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b529d70339efbd46e30e992ceb5afe2db97bdde02f50798c9eb61dfa23d7f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9m5lv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:00Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.184841 4919 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-multus/network-metrics-daemon-xkhdz"] Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.185638 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xkhdz" Jan 09 13:31:00 crc kubenswrapper[4919]: E0109 13:31:00.185742 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xkhdz" podUID="7a2e9878-6b0e-4328-a3ca-9f828fb105c9" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.195408 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgw8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3890ccf2af2b3f8c6648e7b36c716d7b9ef45d6821c5c7162c455641399184a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-
conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srz24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgw8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:00Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.215726 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.215790 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.215808 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.215837 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.215860 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:00Z","lastTransitionTime":"2026-01-09T13:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.223422 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa212ada-bfb5-4aeb-94bb-acf1f0afe319\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a58a444435270a569f4e25649cbfb81bb34700db968dcba7d85b9fdea9006bc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://210c615c5bcdaca954329d575b66f271921f54cd58ff6d75a90d5fc64bd11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dbb1933e5d51eec9dc5963fca537d46986cb88929f55683de676da7f0636133\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6694df4fa2976e9e741fe48d2f16c0b63b29ba80500e4238fc0ff6d6dad53bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:00Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.245238 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:00Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.265806 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f66189f627ddedba6b2bcb45f145ef53485a9b0b7401ca5fa09363145fce81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c247b3e6a38c79833e097898da1fe92f3dabc7dcd7b71a618506f054f4fd9c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:00Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.291425 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7a2e9878-6b0e-4328-a3ca-9f828fb105c9-metrics-certs\") pod \"network-metrics-daemon-xkhdz\" (UID: \"7a2e9878-6b0e-4328-a3ca-9f828fb105c9\") " pod="openshift-multus/network-metrics-daemon-xkhdz" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.292075 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrkwv\" (UniqueName: \"kubernetes.io/projected/7a2e9878-6b0e-4328-a3ca-9f828fb105c9-kube-api-access-qrkwv\") pod \"network-metrics-daemon-xkhdz\" (UID: \"7a2e9878-6b0e-4328-a3ca-9f828fb105c9\") " pod="openshift-multus/network-metrics-daemon-xkhdz" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.294951 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a11a9b6-2419-4f04-b35e-ba296d70b705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f97e00dbff0bc7856400366616b2dfea1022f2dc151e8f1d7cbc5d377639b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eecea89afe25c537de66df075217db267c79f45aecda1f91183fad854aa80c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec868806819a987b7e96dbe674677a084fd3c8bbb3bdae6dd584d62df43b78f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac368c1c0799b3fe1830e01a310247ed45949ab217cf83d98b487a12e482157e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d6b21d85be97a07f83c996deca344aba3e2bd0b249d1ce02f15db105649bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15885cccea72f90cf3317dca9a65041611aa4b8de069779e15d43ad39112f1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6084dd2d1f52091583766f6a3a12a9852a93e945
dc92c0c76c5132192e182b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e66a4c2e4be4b07e8aeb532d126abdb54dd09601f18951bc49d72e862002564c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T13:30:56Z\\\",\\\"message\\\":\\\"s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0109 13:30:55.459271 6269 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 13:30:55.459309 6269 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 13:30:55.459337 6269 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0109 13:30:55.459391 6269 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0109 13:30:55.459439 6269 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0109 13:30:55.459481 6269 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0109 13:30:55.459505 6269 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0109 13:30:55.459917 6269 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 13:30:55.460045 6269 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 13:30:55.459456 6269 factory.go:656] Stopping watch factory\\\\nI0109 13:30:55.460370 6269 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6084dd2d1f52091583766f6a3a12a9852a93e945dc92c0c76c5132192e182b19\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T13:30:59Z\\\",\\\"message\\\":\\\"terIP:10.217.4.161,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.161],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0109 13:30:58.622928 6406 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:58Z is after 2025-08\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1618a6491a924e7cd28340bea333585a7c7a634ca063183f21574ed23df002cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\
\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w74hl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:00Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.314130 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bzs4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd1e83cb-5a48-4331-b403-d7a07e8aa67f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://389ab6b02eef188a2713e27d93c70a64a1ab4ebcfe58634cf5266197b0375bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kd6n5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bzs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:00Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.318185 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.318243 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.318256 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.318277 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.318292 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:00Z","lastTransitionTime":"2026-01-09T13:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.334058 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9s49l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a361336-2125-49a9-8332-eb66286dcdb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-slm6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-slm6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9s49l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:00Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.354928 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:00Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.376041 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:00Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.393013 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7a2e9878-6b0e-4328-a3ca-9f828fb105c9-metrics-certs\") pod \"network-metrics-daemon-xkhdz\" (UID: \"7a2e9878-6b0e-4328-a3ca-9f828fb105c9\") " pod="openshift-multus/network-metrics-daemon-xkhdz" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.393076 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrkwv\" (UniqueName: \"kubernetes.io/projected/7a2e9878-6b0e-4328-a3ca-9f828fb105c9-kube-api-access-qrkwv\") pod \"network-metrics-daemon-xkhdz\" (UID: \"7a2e9878-6b0e-4328-a3ca-9f828fb105c9\") " pod="openshift-multus/network-metrics-daemon-xkhdz" Jan 09 13:31:00 crc kubenswrapper[4919]: E0109 13:31:00.393289 4919 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 09 13:31:00 crc kubenswrapper[4919]: E0109 13:31:00.393467 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7a2e9878-6b0e-4328-a3ca-9f828fb105c9-metrics-certs podName:7a2e9878-6b0e-4328-a3ca-9f828fb105c9 nodeName:}" failed. No retries permitted until 2026-01-09 13:31:00.893420287 +0000 UTC m=+40.441259857 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7a2e9878-6b0e-4328-a3ca-9f828fb105c9-metrics-certs") pod "network-metrics-daemon-xkhdz" (UID: "7a2e9878-6b0e-4328-a3ca-9f828fb105c9") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.396152 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9z7cc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1115c0ba-16d5-4e81-a4b4-07ba7f360825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6522bc54695ade8637b22dbf9074c82430ce4c8deb8d5d9631933a1d49d5b4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkvnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9z7cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:00Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.423101 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.423152 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.423171 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.423196 4919 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.423239 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:00Z","lastTransitionTime":"2026-01-09T13:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.429080 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrkwv\" (UniqueName: \"kubernetes.io/projected/7a2e9878-6b0e-4328-a3ca-9f828fb105c9-kube-api-access-qrkwv\") pod \"network-metrics-daemon-xkhdz\" (UID: \"7a2e9878-6b0e-4328-a3ca-9f828fb105c9\") " pod="openshift-multus/network-metrics-daemon-xkhdz" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.434467 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-97zdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21befbc8-9e98-4557-89af-a116cc8c484c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e336d165af12149ba05542f8cebe8a16c6fd66d3317d27c880b8969e7666691d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-bi
nary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://458aeaca27fca436a56c5bb1f4180fc11cc5d27c53d33cd5b00f17e0e6f99322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458aeaca27fca436a56c5bb1f4180fc11cc5d27c53d33cd5b00f17e0e6f99322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3ea2572c59e697886d3666518ed78dcbe1bde1dd6fc6255e3846d1b34f26cdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ea2572c59e697886d3666518ed78dcbe1bde1dd6fc6255e3846d1b34f26cdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-97zdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:00Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.455370 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d6c98ae6aa3041e47bac4782d73902dc39137b2fc8550efce479bfb9cdf37b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:00Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.480614 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgw8v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3890ccf2af2b3f8c6648e7b36c716d7b9ef45d6821c5c7162c455641399184a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srz24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgw8v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:00Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.493445 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xkhdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a2e9878-6b0e-4328-a3ca-9f828fb105c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:31:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xkhdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:00Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:00 crc 
kubenswrapper[4919]: I0109 13:31:00.493754 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 13:31:00 crc kubenswrapper[4919]: E0109 13:31:00.493954 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:31:16.493924454 +0000 UTC m=+56.041763914 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.494042 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 13:31:00 crc kubenswrapper[4919]: E0109 13:31:00.494151 4919 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 09 13:31:00 crc kubenswrapper[4919]: E0109 13:31:00.494268 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-09 13:31:16.494247342 +0000 UTC m=+56.042086802 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 09 13:31:00 crc kubenswrapper[4919]: E0109 13:31:00.494320 4919 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 09 13:31:00 crc kubenswrapper[4919]: E0109 13:31:00.494375 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-09 13:31:16.494364994 +0000 UTC m=+56.042204454 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.494178 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.515300 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a055b487-5d63-4265-ac12-735612354e73\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903fbe8270e684937c51c54aeaaaaf7cfed5e77f6d76d5874210ffcf608a9afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3759051eb43a6806b9525498f376abc060e88ba0d7bc9274e84eaabfdaefd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir
\\\"}]},{\\\"containerID\\\":\\\"cri-o://8db1dada305e10f10cb4fe92cfaa3fc88e2e2ec3338fa31efc240af0888f60c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b3f68aa173942f9d95a3832234e0bbd5ccacf63cd73b8d110708ebd0978a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d7857918228a0905bb39cd161f6ba21561cf49ce7980a1835e5b3cc68cef3c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T13:30:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0109 13:30:33.398307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 13:30:33.400005 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4019686595/tls.crt::/tmp/serving-cert-4019686595/tls.key\\\\\\\"\\\\nI0109 13:30:43.320927 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 13:30:43.328283 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 13:30:43.328361 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 13:30:43.328421 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 13:30:43.328435 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 13:30:43.338653 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 13:30:43.338748 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 13:30:43.338794 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 13:30:43.338801 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 13:30:43.338808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 13:30:43.339373 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 
13:30:43.343716 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2d0c02e83be85aa5925b8f607809b6bbf3db3188f4e165ca42eb04fe694da6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:00Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.525513 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.525538 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.525548 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.525564 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.525574 4919 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:00Z","lastTransitionTime":"2026-01-09T13:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.537679 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c494f87fb3dc8455ed5de9b4073f1b69d9d96c3b2a4d260cb5d57b9df1e825e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:00Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.549720 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6730e2d4c2375d62687a3919410fd7440dc63ec931c008941ab1882e625c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b529d70339efbd46e30e992ceb5afe2db97bdde02f50798c9eb61dfa23d7f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9m5lv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:00Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.571612 4919 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-9bzs4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd1e83cb-5a48-4331-b403-d7a07e8aa67f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://389ab6b02eef188a2713e27d93c70a64a1ab4ebcfe58634cf5266197b0375bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kd6n5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bzs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:00Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.582684 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9s49l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a361336-2125-49a9-8332-eb66286dcdb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-slm6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-slm6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9s49l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:00Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.596424 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.596488 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 13:31:00 crc kubenswrapper[4919]: E0109 13:31:00.596642 4919 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 09 13:31:00 crc kubenswrapper[4919]: E0109 13:31:00.596694 4919 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 09 13:31:00 crc kubenswrapper[4919]: E0109 13:31:00.596697 4919 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 09 13:31:00 crc kubenswrapper[4919]: E0109 13:31:00.596750 4919 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 09 13:31:00 crc kubenswrapper[4919]: E0109 13:31:00.596764 4919 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 13:31:00 crc kubenswrapper[4919]: E0109 13:31:00.596711 4919 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 13:31:00 crc kubenswrapper[4919]: E0109 13:31:00.596831 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-09 13:31:16.596811827 +0000 UTC m=+56.144651277 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 13:31:00 crc kubenswrapper[4919]: E0109 13:31:00.596893 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-01-09 13:31:16.596885099 +0000 UTC m=+56.144724539 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.599432 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa212ada-bfb5-4aeb-94bb-acf1f0afe319\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a58a444435270a569f4e25649cbfb81bb34700db968dcba7d85b9fdea9006bc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://210c615c5bcdaca954329d575b66f271921f54cd58ff6d75a90d5fc64bd11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dbb1933e5d51eec9dc5963fca537d46986cb88929f55683de676da7f0636133\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-c
rc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6694df4fa2976e9e741fe48d2f16c0b63b29ba80500e4238fc0ff6d6dad53bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:00Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.614246 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:00Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.628110 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.628172 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.628190 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.628237 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.628253 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:00Z","lastTransitionTime":"2026-01-09T13:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.628718 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f66189f627ddedba6b2bcb45f145ef53485a9b0b7401ca5fa09363145fce81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c247b3e6a38c79833e097898da1fe92f3dabc7dcd7b71a618506f054f4fd9c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:00Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.652535 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a11a9b6-2419-4f04-b35e-ba296d70b705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f97e00dbff0bc7856400366616b2dfea1022f2dc151e8f1d7cbc5d377639b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eecea89afe25c537de66df075217db267c79f45aecda1f91183fad854aa80c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec868806819a987b7e96dbe674677a084fd3c8bbb3bdae6dd584d62df43b78f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac368c1c0799b3fe1830e01a310247ed45949ab217cf83d98b487a12e482157e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d6b21d85be97a07f83c996deca344aba3e2bd0b249d1ce02f15db105649bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15885cccea72f90cf3317dca9a65041611aa4b8de069779e15d43ad39112f1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6084dd2d1f52091583766f6a3a12a9852a93e945dc92c0c76c5132192e182b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e66a4c2e4be4b07e8aeb532d126abdb54dd09601f18951bc49d72e862002564c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T13:30:56Z\\\",\\\"message\\\":\\\"s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0109 13:30:55.459271 6269 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 13:30:55.459309 6269 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 13:30:55.459337 6269 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0109 13:30:55.459391 6269 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0109 13:30:55.459439 6269 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0109 13:30:55.459481 6269 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0109 13:30:55.459505 6269 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0109 13:30:55.459917 6269 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 13:30:55.460045 6269 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 13:30:55.459456 6269 factory.go:656] Stopping watch factory\\\\nI0109 13:30:55.460370 6269 reflector.go:311] Stopping reflector *v1.Pod (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6084dd2d1f52091583766f6a3a12a9852a93e945dc92c0c76c5132192e182b19\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T13:30:59Z\\\",\\\"message\\\":\\\"terIP:10.217.4.161,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.161],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0109 13:30:58.622928 6406 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:58Z is after 
2025-08\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1618a6491a924e7cd28340bea333585a7c7a634ca063183f21574ed23df002cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w74hl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:00Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.669605 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:00Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.684806 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d6c98ae6aa3041e47bac4782d73902dc39137b2fc8550efce479bfb9cdf37b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:00Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.702955 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:00Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.717169 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9z7cc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1115c0ba-16d5-4e81-a4b4-07ba7f360825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6522bc54695ade8637b22dbf9074c82430ce4c8deb8d5d9631933a1d49d5b4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkvnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9z7cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:00Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.731419 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.731496 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.731510 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.731532 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.731545 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:00Z","lastTransitionTime":"2026-01-09T13:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.741146 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-97zdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21befbc8-9e98-4557-89af-a116cc8c484c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e336d165af12149ba05542f8cebe8a16c6fd66d3317d27c880b8969e7666691d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://660f6644
ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://458aeaca27fca436a56c5bb1f4180fc11cc5d27c53d33cd5b00f17e0e6f99322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458aeaca27fca436a56c5bb1f4180fc11cc5d27c53d33cd5b00f17e0e6f99322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3ea2572c59e697886d3666518ed78dcbe1bde1dd6fc6255e3846d1b34f26cdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ea2572c59e697886d3666518ed78dcbe1bde1dd6fc6255e3846d1b34f26cdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-97zdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:00Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.751485 4919 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 13:31:00 crc kubenswrapper[4919]: E0109 13:31:00.751681 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.771237 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:00Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.796398 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-97zdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21befbc8-9e98-4557-89af-a116cc8c484c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e336d165af12149ba05542f8cebe8a16c6fd66d3317d27c880b8969e7666691d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://458aeaca27fca436a56c5bb1f4180fc11cc5d27c53d33cd5b00f17e0e6f99322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458aeaca27fca436a56c5bb1f4180fc11cc5d27c53d33cd5b00f17e0e6f99322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3ea2572c59e697886d3666518ed78dcbe1bde1dd6fc6255e3846d1b34f26cdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ea2572c59e697886d3666518ed78dcbe1bde1dd6fc6255e3846d1b34f26cdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-97zdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:00Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.825066 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d6c98ae6aa3041e47bac4782d73902dc39137b2fc8550efce479bfb9cdf37b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:00Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.836313 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.836374 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.836391 4919 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.836418 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.836441 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:00Z","lastTransitionTime":"2026-01-09T13:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.846970 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:00Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.866814 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9z7cc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1115c0ba-16d5-4e81-a4b4-07ba7f360825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6522bc54695ade8637b22dbf9074c82430ce4c8deb8d5d9631933a1d49d5b4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkvnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9z7cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:00Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.884660 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c494f87fb3dc8455ed5de9b4073f1b69d9d96c3b2a4d260cb5d57b9df1e825e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:00Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.900664 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6730e2d4c2375d62687a3919410fd7440dc63ec931c008941ab1882e625c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b529d70339efbd46e30e992ceb5afe2db97bdde02f50798c9eb61dfa23d7f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9m5lv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:00Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.901550 4919 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7a2e9878-6b0e-4328-a3ca-9f828fb105c9-metrics-certs\") pod \"network-metrics-daemon-xkhdz\" (UID: \"7a2e9878-6b0e-4328-a3ca-9f828fb105c9\") " pod="openshift-multus/network-metrics-daemon-xkhdz" Jan 09 13:31:00 crc kubenswrapper[4919]: E0109 13:31:00.901934 4919 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 09 13:31:00 crc kubenswrapper[4919]: E0109 13:31:00.902051 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7a2e9878-6b0e-4328-a3ca-9f828fb105c9-metrics-certs podName:7a2e9878-6b0e-4328-a3ca-9f828fb105c9 nodeName:}" failed. No retries permitted until 2026-01-09 13:31:01.902024986 +0000 UTC m=+41.449864446 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7a2e9878-6b0e-4328-a3ca-9f828fb105c9-metrics-certs") pod "network-metrics-daemon-xkhdz" (UID: "7a2e9878-6b0e-4328-a3ca-9f828fb105c9") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.919572 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgw8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3890ccf2af2b3f8c6648e7b36c716d7b9ef45d6821c5c7162c455641399184a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/n
etns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srz24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgw8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:00Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.937148 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xkhdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a2e9878-6b0e-4328-a3ca-9f828fb105c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:31:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xkhdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:00Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.939087 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.939152 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.939174 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.939201 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.939245 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:00Z","lastTransitionTime":"2026-01-09T13:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.958809 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a055b487-5d63-4265-ac12-735612354e73\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903fbe8270e684937c51c54aeaaaaf7cfed5e77f6d76d5874210ffcf608a9afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3759051eb43a6806b9525498f376abc060e88ba0d7bc9274e84eaabfdaefd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8db1dada305e10f10cb4fe92cfaa3fc88e2e2ec3338fa31efc240af0888f60c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b3f68aa173942f9d95a3832234e0bbd5ccacf63cd73b8d110708ebd0978a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d7857918228a0905bb39cd161f6ba21561cf49ce7980a1835e5b3cc68cef3c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T13:30:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0109 13:30:33.398307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 13:30:33.400005 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4019686595/tls.crt::/tmp/serving-cert-4019686595/tls.key\\\\\\\"\\\\nI0109 13:30:43.320927 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 13:30:43.328283 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 13:30:43.328361 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 13:30:43.328421 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 13:30:43.328435 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 13:30:43.338653 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 13:30:43.338748 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 13:30:43.338794 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 13:30:43.338801 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 13:30:43.338808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 13:30:43.339373 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 13:30:43.343716 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2d0c02e83be85aa5925b8f607809b6bbf3db3188f4e165ca42eb04fe694da6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:00Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:00 crc kubenswrapper[4919]: I0109 13:31:00.978873 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f66189f627ddedba6b2bcb45f145ef53485a9b0b7401ca5fa09363145fce81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c247b3e6a38c79833e097898da1fe92f3dabc7dcd7b71a618506f054f4fd9c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:00Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.008650 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a11a9b6-2419-4f04-b35e-ba296d70b705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f97e00dbff0bc7856400366616b2dfea1022f2dc151e8f1d7cbc5d377639b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eecea89afe25c537de66df075217db267c79f45aecda1f91183fad854aa80c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec868806819a987b7e96dbe674677a084fd3c8bbb3bdae6dd584d62df43b78f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac368c1c0799b3fe1830e01a310247ed45949ab217cf83d98b487a12e482157e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d6b21d85be97a07f83c996deca344aba3e2bd0b249d1ce02f15db105649bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15885cccea72f90cf3317dca9a65041611aa4b8de069779e15d43ad39112f1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6084dd2d1f52091583766f6a3a12a9852a93e945dc92c0c76c5132192e182b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e66a4c2e4be4b07e8aeb532d126abdb54dd09601f18951bc49d72e862002564c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T13:30:56Z\\\",\\\"message\\\":\\\"s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0109 13:30:55.459271 6269 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 13:30:55.459309 6269 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 13:30:55.459337 6269 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0109 13:30:55.459391 6269 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0109 13:30:55.459439 6269 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0109 13:30:55.459481 6269 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0109 13:30:55.459505 6269 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0109 13:30:55.459917 6269 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 13:30:55.460045 6269 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 13:30:55.459456 6269 factory.go:656] Stopping watch factory\\\\nI0109 13:30:55.460370 6269 reflector.go:311] Stopping reflector *v1.Pod (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6084dd2d1f52091583766f6a3a12a9852a93e945dc92c0c76c5132192e182b19\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T13:30:59Z\\\",\\\"message\\\":\\\"terIP:10.217.4.161,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.161],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0109 13:30:58.622928 6406 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:58Z is after 
2025-08\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1618a6491a924e7cd28340bea333585a7c7a634ca063183f21574ed23df002cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w74hl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:01Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.028619 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bzs4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd1e83cb-5a48-4331-b403-d7a07e8aa67f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://389ab6b02eef188a2713e27d93c70a64a1ab4ebcfe58634cf5266197b0375bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kd6n5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.1
68.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bzs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:01Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.044254 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.044313 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.044336 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.044366 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.044386 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:01Z","lastTransitionTime":"2026-01-09T13:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.046338 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9s49l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a361336-2125-49a9-8332-eb66286dcdb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-slm6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-slm6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9s49l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:01Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.066822 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa212ada-bfb5-4aeb-94bb-acf1f0afe319\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a58a444435270a569f4e25649cbfb81bb34700db968dcba7d85b9fdea9006bc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://210c615c5bcdaca954329d575b66f271921f54cd58ff6d75a90d5fc64bd11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dbb1933e5d51eec9dc5963fca537d46986cb88929f55683de676da7f0636133\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6694df4fa2976e9e741fe48d2f16c0b63b29ba80500e4238fc0ff6d6dad53bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:01Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.088712 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:01Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.114617 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9s49l" event={"ID":"7a361336-2125-49a9-8332-eb66286dcdb2","Type":"ContainerStarted","Data":"6cd519645b9635f304f7af4e5e832eff6ae2964b35ed15d918bae7b85b51c1de"} Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.115064 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9s49l" event={"ID":"7a361336-2125-49a9-8332-eb66286dcdb2","Type":"ContainerStarted","Data":"108cc929d3e1674b5cc9341c92e9d4f5142fc0d87212666efba8890341e8adc1"} Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.117067 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w74hl_4a11a9b6-2419-4f04-b35e-ba296d70b705/ovnkube-controller/1.log" Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.130383 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bzs4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd1e83cb-5a48-4331-b403-d7a07e8aa67f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://389ab6b02eef188a2713e27d93c70a64a1ab4ebcfe58634cf5266197b0375bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kd6n5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bzs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:01Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.148771 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.148855 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.148881 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.148917 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.148946 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:01Z","lastTransitionTime":"2026-01-09T13:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.153381 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9s49l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a361336-2125-49a9-8332-eb66286dcdb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://108cc929d3e1674b5cc9341c92e9d4f5142fc0d87212666efba8890341e8adc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-slm6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd519645b9635f304f7af4e5e832eff6ae2964b35ed15d918bae7b85b51c1de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-slm6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:59Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9s49l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:01Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.171840 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa212ada-bfb5-4aeb-94bb-acf1f0afe319\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a58a444435270a569f4e25649cbfb81bb34700db968dcba7d85b9fdea9006bc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://210c615c5bcdaca954329d575b66f271921f54cd58ff6d75a90d5fc64bd11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dbb1933e5d51eec9dc5963fca537d46986cb88929f55683de676da7f0636133\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6694df4fa2976e9e741fe48d2f16c0b63b29ba80500e4238fc0ff6d6dad53bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:01Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.191847 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:01Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.214420 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f66189f627ddedba6b2bcb45f145ef53485a9b0b7401ca5fa09363145fce81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c247b3e6a38c79833e097898da1fe92f3dabc7dcd7b71a618506f054f4fd9c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:01Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.240785 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a11a9b6-2419-4f04-b35e-ba296d70b705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f97e00dbff0bc7856400366616b2dfea1022f2dc151e8f1d7cbc5d377639b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eecea89afe25c537de66df075217db267c79f45aecda1f91183fad854aa80c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec868806819a987b7e96dbe674677a084fd3c8bbb3bdae6dd584d62df43b78f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac368c1c0799b3fe1830e01a310247ed45949ab217cf83d98b487a12e482157e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d6b21d85be97a07f83c996deca344aba3e2bd0b249d1ce02f15db105649bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15885cccea72f90cf3317dca9a65041611aa4b8de069779e15d43ad39112f1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6084dd2d1f52091583766f6a3a12a9852a93e945
dc92c0c76c5132192e182b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e66a4c2e4be4b07e8aeb532d126abdb54dd09601f18951bc49d72e862002564c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T13:30:56Z\\\",\\\"message\\\":\\\"s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0109 13:30:55.459271 6269 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 13:30:55.459309 6269 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 13:30:55.459337 6269 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0109 13:30:55.459391 6269 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0109 13:30:55.459439 6269 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0109 13:30:55.459481 6269 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0109 13:30:55.459505 6269 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0109 13:30:55.459917 6269 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 13:30:55.460045 6269 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 13:30:55.459456 6269 factory.go:656] Stopping watch factory\\\\nI0109 13:30:55.460370 6269 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6084dd2d1f52091583766f6a3a12a9852a93e945dc92c0c76c5132192e182b19\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T13:30:59Z\\\",\\\"message\\\":\\\"terIP:10.217.4.161,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.161],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0109 13:30:58.622928 6406 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:58Z is after 2025-08\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1618a6491a924e7cd28340bea333585a7c7a634ca063183f21574ed23df002cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\
\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w74hl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:01Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.252431 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.252477 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.252490 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.252511 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.252526 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:01Z","lastTransitionTime":"2026-01-09T13:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.262180 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:01Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.280512 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d6c98ae6aa3041e47bac4782d73902dc39137b2fc8550efce479bfb9cdf37b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:01Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.299852 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:01Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.315517 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9z7cc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1115c0ba-16d5-4e81-a4b4-07ba7f360825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6522bc54695ade8637b22dbf9074c82430ce4c8deb8d5d9631933a1d49d5b4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkvnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9z7cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:01Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.337672 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-97zdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21befbc8-9e98-4557-89af-a116cc8c484c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e336d165af12149ba05542f8cebe8a16c6fd66d3317d27c880b8969e7666691d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://458aeaca27fca436a56c5bb1f4180fc11cc5d27c53d33cd5b00f17e0e6f99322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458aeaca27fca436a56c5bb1f4180fc11cc5d27c53d33cd5b00f17e0e6f99322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3ea2572c59e697886d3666518ed78dcbe1bde1dd6fc6255e3846d1b34f26cdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ea2572c59e697886d3666518ed78dcbe1bde1dd6fc6255e3846d1b34f26cdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-97zdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:01Z is after 
2025-08-24T17:21:41Z" Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.357490 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.357626 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.357664 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.357702 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.357723 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:01Z","lastTransitionTime":"2026-01-09T13:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.366901 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgw8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3890ccf2af2b3f8c6648e7b36c716d7b9ef45d6821c5c7162c455641399184a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\
\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srz24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgw8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:01Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.386187 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xkhdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a2e9878-6b0e-4328-a3ca-9f828fb105c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:31:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xkhdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:01Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.403301 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a055b487-5d63-4265-ac12-735612354e73\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903fbe8270e684937c51c54aeaaaaf7cfed5e77f6d76d5874210ffcf608a9afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3759051eb43a6806b9525498f376abc060e88ba0d7bc9274e84eaabfdaefd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8db1dada305e10f10cb4fe92cfaa3fc88e2e2ec3338fa31efc240af0888f60c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b3f68aa173942f9d95a3832234e0bbd5ccacf63cd73b8d110708ebd0978a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d7857918228a0905bb39cd161f6ba21561cf49ce7980a1835e5b3cc68cef3c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T13:30:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0109 13:30:33.398307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 13:30:33.400005 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4019686595/tls.crt::/tmp/serving-cert-4019686595/tls.key\\\\\\\"\\\\nI0109 13:30:43.320927 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 13:30:43.328283 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 13:30:43.328361 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 13:30:43.328421 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 13:30:43.328435 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 13:30:43.338653 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 13:30:43.338748 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 13:30:43.338794 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 13:30:43.338801 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 13:30:43.338808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 13:30:43.339373 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 13:30:43.343716 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2d0c02e83be85aa5925b8f607809b6bbf3db3188f4e165ca42eb04fe694da6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:01Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.419527 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c494f87fb3dc8455ed5de9b4073f1b69d9d96c3b2a4d260cb5d57b9df1e825e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:01Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.438366 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6730e2d4c2375d62687a3919410fd7440dc63ec931c008941ab1882e625c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b529d70339efbd46e30e992ceb5afe2db97bdde02f50798c9eb61dfa23d7f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9m5lv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:01Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.462475 4919 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.462755 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.462775 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.462806 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.462829 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:01Z","lastTransitionTime":"2026-01-09T13:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.566739 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.566786 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.566799 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.566816 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.566827 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:01Z","lastTransitionTime":"2026-01-09T13:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.670465 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.670544 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.670860 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.670894 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.670905 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:01Z","lastTransitionTime":"2026-01-09T13:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.751731 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.751831 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 09 13:31:01 crc kubenswrapper[4919]: E0109 13:31:01.751892 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 09 13:31:01 crc kubenswrapper[4919]: E0109 13:31:01.752032 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.751750 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xkhdz"
Jan 09 13:31:01 crc kubenswrapper[4919]: E0109 13:31:01.752177 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xkhdz" podUID="7a2e9878-6b0e-4328-a3ca-9f828fb105c9"
Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.774567 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.774603 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.774647 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.774671 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.774686 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:01Z","lastTransitionTime":"2026-01-09T13:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
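The repeated setters.go:603 entries all record the same Ready=False condition object. Below is a dependency-free sketch of that shape; the real type is NodeCondition from k8s.io/api/core/v1, and the field names and values are taken from the JSON in the entries themselves:

```go
// Sketch of the condition object logged by "Node became not ready".
// Plain structs are used here to stay self-contained; kubelet uses
// k8s.io/api/core/v1.NodeCondition with the same JSON field names.
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

type nodeCondition struct {
	Type               string    `json:"type"`
	Status             string    `json:"status"`
	LastHeartbeatTime  time.Time `json:"lastHeartbeatTime"`
	LastTransitionTime time.Time `json:"lastTransitionTime"`
	Reason             string    `json:"reason"`
	Message            string    `json:"message"`
}

func main() {
	ts := time.Date(2026, 1, 9, 13, 31, 1, 0, time.UTC)
	c := nodeCondition{
		Type:               "Ready",
		Status:             "False",
		LastHeartbeatTime:  ts,
		LastTransitionTime: ts,
		Reason:             "KubeletNotReady",
		Message: "container runtime network not ready: NetworkReady=false " +
			"reason:NetworkPluginNotReady message:Network plugin returns error: " +
			"no CNI configuration file in /etc/kubernetes/cni/net.d/. " +
			"Has your network provider started?",
	}
	b, _ := json.Marshal(c)
	fmt.Println(string(b)) // same structure as condition={...} in the entries above
}
```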
Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.877999 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.878045 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.878055 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.878123 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.878137 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:01Z","lastTransitionTime":"2026-01-09T13:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.912457 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7a2e9878-6b0e-4328-a3ca-9f828fb105c9-metrics-certs\") pod \"network-metrics-daemon-xkhdz\" (UID: \"7a2e9878-6b0e-4328-a3ca-9f828fb105c9\") " pod="openshift-multus/network-metrics-daemon-xkhdz"
Jan 09 13:31:01 crc kubenswrapper[4919]: E0109 13:31:01.912695 4919 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 09 13:31:01 crc kubenswrapper[4919]: E0109 13:31:01.912791 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7a2e9878-6b0e-4328-a3ca-9f828fb105c9-metrics-certs podName:7a2e9878-6b0e-4328-a3ca-9f828fb105c9 nodeName:}" failed. No retries permitted until 2026-01-09 13:31:03.91276844 +0000 UTC m=+43.460607890 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7a2e9878-6b0e-4328-a3ca-9f828fb105c9-metrics-certs") pod "network-metrics-daemon-xkhdz" (UID: "7a2e9878-6b0e-4328-a3ca-9f828fb105c9") : object "openshift-multus"/"metrics-daemon-secret" not registered
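The nestedpendingoperations entry above gates the next mount attempt behind durationBeforeRetry. A sketch of that exponential backoff pattern follows; the 500ms starting delay and 2m cap are illustrative assumptions, not kubelet's exact tuning, though three consecutive failures do land on the 2s delay seen in the log:

```go
// Sketch of the retry gating visible in the nestedpendingoperations entry:
// after a failed MountVolume, the next attempt is only permitted once
// durationBeforeRetry has elapsed, and the delay grows on repeated failures.
package main

import (
	"fmt"
	"time"
)

type retryGate struct {
	delay     time.Duration
	notBefore time.Time
}

func (g *retryGate) recordFailure(now time.Time) {
	if g.delay == 0 {
		g.delay = 500 * time.Millisecond // assumed starting delay
	} else {
		g.delay *= 2 // exponential growth, capped below
		if max := 2 * time.Minute; g.delay > max {
			g.delay = max
		}
	}
	g.notBefore = now.Add(g.delay)
}

func (g *retryGate) mayRetry(now time.Time) bool { return !now.Before(g.notBefore) }

func main() {
	var g retryGate
	now := time.Now()
	g.recordFailure(now) // e.g. secret "not registered"
	g.recordFailure(now) // delay doubles on each repeat
	g.recordFailure(now) // third failure: 500ms -> 1s -> 2s, matching the log
	fmt.Printf("no retries permitted until %s (durationBeforeRetry %s), mayRetry=%v\n",
		g.notBefore.Format(time.RFC3339), g.delay, g.mayRetry(now))
}
```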
Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.981535 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.981623 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.981641 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.981665 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:31:01 crc kubenswrapper[4919]: I0109 13:31:01.981685 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:01Z","lastTransitionTime":"2026-01-09T13:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:31:02 crc kubenswrapper[4919]: I0109 13:31:02.087538 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:31:02 crc kubenswrapper[4919]: I0109 13:31:02.087609 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:31:02 crc kubenswrapper[4919]: I0109 13:31:02.087634 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:31:02 crc kubenswrapper[4919]: I0109 13:31:02.087670 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:31:02 crc kubenswrapper[4919]: I0109 13:31:02.087694 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:02Z","lastTransitionTime":"2026-01-09T13:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
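Every failed status patch in this log dies on the same x509 validity check: the webhook serving certificate expired on 2025-08-24T17:21:41Z, long before the node's current clock of 2026-01-09. A standalone sketch of that check with crypto/x509; the certificate path is hypothetical:

```go
// Sketch of the validity check that fails in the webhook calls above:
// "certificate has expired" means time.Now() is after the cert's NotAfter.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	pemBytes, err := os.ReadFile("/path/to/webhook-serving-cert.pem") // hypothetical path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	now := time.Now()
	switch {
	case now.After(cert.NotAfter):
		// Mirrors the log: "current time 2026-01-09T13:31:01Z is after 2025-08-24T17:21:41Z".
		fmt.Printf("certificate expired: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	case now.Before(cert.NotBefore):
		fmt.Println("certificate is not yet valid")
	default:
		fmt.Println("certificate is within its validity window")
	}
}
```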
Has your network provider started?"} Jan 09 13:31:02 crc kubenswrapper[4919]: I0109 13:31:02.191436 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:02 crc kubenswrapper[4919]: I0109 13:31:02.191506 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:02 crc kubenswrapper[4919]: I0109 13:31:02.191525 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:02 crc kubenswrapper[4919]: I0109 13:31:02.191557 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:02 crc kubenswrapper[4919]: I0109 13:31:02.191578 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:02Z","lastTransitionTime":"2026-01-09T13:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:02 crc kubenswrapper[4919]: I0109 13:31:02.294725 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:02 crc kubenswrapper[4919]: I0109 13:31:02.294800 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:02 crc kubenswrapper[4919]: I0109 13:31:02.294820 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:02 crc kubenswrapper[4919]: I0109 13:31:02.294849 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:02 crc kubenswrapper[4919]: I0109 13:31:02.294868 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:02Z","lastTransitionTime":"2026-01-09T13:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:02 crc kubenswrapper[4919]: I0109 13:31:02.398745 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:02 crc kubenswrapper[4919]: I0109 13:31:02.398817 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:02 crc kubenswrapper[4919]: I0109 13:31:02.398842 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:02 crc kubenswrapper[4919]: I0109 13:31:02.398880 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:02 crc kubenswrapper[4919]: I0109 13:31:02.398904 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:02Z","lastTransitionTime":"2026-01-09T13:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:02 crc kubenswrapper[4919]: I0109 13:31:02.502751 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:02 crc kubenswrapper[4919]: I0109 13:31:02.502818 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:02 crc kubenswrapper[4919]: I0109 13:31:02.502837 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:02 crc kubenswrapper[4919]: I0109 13:31:02.502869 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:02 crc kubenswrapper[4919]: I0109 13:31:02.502893 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:02Z","lastTransitionTime":"2026-01-09T13:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:02 crc kubenswrapper[4919]: I0109 13:31:02.606479 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:02 crc kubenswrapper[4919]: I0109 13:31:02.606554 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:02 crc kubenswrapper[4919]: I0109 13:31:02.606575 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:02 crc kubenswrapper[4919]: I0109 13:31:02.606604 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:02 crc kubenswrapper[4919]: I0109 13:31:02.606623 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:02Z","lastTransitionTime":"2026-01-09T13:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:02 crc kubenswrapper[4919]: I0109 13:31:02.710755 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:02 crc kubenswrapper[4919]: I0109 13:31:02.710807 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:02 crc kubenswrapper[4919]: I0109 13:31:02.710820 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:02 crc kubenswrapper[4919]: I0109 13:31:02.710840 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:02 crc kubenswrapper[4919]: I0109 13:31:02.710854 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:02Z","lastTransitionTime":"2026-01-09T13:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:02 crc kubenswrapper[4919]: I0109 13:31:02.751640 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 13:31:02 crc kubenswrapper[4919]: E0109 13:31:02.751877 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 13:31:02 crc kubenswrapper[4919]: I0109 13:31:02.814439 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:02 crc kubenswrapper[4919]: I0109 13:31:02.814527 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:02 crc kubenswrapper[4919]: I0109 13:31:02.814546 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:02 crc kubenswrapper[4919]: I0109 13:31:02.814580 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:02 crc kubenswrapper[4919]: I0109 13:31:02.814600 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:02Z","lastTransitionTime":"2026-01-09T13:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:02 crc kubenswrapper[4919]: I0109 13:31:02.917904 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:02 crc kubenswrapper[4919]: I0109 13:31:02.917972 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:02 crc kubenswrapper[4919]: I0109 13:31:02.917993 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:02 crc kubenswrapper[4919]: I0109 13:31:02.918019 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:02 crc kubenswrapper[4919]: I0109 13:31:02.918037 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:02Z","lastTransitionTime":"2026-01-09T13:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:03 crc kubenswrapper[4919]: I0109 13:31:03.021972 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:03 crc kubenswrapper[4919]: I0109 13:31:03.022055 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:03 crc kubenswrapper[4919]: I0109 13:31:03.022080 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:03 crc kubenswrapper[4919]: I0109 13:31:03.022120 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:03 crc kubenswrapper[4919]: I0109 13:31:03.022147 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:03Z","lastTransitionTime":"2026-01-09T13:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:03 crc kubenswrapper[4919]: I0109 13:31:03.126523 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:03 crc kubenswrapper[4919]: I0109 13:31:03.126610 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:03 crc kubenswrapper[4919]: I0109 13:31:03.126640 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:03 crc kubenswrapper[4919]: I0109 13:31:03.126674 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:03 crc kubenswrapper[4919]: I0109 13:31:03.126698 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:03Z","lastTransitionTime":"2026-01-09T13:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:03 crc kubenswrapper[4919]: I0109 13:31:03.230954 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:03 crc kubenswrapper[4919]: I0109 13:31:03.231039 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:03 crc kubenswrapper[4919]: I0109 13:31:03.231067 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:03 crc kubenswrapper[4919]: I0109 13:31:03.231108 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:03 crc kubenswrapper[4919]: I0109 13:31:03.231135 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:03Z","lastTransitionTime":"2026-01-09T13:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:03 crc kubenswrapper[4919]: I0109 13:31:03.333968 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:03 crc kubenswrapper[4919]: I0109 13:31:03.334385 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:03 crc kubenswrapper[4919]: I0109 13:31:03.334655 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:03 crc kubenswrapper[4919]: I0109 13:31:03.334904 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:03 crc kubenswrapper[4919]: I0109 13:31:03.335087 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:03Z","lastTransitionTime":"2026-01-09T13:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:03 crc kubenswrapper[4919]: I0109 13:31:03.438321 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:03 crc kubenswrapper[4919]: I0109 13:31:03.438394 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:03 crc kubenswrapper[4919]: I0109 13:31:03.438414 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:03 crc kubenswrapper[4919]: I0109 13:31:03.438458 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:03 crc kubenswrapper[4919]: I0109 13:31:03.438483 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:03Z","lastTransitionTime":"2026-01-09T13:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:03 crc kubenswrapper[4919]: I0109 13:31:03.542328 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:03 crc kubenswrapper[4919]: I0109 13:31:03.542395 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:03 crc kubenswrapper[4919]: I0109 13:31:03.542414 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:03 crc kubenswrapper[4919]: I0109 13:31:03.542443 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:03 crc kubenswrapper[4919]: I0109 13:31:03.542463 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:03Z","lastTransitionTime":"2026-01-09T13:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:03 crc kubenswrapper[4919]: I0109 13:31:03.646397 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:03 crc kubenswrapper[4919]: I0109 13:31:03.646851 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:03 crc kubenswrapper[4919]: I0109 13:31:03.647051 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:03 crc kubenswrapper[4919]: I0109 13:31:03.647259 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:03 crc kubenswrapper[4919]: I0109 13:31:03.647458 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:03Z","lastTransitionTime":"2026-01-09T13:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:03 crc kubenswrapper[4919]: I0109 13:31:03.758261 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 13:31:03 crc kubenswrapper[4919]: I0109 13:31:03.758413 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xkhdz" Jan 09 13:31:03 crc kubenswrapper[4919]: I0109 13:31:03.758314 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 13:31:03 crc kubenswrapper[4919]: I0109 13:31:03.758684 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:03 crc kubenswrapper[4919]: I0109 13:31:03.758771 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:03 crc kubenswrapper[4919]: I0109 13:31:03.758809 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:03 crc kubenswrapper[4919]: I0109 13:31:03.758844 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:03 crc kubenswrapper[4919]: I0109 13:31:03.758869 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:03Z","lastTransitionTime":"2026-01-09T13:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:03 crc kubenswrapper[4919]: E0109 13:31:03.758884 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 13:31:03 crc kubenswrapper[4919]: E0109 13:31:03.760020 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 13:31:03 crc kubenswrapper[4919]: E0109 13:31:03.760142 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xkhdz" podUID="7a2e9878-6b0e-4328-a3ca-9f828fb105c9" Jan 09 13:31:03 crc kubenswrapper[4919]: I0109 13:31:03.862911 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:03 crc kubenswrapper[4919]: I0109 13:31:03.862948 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:03 crc kubenswrapper[4919]: I0109 13:31:03.862960 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:03 crc kubenswrapper[4919]: I0109 13:31:03.862976 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:03 crc kubenswrapper[4919]: I0109 13:31:03.862988 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:03Z","lastTransitionTime":"2026-01-09T13:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:03 crc kubenswrapper[4919]: I0109 13:31:03.939608 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7a2e9878-6b0e-4328-a3ca-9f828fb105c9-metrics-certs\") pod \"network-metrics-daemon-xkhdz\" (UID: \"7a2e9878-6b0e-4328-a3ca-9f828fb105c9\") " pod="openshift-multus/network-metrics-daemon-xkhdz" Jan 09 13:31:03 crc kubenswrapper[4919]: E0109 13:31:03.939920 4919 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 09 13:31:03 crc kubenswrapper[4919]: E0109 13:31:03.941572 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7a2e9878-6b0e-4328-a3ca-9f828fb105c9-metrics-certs podName:7a2e9878-6b0e-4328-a3ca-9f828fb105c9 nodeName:}" failed. No retries permitted until 2026-01-09 13:31:07.941541511 +0000 UTC m=+47.489380961 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7a2e9878-6b0e-4328-a3ca-9f828fb105c9-metrics-certs") pod "network-metrics-daemon-xkhdz" (UID: "7a2e9878-6b0e-4328-a3ca-9f828fb105c9") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 09 13:31:03 crc kubenswrapper[4919]: I0109 13:31:03.966191 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:03 crc kubenswrapper[4919]: I0109 13:31:03.966542 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:03 crc kubenswrapper[4919]: I0109 13:31:03.966705 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:03 crc kubenswrapper[4919]: I0109 13:31:03.966842 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:03 crc kubenswrapper[4919]: I0109 13:31:03.966959 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:03Z","lastTransitionTime":"2026-01-09T13:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:04 crc kubenswrapper[4919]: I0109 13:31:04.070912 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:04 crc kubenswrapper[4919]: I0109 13:31:04.070989 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:04 crc kubenswrapper[4919]: I0109 13:31:04.071008 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:04 crc kubenswrapper[4919]: I0109 13:31:04.071041 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:04 crc kubenswrapper[4919]: I0109 13:31:04.071062 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:04Z","lastTransitionTime":"2026-01-09T13:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:04 crc kubenswrapper[4919]: I0109 13:31:04.174622 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:04 crc kubenswrapper[4919]: I0109 13:31:04.174691 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:04 crc kubenswrapper[4919]: I0109 13:31:04.174709 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:04 crc kubenswrapper[4919]: I0109 13:31:04.174744 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:04 crc kubenswrapper[4919]: I0109 13:31:04.174767 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:04Z","lastTransitionTime":"2026-01-09T13:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:04 crc kubenswrapper[4919]: I0109 13:31:04.278424 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:04 crc kubenswrapper[4919]: I0109 13:31:04.278503 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:04 crc kubenswrapper[4919]: I0109 13:31:04.278523 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:04 crc kubenswrapper[4919]: I0109 13:31:04.278557 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:04 crc kubenswrapper[4919]: I0109 13:31:04.278578 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:04Z","lastTransitionTime":"2026-01-09T13:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:04 crc kubenswrapper[4919]: I0109 13:31:04.381930 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:04 crc kubenswrapper[4919]: I0109 13:31:04.382027 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:04 crc kubenswrapper[4919]: I0109 13:31:04.382047 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:04 crc kubenswrapper[4919]: I0109 13:31:04.382083 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:04 crc kubenswrapper[4919]: I0109 13:31:04.382104 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:04Z","lastTransitionTime":"2026-01-09T13:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:04 crc kubenswrapper[4919]: I0109 13:31:04.485793 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:04 crc kubenswrapper[4919]: I0109 13:31:04.485865 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:04 crc kubenswrapper[4919]: I0109 13:31:04.485887 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:04 crc kubenswrapper[4919]: I0109 13:31:04.485916 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:04 crc kubenswrapper[4919]: I0109 13:31:04.485935 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:04Z","lastTransitionTime":"2026-01-09T13:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:04 crc kubenswrapper[4919]: I0109 13:31:04.589354 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:04 crc kubenswrapper[4919]: I0109 13:31:04.589428 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:04 crc kubenswrapper[4919]: I0109 13:31:04.589448 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:04 crc kubenswrapper[4919]: I0109 13:31:04.589476 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:04 crc kubenswrapper[4919]: I0109 13:31:04.589501 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:04Z","lastTransitionTime":"2026-01-09T13:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:04 crc kubenswrapper[4919]: I0109 13:31:04.692804 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:04 crc kubenswrapper[4919]: I0109 13:31:04.692891 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:04 crc kubenswrapper[4919]: I0109 13:31:04.692911 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:04 crc kubenswrapper[4919]: I0109 13:31:04.693002 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:04 crc kubenswrapper[4919]: I0109 13:31:04.693037 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:04Z","lastTransitionTime":"2026-01-09T13:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:04 crc kubenswrapper[4919]: I0109 13:31:04.752513 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 13:31:04 crc kubenswrapper[4919]: E0109 13:31:04.752717 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 13:31:04 crc kubenswrapper[4919]: I0109 13:31:04.796922 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:04 crc kubenswrapper[4919]: I0109 13:31:04.797003 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:04 crc kubenswrapper[4919]: I0109 13:31:04.797031 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:04 crc kubenswrapper[4919]: I0109 13:31:04.797065 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:04 crc kubenswrapper[4919]: I0109 13:31:04.797094 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:04Z","lastTransitionTime":"2026-01-09T13:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:04 crc kubenswrapper[4919]: I0109 13:31:04.900242 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:04 crc kubenswrapper[4919]: I0109 13:31:04.900309 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:04 crc kubenswrapper[4919]: I0109 13:31:04.900328 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:04 crc kubenswrapper[4919]: I0109 13:31:04.900353 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:04 crc kubenswrapper[4919]: I0109 13:31:04.900371 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:04Z","lastTransitionTime":"2026-01-09T13:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:05 crc kubenswrapper[4919]: I0109 13:31:05.003960 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:05 crc kubenswrapper[4919]: I0109 13:31:05.004015 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:05 crc kubenswrapper[4919]: I0109 13:31:05.004034 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:05 crc kubenswrapper[4919]: I0109 13:31:05.004058 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:05 crc kubenswrapper[4919]: I0109 13:31:05.004075 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:05Z","lastTransitionTime":"2026-01-09T13:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:05 crc kubenswrapper[4919]: I0109 13:31:05.107702 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:05 crc kubenswrapper[4919]: I0109 13:31:05.107745 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:05 crc kubenswrapper[4919]: I0109 13:31:05.107763 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:05 crc kubenswrapper[4919]: I0109 13:31:05.107788 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:05 crc kubenswrapper[4919]: I0109 13:31:05.107805 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:05Z","lastTransitionTime":"2026-01-09T13:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:05 crc kubenswrapper[4919]: I0109 13:31:05.211197 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:05 crc kubenswrapper[4919]: I0109 13:31:05.211702 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:05 crc kubenswrapper[4919]: I0109 13:31:05.211845 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:05 crc kubenswrapper[4919]: I0109 13:31:05.212032 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:05 crc kubenswrapper[4919]: I0109 13:31:05.212199 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:05Z","lastTransitionTime":"2026-01-09T13:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:05 crc kubenswrapper[4919]: I0109 13:31:05.316367 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:05 crc kubenswrapper[4919]: I0109 13:31:05.316458 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:05 crc kubenswrapper[4919]: I0109 13:31:05.316484 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:05 crc kubenswrapper[4919]: I0109 13:31:05.316518 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:05 crc kubenswrapper[4919]: I0109 13:31:05.316543 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:05Z","lastTransitionTime":"2026-01-09T13:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:05 crc kubenswrapper[4919]: I0109 13:31:05.421792 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:05 crc kubenswrapper[4919]: I0109 13:31:05.421855 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:05 crc kubenswrapper[4919]: I0109 13:31:05.421872 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:05 crc kubenswrapper[4919]: I0109 13:31:05.421902 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:05 crc kubenswrapper[4919]: I0109 13:31:05.421921 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:05Z","lastTransitionTime":"2026-01-09T13:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:05 crc kubenswrapper[4919]: I0109 13:31:05.525528 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:05 crc kubenswrapper[4919]: I0109 13:31:05.525604 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:05 crc kubenswrapper[4919]: I0109 13:31:05.525626 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:05 crc kubenswrapper[4919]: I0109 13:31:05.525658 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:05 crc kubenswrapper[4919]: I0109 13:31:05.525680 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:05Z","lastTransitionTime":"2026-01-09T13:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:05 crc kubenswrapper[4919]: I0109 13:31:05.629102 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:05 crc kubenswrapper[4919]: I0109 13:31:05.629186 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:05 crc kubenswrapper[4919]: I0109 13:31:05.629205 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:05 crc kubenswrapper[4919]: I0109 13:31:05.629267 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:05 crc kubenswrapper[4919]: I0109 13:31:05.629288 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:05Z","lastTransitionTime":"2026-01-09T13:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:05 crc kubenswrapper[4919]: I0109 13:31:05.733107 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:05 crc kubenswrapper[4919]: I0109 13:31:05.733160 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:05 crc kubenswrapper[4919]: I0109 13:31:05.733177 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:05 crc kubenswrapper[4919]: I0109 13:31:05.733201 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:05 crc kubenswrapper[4919]: I0109 13:31:05.733256 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:05Z","lastTransitionTime":"2026-01-09T13:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:05 crc kubenswrapper[4919]: I0109 13:31:05.751656 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 13:31:05 crc kubenswrapper[4919]: I0109 13:31:05.751694 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xkhdz" Jan 09 13:31:05 crc kubenswrapper[4919]: I0109 13:31:05.751793 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 13:31:05 crc kubenswrapper[4919]: E0109 13:31:05.751863 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 13:31:05 crc kubenswrapper[4919]: E0109 13:31:05.752007 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 13:31:05 crc kubenswrapper[4919]: E0109 13:31:05.752158 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xkhdz" podUID="7a2e9878-6b0e-4328-a3ca-9f828fb105c9" Jan 09 13:31:05 crc kubenswrapper[4919]: I0109 13:31:05.836748 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:05 crc kubenswrapper[4919]: I0109 13:31:05.836826 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:05 crc kubenswrapper[4919]: I0109 13:31:05.836851 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:05 crc kubenswrapper[4919]: I0109 13:31:05.836884 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:05 crc kubenswrapper[4919]: I0109 13:31:05.836912 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:05Z","lastTransitionTime":"2026-01-09T13:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:05 crc kubenswrapper[4919]: I0109 13:31:05.940669 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:05 crc kubenswrapper[4919]: I0109 13:31:05.940724 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:05 crc kubenswrapper[4919]: I0109 13:31:05.940741 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:05 crc kubenswrapper[4919]: I0109 13:31:05.940765 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:05 crc kubenswrapper[4919]: I0109 13:31:05.940778 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:05Z","lastTransitionTime":"2026-01-09T13:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:06 crc kubenswrapper[4919]: I0109 13:31:06.044758 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:06 crc kubenswrapper[4919]: I0109 13:31:06.044847 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:06 crc kubenswrapper[4919]: I0109 13:31:06.044872 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:06 crc kubenswrapper[4919]: I0109 13:31:06.044898 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:06 crc kubenswrapper[4919]: I0109 13:31:06.044918 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:06Z","lastTransitionTime":"2026-01-09T13:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:06 crc kubenswrapper[4919]: I0109 13:31:06.148942 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:06 crc kubenswrapper[4919]: I0109 13:31:06.149572 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:06 crc kubenswrapper[4919]: I0109 13:31:06.149736 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:06 crc kubenswrapper[4919]: I0109 13:31:06.149871 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:06 crc kubenswrapper[4919]: I0109 13:31:06.150005 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:06Z","lastTransitionTime":"2026-01-09T13:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:06 crc kubenswrapper[4919]: I0109 13:31:06.254396 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:06 crc kubenswrapper[4919]: I0109 13:31:06.254500 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:06 crc kubenswrapper[4919]: I0109 13:31:06.254527 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:06 crc kubenswrapper[4919]: I0109 13:31:06.254561 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:06 crc kubenswrapper[4919]: I0109 13:31:06.254586 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:06Z","lastTransitionTime":"2026-01-09T13:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:06 crc kubenswrapper[4919]: I0109 13:31:06.357917 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:06 crc kubenswrapper[4919]: I0109 13:31:06.357985 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:06 crc kubenswrapper[4919]: I0109 13:31:06.358010 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:06 crc kubenswrapper[4919]: I0109 13:31:06.358040 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:06 crc kubenswrapper[4919]: I0109 13:31:06.358061 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:06Z","lastTransitionTime":"2026-01-09T13:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:06 crc kubenswrapper[4919]: I0109 13:31:06.461659 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:06 crc kubenswrapper[4919]: I0109 13:31:06.461731 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:06 crc kubenswrapper[4919]: I0109 13:31:06.461743 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:06 crc kubenswrapper[4919]: I0109 13:31:06.461770 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:06 crc kubenswrapper[4919]: I0109 13:31:06.461788 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:06Z","lastTransitionTime":"2026-01-09T13:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:06 crc kubenswrapper[4919]: I0109 13:31:06.565291 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:06 crc kubenswrapper[4919]: I0109 13:31:06.565741 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:06 crc kubenswrapper[4919]: I0109 13:31:06.565901 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:06 crc kubenswrapper[4919]: I0109 13:31:06.566050 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:06 crc kubenswrapper[4919]: I0109 13:31:06.566190 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:06Z","lastTransitionTime":"2026-01-09T13:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:06 crc kubenswrapper[4919]: I0109 13:31:06.670403 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:06 crc kubenswrapper[4919]: I0109 13:31:06.670462 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:06 crc kubenswrapper[4919]: I0109 13:31:06.670479 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:06 crc kubenswrapper[4919]: I0109 13:31:06.670507 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:06 crc kubenswrapper[4919]: I0109 13:31:06.670524 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:06Z","lastTransitionTime":"2026-01-09T13:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:06 crc kubenswrapper[4919]: I0109 13:31:06.750973 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 13:31:06 crc kubenswrapper[4919]: E0109 13:31:06.751366 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 13:31:06 crc kubenswrapper[4919]: I0109 13:31:06.773997 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:06 crc kubenswrapper[4919]: I0109 13:31:06.774066 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:06 crc kubenswrapper[4919]: I0109 13:31:06.774087 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:06 crc kubenswrapper[4919]: I0109 13:31:06.774115 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:06 crc kubenswrapper[4919]: I0109 13:31:06.774139 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:06Z","lastTransitionTime":"2026-01-09T13:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:06 crc kubenswrapper[4919]: I0109 13:31:06.877986 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:06 crc kubenswrapper[4919]: I0109 13:31:06.878080 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:06 crc kubenswrapper[4919]: I0109 13:31:06.878104 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:06 crc kubenswrapper[4919]: I0109 13:31:06.878142 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:06 crc kubenswrapper[4919]: I0109 13:31:06.878169 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:06Z","lastTransitionTime":"2026-01-09T13:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:06 crc kubenswrapper[4919]: I0109 13:31:06.982101 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:06 crc kubenswrapper[4919]: I0109 13:31:06.982155 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:06 crc kubenswrapper[4919]: I0109 13:31:06.982173 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:06 crc kubenswrapper[4919]: I0109 13:31:06.982241 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:06 crc kubenswrapper[4919]: I0109 13:31:06.982262 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:06Z","lastTransitionTime":"2026-01-09T13:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.086333 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.087115 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.087274 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.087387 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.087521 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:07Z","lastTransitionTime":"2026-01-09T13:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.191653 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.192315 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.192711 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.193056 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.193255 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:07Z","lastTransitionTime":"2026-01-09T13:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.297446 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.297495 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.297513 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.297539 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.297556 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:07Z","lastTransitionTime":"2026-01-09T13:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.364866 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.365263 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.365356 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.365438 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.365511 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:07Z","lastTransitionTime":"2026-01-09T13:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:07 crc kubenswrapper[4919]: E0109 13:31:07.386045 4919 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a043b745-924a-464c-80aa-f4df877f55bf\\\",\\\"systemUUID\\\":\\\"4cea77be-9aeb-4181-a0b4-b60e5a362fd9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:07Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.392140 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.392198 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.392245 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.392273 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.392294 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:07Z","lastTransitionTime":"2026-01-09T13:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:07 crc kubenswrapper[4919]: E0109 13:31:07.411406 4919 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a043b745-924a-464c-80aa-f4df877f55bf\\\",\\\"systemUUID\\\":\\\"4cea77be-9aeb-4181-a0b4-b60e5a362fd9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:07Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.417136 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.417258 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.417290 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.417328 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.417356 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:07Z","lastTransitionTime":"2026-01-09T13:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:07 crc kubenswrapper[4919]: E0109 13:31:07.431150 4919 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a043b745-924a-464c-80aa-f4df877f55bf\\\",\\\"systemUUID\\\":\\\"4cea77be-9aeb-4181-a0b4-b60e5a362fd9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:07Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.436386 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.436535 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.436714 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.436888 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.436965 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:07Z","lastTransitionTime":"2026-01-09T13:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:07 crc kubenswrapper[4919]: E0109 13:31:07.458801 4919 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a043b745-924a-464c-80aa-f4df877f55bf\\\",\\\"systemUUID\\\":\\\"4cea77be-9aeb-4181-a0b4-b60e5a362fd9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:07Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.470843 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.471587 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.471616 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.471652 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.471674 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:07Z","lastTransitionTime":"2026-01-09T13:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:07 crc kubenswrapper[4919]: E0109 13:31:07.493758 4919 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a043b745-924a-464c-80aa-f4df877f55bf\\\",\\\"systemUUID\\\":\\\"4cea77be-9aeb-4181-a0b4-b60e5a362fd9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:07Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:07 crc kubenswrapper[4919]: E0109 13:31:07.493933 4919 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.497055 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
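The node-status patch failures above all share one root cause: the node-identity webhook's serving certificate expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-09. A minimal sketch to confirm the expiry from the node itself, assuming Python 3 and the third-party cryptography package are available; only the endpoint 127.0.0.1:9743 is taken from the log, everything else is illustrative:

# Sketch: fetch the webhook's serving certificate and print its validity window.
# Assumes: Python 3.7+, `pip install cryptography`, and that the webhook at
# 127.0.0.1:9743 (the address in the log above) is reachable from this shell.
import ssl
from cryptography import x509

# Grab the PEM without verifying it -- verification is exactly what fails in the log.
pem = ssl.get_server_certificate(("127.0.0.1", 9743))
cert = x509.load_pem_x509_certificate(pem.encode())
print("notBefore:", cert.not_valid_before)
print("notAfter: ", cert.not_valid_after)  # expect 2025-08-24T17:21:41Z per the log

Renewing the cluster's internal certificates (on CRC, typically by letting the cluster's cert-regeneration controllers rotate them after start) is what clears this class of error; the check above only confirms the diagnosis.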
event="NodeHasSufficientMemory" Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.497152 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.497176 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.497242 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.497271 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:07Z","lastTransitionTime":"2026-01-09T13:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.600813 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.600897 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.600921 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.600958 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.600979 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:07Z","lastTransitionTime":"2026-01-09T13:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.704520 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.704563 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.704576 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.704595 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.704608 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:07Z","lastTransitionTime":"2026-01-09T13:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.750934 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xkhdz" Jan 09 13:31:07 crc kubenswrapper[4919]: E0109 13:31:07.751096 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xkhdz" podUID="7a2e9878-6b0e-4328-a3ca-9f828fb105c9" Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.751401 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.751405 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 13:31:07 crc kubenswrapper[4919]: E0109 13:31:07.751565 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 13:31:07 crc kubenswrapper[4919]: E0109 13:31:07.751631 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.807947 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.808009 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.808023 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.808042 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.808056 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:07Z","lastTransitionTime":"2026-01-09T13:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.910670 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.910715 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.910726 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.910745 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.910757 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:07Z","lastTransitionTime":"2026-01-09T13:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:07 crc kubenswrapper[4919]: I0109 13:31:07.994002 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7a2e9878-6b0e-4328-a3ca-9f828fb105c9-metrics-certs\") pod \"network-metrics-daemon-xkhdz\" (UID: \"7a2e9878-6b0e-4328-a3ca-9f828fb105c9\") " pod="openshift-multus/network-metrics-daemon-xkhdz" Jan 09 13:31:07 crc kubenswrapper[4919]: E0109 13:31:07.994291 4919 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 09 13:31:07 crc kubenswrapper[4919]: E0109 13:31:07.994417 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7a2e9878-6b0e-4328-a3ca-9f828fb105c9-metrics-certs podName:7a2e9878-6b0e-4328-a3ca-9f828fb105c9 nodeName:}" failed. No retries permitted until 2026-01-09 13:31:15.994379785 +0000 UTC m=+55.542219275 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7a2e9878-6b0e-4328-a3ca-9f828fb105c9-metrics-certs") pod "network-metrics-daemon-xkhdz" (UID: "7a2e9878-6b0e-4328-a3ca-9f828fb105c9") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 09 13:31:08 crc kubenswrapper[4919]: I0109 13:31:08.013587 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:08 crc kubenswrapper[4919]: I0109 13:31:08.013910 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:08 crc kubenswrapper[4919]: I0109 13:31:08.014110 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:08 crc kubenswrapper[4919]: I0109 13:31:08.014452 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:08 crc kubenswrapper[4919]: I0109 13:31:08.014649 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:08Z","lastTransitionTime":"2026-01-09T13:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:08 crc kubenswrapper[4919]: I0109 13:31:08.118297 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:08 crc kubenswrapper[4919]: I0109 13:31:08.119299 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:08 crc kubenswrapper[4919]: I0109 13:31:08.119449 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:08 crc kubenswrapper[4919]: I0109 13:31:08.119617 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:08 crc kubenswrapper[4919]: I0109 13:31:08.119748 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:08Z","lastTransitionTime":"2026-01-09T13:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
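The MountVolume.SetUp failure above ("object \"openshift-multus\"/\"metrics-daemon-secret\" not registered") means the kubelet's volume manager has not yet observed that Secret, and the operation is retried with backoff (8s here). A sketch of how one might confirm the Secret exists once the API is reachable, assuming the official kubernetes Python client and a valid kubeconfig -- both assumptions, neither shown in the log:

# Sketch: check whether the Secret the kubelet is waiting for actually exists.
# Assumes: `pip install kubernetes` and a kubeconfig with read access to openshift-multus.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()
try:
    s = v1.read_namespaced_secret("metrics-daemon-secret", "openshift-multus")
    print("found:", s.metadata.name, "keys:", list((s.data or {}).keys()))
except client.exceptions.ApiException as e:
    print("lookup failed:", e.status, e.reason)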
Has your network provider started?"} Jan 09 13:31:08 crc kubenswrapper[4919]: I0109 13:31:08.222400 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:08 crc kubenswrapper[4919]: I0109 13:31:08.222866 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:08 crc kubenswrapper[4919]: I0109 13:31:08.223009 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:08 crc kubenswrapper[4919]: I0109 13:31:08.223144 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:08 crc kubenswrapper[4919]: I0109 13:31:08.223326 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:08Z","lastTransitionTime":"2026-01-09T13:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:08 crc kubenswrapper[4919]: I0109 13:31:08.327536 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:08 crc kubenswrapper[4919]: I0109 13:31:08.327627 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:08 crc kubenswrapper[4919]: I0109 13:31:08.327652 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:08 crc kubenswrapper[4919]: I0109 13:31:08.327688 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:08 crc kubenswrapper[4919]: I0109 13:31:08.327717 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:08Z","lastTransitionTime":"2026-01-09T13:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:08 crc kubenswrapper[4919]: I0109 13:31:08.430928 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:08 crc kubenswrapper[4919]: I0109 13:31:08.431001 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:08 crc kubenswrapper[4919]: I0109 13:31:08.431023 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:08 crc kubenswrapper[4919]: I0109 13:31:08.431054 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:08 crc kubenswrapper[4919]: I0109 13:31:08.431076 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:08Z","lastTransitionTime":"2026-01-09T13:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:08 crc kubenswrapper[4919]: I0109 13:31:08.534354 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:08 crc kubenswrapper[4919]: I0109 13:31:08.534438 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:08 crc kubenswrapper[4919]: I0109 13:31:08.534460 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:08 crc kubenswrapper[4919]: I0109 13:31:08.534487 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:08 crc kubenswrapper[4919]: I0109 13:31:08.534506 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:08Z","lastTransitionTime":"2026-01-09T13:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:08 crc kubenswrapper[4919]: I0109 13:31:08.639790 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:08 crc kubenswrapper[4919]: I0109 13:31:08.639840 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:08 crc kubenswrapper[4919]: I0109 13:31:08.639853 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:08 crc kubenswrapper[4919]: I0109 13:31:08.639875 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:08 crc kubenswrapper[4919]: I0109 13:31:08.639889 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:08Z","lastTransitionTime":"2026-01-09T13:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:08 crc kubenswrapper[4919]: I0109 13:31:08.742335 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:08 crc kubenswrapper[4919]: I0109 13:31:08.742410 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:08 crc kubenswrapper[4919]: I0109 13:31:08.742435 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:08 crc kubenswrapper[4919]: I0109 13:31:08.742472 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:08 crc kubenswrapper[4919]: I0109 13:31:08.742498 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:08Z","lastTransitionTime":"2026-01-09T13:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:08 crc kubenswrapper[4919]: I0109 13:31:08.751682 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 13:31:08 crc kubenswrapper[4919]: E0109 13:31:08.752121 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 13:31:08 crc kubenswrapper[4919]: I0109 13:31:08.845834 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:08 crc kubenswrapper[4919]: I0109 13:31:08.845896 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:08 crc kubenswrapper[4919]: I0109 13:31:08.845916 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:08 crc kubenswrapper[4919]: I0109 13:31:08.845942 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:08 crc kubenswrapper[4919]: I0109 13:31:08.845963 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:08Z","lastTransitionTime":"2026-01-09T13:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:08 crc kubenswrapper[4919]: I0109 13:31:08.948903 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:08 crc kubenswrapper[4919]: I0109 13:31:08.948974 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:08 crc kubenswrapper[4919]: I0109 13:31:08.949000 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:08 crc kubenswrapper[4919]: I0109 13:31:08.949035 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:08 crc kubenswrapper[4919]: I0109 13:31:08.949058 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:08Z","lastTransitionTime":"2026-01-09T13:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:09 crc kubenswrapper[4919]: I0109 13:31:09.053091 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:09 crc kubenswrapper[4919]: I0109 13:31:09.053145 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:09 crc kubenswrapper[4919]: I0109 13:31:09.053155 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:09 crc kubenswrapper[4919]: I0109 13:31:09.053175 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:09 crc kubenswrapper[4919]: I0109 13:31:09.053187 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:09Z","lastTransitionTime":"2026-01-09T13:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:09 crc kubenswrapper[4919]: I0109 13:31:09.156291 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:09 crc kubenswrapper[4919]: I0109 13:31:09.156387 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:09 crc kubenswrapper[4919]: I0109 13:31:09.156413 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:09 crc kubenswrapper[4919]: I0109 13:31:09.156449 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:09 crc kubenswrapper[4919]: I0109 13:31:09.156476 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:09Z","lastTransitionTime":"2026-01-09T13:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:09 crc kubenswrapper[4919]: I0109 13:31:09.260331 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:09 crc kubenswrapper[4919]: I0109 13:31:09.260398 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:09 crc kubenswrapper[4919]: I0109 13:31:09.260420 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:09 crc kubenswrapper[4919]: I0109 13:31:09.260451 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:09 crc kubenswrapper[4919]: I0109 13:31:09.260468 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:09Z","lastTransitionTime":"2026-01-09T13:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:09 crc kubenswrapper[4919]: I0109 13:31:09.363567 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:09 crc kubenswrapper[4919]: I0109 13:31:09.363624 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:09 crc kubenswrapper[4919]: I0109 13:31:09.363635 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:09 crc kubenswrapper[4919]: I0109 13:31:09.363659 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:09 crc kubenswrapper[4919]: I0109 13:31:09.363674 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:09Z","lastTransitionTime":"2026-01-09T13:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:09 crc kubenswrapper[4919]: I0109 13:31:09.466306 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:09 crc kubenswrapper[4919]: I0109 13:31:09.466349 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:09 crc kubenswrapper[4919]: I0109 13:31:09.466358 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:09 crc kubenswrapper[4919]: I0109 13:31:09.466380 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:09 crc kubenswrapper[4919]: I0109 13:31:09.466389 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:09Z","lastTransitionTime":"2026-01-09T13:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:09 crc kubenswrapper[4919]: I0109 13:31:09.569862 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:09 crc kubenswrapper[4919]: I0109 13:31:09.569926 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:09 crc kubenswrapper[4919]: I0109 13:31:09.569943 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:09 crc kubenswrapper[4919]: I0109 13:31:09.569970 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:09 crc kubenswrapper[4919]: I0109 13:31:09.569987 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:09Z","lastTransitionTime":"2026-01-09T13:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:09 crc kubenswrapper[4919]: I0109 13:31:09.672606 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:09 crc kubenswrapper[4919]: I0109 13:31:09.673265 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:09 crc kubenswrapper[4919]: I0109 13:31:09.673445 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:09 crc kubenswrapper[4919]: I0109 13:31:09.673617 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:09 crc kubenswrapper[4919]: I0109 13:31:09.673758 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:09Z","lastTransitionTime":"2026-01-09T13:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:09 crc kubenswrapper[4919]: I0109 13:31:09.750771 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 13:31:09 crc kubenswrapper[4919]: I0109 13:31:09.750782 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xkhdz" Jan 09 13:31:09 crc kubenswrapper[4919]: E0109 13:31:09.750960 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 13:31:09 crc kubenswrapper[4919]: I0109 13:31:09.750903 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 13:31:09 crc kubenswrapper[4919]: E0109 13:31:09.751248 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xkhdz" podUID="7a2e9878-6b0e-4328-a3ca-9f828fb105c9" Jan 09 13:31:09 crc kubenswrapper[4919]: E0109 13:31:09.751759 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
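Every NotReady heartbeat and every "Error syncing pod" in this stretch carries the same message: no CNI configuration file in /etc/kubernetes/cni/net.d/. A stdlib-only sketch of the check that message implies; the directory path is taken verbatim from the log, while the file-extension filter mirrors common CNI naming conventions and is an assumption:

# Sketch: does the CNI conf dir referenced in the log contain any network config yet?
# Stdlib only; run on the node itself.
import os

CNI_DIR = "/etc/kubernetes/cni/net.d"  # path quoted verbatim from the kubelet messages
try:
    confs = [f for f in os.listdir(CNI_DIR)
             if f.endswith((".conf", ".conflist", ".json"))]
except FileNotFoundError:
    confs = []
print(confs or "no CNI configuration files yet -- network plugin still starting")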
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 13:31:09 crc kubenswrapper[4919]: I0109 13:31:09.776795 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:09 crc kubenswrapper[4919]: I0109 13:31:09.776985 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:09 crc kubenswrapper[4919]: I0109 13:31:09.777004 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:09 crc kubenswrapper[4919]: I0109 13:31:09.777024 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:09 crc kubenswrapper[4919]: I0109 13:31:09.777038 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:09Z","lastTransitionTime":"2026-01-09T13:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:09 crc kubenswrapper[4919]: I0109 13:31:09.881991 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:09 crc kubenswrapper[4919]: I0109 13:31:09.882461 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:09 crc kubenswrapper[4919]: I0109 13:31:09.882607 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:09 crc kubenswrapper[4919]: I0109 13:31:09.882757 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:09 crc kubenswrapper[4919]: I0109 13:31:09.882886 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:09Z","lastTransitionTime":"2026-01-09T13:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:09 crc kubenswrapper[4919]: I0109 13:31:09.985893 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:09 crc kubenswrapper[4919]: I0109 13:31:09.985981 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:09 crc kubenswrapper[4919]: I0109 13:31:09.985997 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:09 crc kubenswrapper[4919]: I0109 13:31:09.986026 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:09 crc kubenswrapper[4919]: I0109 13:31:09.986044 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:09Z","lastTransitionTime":"2026-01-09T13:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:10 crc kubenswrapper[4919]: I0109 13:31:10.089969 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:10 crc kubenswrapper[4919]: I0109 13:31:10.090424 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:10 crc kubenswrapper[4919]: I0109 13:31:10.090679 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:10 crc kubenswrapper[4919]: I0109 13:31:10.090832 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:10 crc kubenswrapper[4919]: I0109 13:31:10.090987 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:10Z","lastTransitionTime":"2026-01-09T13:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:10 crc kubenswrapper[4919]: I0109 13:31:10.195010 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:10 crc kubenswrapper[4919]: I0109 13:31:10.195077 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:10 crc kubenswrapper[4919]: I0109 13:31:10.195096 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:10 crc kubenswrapper[4919]: I0109 13:31:10.195121 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:10 crc kubenswrapper[4919]: I0109 13:31:10.195141 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:10Z","lastTransitionTime":"2026-01-09T13:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:10 crc kubenswrapper[4919]: I0109 13:31:10.298705 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:10 crc kubenswrapper[4919]: I0109 13:31:10.298774 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:10 crc kubenswrapper[4919]: I0109 13:31:10.298792 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:10 crc kubenswrapper[4919]: I0109 13:31:10.298819 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:10 crc kubenswrapper[4919]: I0109 13:31:10.298839 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:10Z","lastTransitionTime":"2026-01-09T13:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:10 crc kubenswrapper[4919]: I0109 13:31:10.401849 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:10 crc kubenswrapper[4919]: I0109 13:31:10.401909 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:10 crc kubenswrapper[4919]: I0109 13:31:10.401930 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:10 crc kubenswrapper[4919]: I0109 13:31:10.401955 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:10 crc kubenswrapper[4919]: I0109 13:31:10.401974 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:10Z","lastTransitionTime":"2026-01-09T13:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:10 crc kubenswrapper[4919]: I0109 13:31:10.505293 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:10 crc kubenswrapper[4919]: I0109 13:31:10.505634 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:10 crc kubenswrapper[4919]: I0109 13:31:10.505795 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:10 crc kubenswrapper[4919]: I0109 13:31:10.505944 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:10 crc kubenswrapper[4919]: I0109 13:31:10.506085 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:10Z","lastTransitionTime":"2026-01-09T13:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:10 crc kubenswrapper[4919]: I0109 13:31:10.608993 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:10 crc kubenswrapper[4919]: I0109 13:31:10.609049 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:10 crc kubenswrapper[4919]: I0109 13:31:10.609065 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:10 crc kubenswrapper[4919]: I0109 13:31:10.609086 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:10 crc kubenswrapper[4919]: I0109 13:31:10.609100 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:10Z","lastTransitionTime":"2026-01-09T13:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:10 crc kubenswrapper[4919]: I0109 13:31:10.711999 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:10 crc kubenswrapper[4919]: I0109 13:31:10.712086 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:10 crc kubenswrapper[4919]: I0109 13:31:10.712099 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:10 crc kubenswrapper[4919]: I0109 13:31:10.712118 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:10 crc kubenswrapper[4919]: I0109 13:31:10.712129 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:10Z","lastTransitionTime":"2026-01-09T13:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:10 crc kubenswrapper[4919]: I0109 13:31:10.751293 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 13:31:10 crc kubenswrapper[4919]: E0109 13:31:10.751829 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 13:31:10 crc kubenswrapper[4919]: I0109 13:31:10.752287 4919 scope.go:117] "RemoveContainer" containerID="6084dd2d1f52091583766f6a3a12a9852a93e945dc92c0c76c5132192e182b19" Jan 09 13:31:10 crc kubenswrapper[4919]: I0109 13:31:10.781167 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a055b487-5d63-4265-ac12-735612354e73\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903fbe8270e684937c51c54aeaaaaf7cfed5e77f6d76d5874210ffcf608a9afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3759051eb43a6806b9525498f376abc060e88ba0d7bc9274e84eaabfdaefd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8db1dada305e10f10cb4fe92cfaa3fc88e2e2ec3338fa31efc240af0888f60c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b3f68aa173942f9d95a3832234e0bbd5ccacf63cd73b8d110708ebd0978a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d7857918228a0905bb39cd161f6ba21561cf49ce7980a1835e5b3cc68cef3c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T13:30:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0109 13:30:33.398307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 13:30:33.400005 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4019686595/tls.crt::/tmp/serving-cert-4019686595/tls.key\\\\\\\"\\\\nI0109 13:30:43.320927 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 13:30:43.328283 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 13:30:43.328361 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 13:30:43.328421 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 13:30:43.328435 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 13:30:43.338653 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 13:30:43.338748 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 13:30:43.338794 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 13:30:43.338801 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 13:30:43.338808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 13:30:43.339373 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 13:30:43.343716 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2d0c02e83be85aa5925b8f607809b6bbf3db3188f4e165ca42eb04fe694da6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:10Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:10 crc kubenswrapper[4919]: I0109 13:31:10.801021 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c494f87fb3dc8455ed5de9b4073f1b69d9d96c3b2a4d260cb5d57b9df1e825e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:10Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:10 crc kubenswrapper[4919]: I0109 13:31:10.814432 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:10 crc kubenswrapper[4919]: I0109 13:31:10.814467 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:10 crc kubenswrapper[4919]: I0109 13:31:10.814476 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:10 crc kubenswrapper[4919]: I0109 13:31:10.814492 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:10 crc kubenswrapper[4919]: I0109 13:31:10.814531 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:10Z","lastTransitionTime":"2026-01-09T13:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:10 crc kubenswrapper[4919]: I0109 13:31:10.817020 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6730e2d4c2375d62687a3919410fd7440dc63ec931c008941ab1882e625c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b529d70339efbd46e30e992ceb5afe2db97bdde02f50798c9eb61dfa23d7f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9m5lv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:10Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:10 crc kubenswrapper[4919]: I0109 13:31:10.838423 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgw8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3890ccf2af2b3f8c6648e7b36c716d7b9ef45d6821c5c7162c455641399184a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srz24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgw8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:10Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:10 crc kubenswrapper[4919]: I0109 13:31:10.854768 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xkhdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a2e9878-6b0e-4328-a3ca-9f828fb105c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:31:00Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-xkhdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:10Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:10 crc kubenswrapper[4919]: I0109 13:31:10.871819 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa212ada-bfb5-4aeb-94bb-acf1f0afe319\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a58a444435270a569f4e25649cbfb81bb34700db968dcba7d85b9fdea9006bc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://210c615c5bcdaca954329d575b66f271921f54cd58ff6d75a90d5fc64bd11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dbb1933e5d51eec9dc5963fca537d46986cb88929f55683de676da7f0636133\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6694df4fa2976e9e741fe48d2f16c0b63b29ba80500e4238fc0ff6d6dad53bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:10Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:10 crc kubenswrapper[4919]: I0109 13:31:10.889703 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:10Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:10 crc kubenswrapper[4919]: I0109 13:31:10.913414 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f66189f627ddedba6b2bcb45f145ef53485a9b0b7401ca5fa09363145fce81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c247b3e6a38c79833e097898da1fe92f3dabc7dcd7b71a618506f054f4fd9c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:10Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:10 crc kubenswrapper[4919]: I0109 13:31:10.917090 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:10 crc kubenswrapper[4919]: I0109 13:31:10.917140 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:10 crc kubenswrapper[4919]: I0109 13:31:10.917153 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:10 crc kubenswrapper[4919]: I0109 13:31:10.917174 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:10 crc kubenswrapper[4919]: I0109 13:31:10.917189 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:10Z","lastTransitionTime":"2026-01-09T13:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:10 crc kubenswrapper[4919]: I0109 13:31:10.940541 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a11a9b6-2419-4f04-b35e-ba296d70b705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f97e00dbff0bc7856400366616b2dfea1022f2dc151e8f1d7cbc5d377639b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eecea89afe25c537de66df075217db267c79f45aecda1f91183fad854aa80c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://ec868806819a987b7e96dbe674677a084fd3c8bbb3bdae6dd584d62df43b78f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac368c1c0799b3fe1830e01a310247ed45949ab217cf83d98b487a12e482157e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d6b21d85be97a07f83c996deca344aba3e2bd0b249d1ce02f15db105649bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15885cccea72f90cf3317dca9a65041611aa4b8de069779e15d43ad39112f1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6084dd2d1f52091583766f6a3a12a9852a93e945dc92c0c76c5132192e182b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e66a4c2e4be4b07e8aeb532d126abdb54dd09601f18951bc49d72e862002564c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T13:30:56Z\\\",\\\"message\\\":\\\"s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0109 13:30:55.459271 6269 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 13:30:55.459309 6269 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 13:30:55.459337 6269 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0109 13:30:55.459391 6269 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0109 13:30:55.459439 6269 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0109 13:30:55.459481 6269 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0109 13:30:55.459505 6269 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0109 13:30:55.459917 6269 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 13:30:55.460045 6269 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 13:30:55.459456 6269 factory.go:656] Stopping watch factory\\\\nI0109 13:30:55.460370 6269 reflector.go:311] Stopping reflector *v1.Pod (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6084dd2d1f52091583766f6a3a12a9852a93e945dc92c0c76c5132192e182b19\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T13:30:59Z\\\",\\\"message\\\":\\\"terIP:10.217.4.161,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.161],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0109 13:30:58.622928 6406 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:58Z is after 
2025-08\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1618a6491a924e7cd28340bea333585a7c7a634ca063183f21574ed23df002cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w74hl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:10Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:10 crc kubenswrapper[4919]: I0109 13:31:10.952629 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bzs4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd1e83cb-5a48-4331-b403-d7a07e8aa67f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://389ab6b02eef188a2713e27d93c70a64a1ab4ebcfe58634cf5266197b0375bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kd6n5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.1
68.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bzs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:10Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:10 crc kubenswrapper[4919]: I0109 13:31:10.968721 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9s49l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a361336-2125-49a9-8332-eb66286dcdb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://108cc929d3e1674b5cc9341c92e9d4f5142fc0d87212666efba8890341e8adc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-slm6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd519645b9635f304f7af4e5e832eff6ae2964b35ed15d918bae7b85b51c1de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-slm6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9s49l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:10Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:10 crc kubenswrapper[4919]: I0109 13:31:10.988855 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:10Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.011819 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d6c98ae6aa3041e47bac4782d73902dc39137b2fc8550efce479bfb9cdf37b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:11Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.021985 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.022039 4919 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.022056 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.022084 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.022102 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:11Z","lastTransitionTime":"2026-01-09T13:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.029581 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:11Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.042799 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9z7cc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1115c0ba-16d5-4e81-a4b4-07ba7f360825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6522bc54695ade8637b22dbf9074c82430ce4c8deb8d5d9631933a1d49d5b4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkvnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9z7cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:11Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.060159 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-97zdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21befbc8-9e98-4557-89af-a116cc8c484c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e336d165af12149ba05542f8cebe8a16c6fd66d3317d27c880b8969e7666691d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://458aeaca27fca436a56c5bb1f4180fc11cc5d27c53d33cd5b00f17e0e6f99322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458aeaca27fca436a56c5bb1f4180fc11cc5d27c53d33cd5b00f17e0e6f99322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3ea2572c59e697886d3666518ed78dcbe1bde1dd6fc6255e3846d1b34f26cdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ea2572c59e697886d3666518ed78dcbe1bde1dd6fc6255e3846d1b34f26cdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-97zdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:11Z is after 
2025-08-24T17:21:41Z" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.073521 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:11Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.088804 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d6c98ae6aa3041e47bac4782d73902dc39137b2fc8550efce479bfb9cdf37b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:11Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.109466 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:11Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.124776 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.124809 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.124849 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.124870 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.124881 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:11Z","lastTransitionTime":"2026-01-09T13:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.128988 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9z7cc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1115c0ba-16d5-4e81-a4b4-07ba7f360825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6522bc54695ade8637b22dbf9074c82430ce4c8deb8d5d9631933a1d49d5b4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkvnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9z7cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:11Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.156557 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-97zdz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21befbc8-9e98-4557-89af-a116cc8c484c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e336d165af12149ba05542f8cebe8a16c6fd66d3317d27c880b8969e7666691d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://458aeaca27fca436a56c5bb1f4180fc11cc5d27c53d33cd5b00f17e0e6f99322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458aeaca27fca436a56c5bb1f4180fc11cc5d27c53d33cd5b00f17e0e6f99322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3ea2572c59e697886d3666518ed78dcbe1bde1dd6fc6255e3846d1b34f26cdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ea2572c59e697886d3666518ed78dcbe1bde1dd6fc6255e3846d1b34f26cdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-97zdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:11Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.164394 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w74hl_4a11a9b6-2419-4f04-b35e-ba296d70b705/ovnkube-controller/1.log" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.167377 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" event={"ID":"4a11a9b6-2419-4f04-b35e-ba296d70b705","Type":"ContainerStarted","Data":"91ab55c28499edc33ffdd00414823b7319531fbdf5edc0bc8d28cb3ad30265ae"} Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.167509 4919 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.177165 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xkhdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a2e9878-6b0e-4328-a3ca-9f828fb105c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:31:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xkhdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-09T13:31:11Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.199664 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a055b487-5d63-4265-ac12-735612354e73\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903fbe8270e684937c51c54aeaaaaf7cfed5e77f6d76d5874210ffcf608a9afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3759051eb43a6806b9525498f376abc060e88ba0d7bc9274e84eaabfdaefd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8db1dada305e10f10cb4fe92cfaa3fc88e2e2ec3338fa31efc240af0888f60c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mount
Path\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b3f68aa173942f9d95a3832234e0bbd5ccacf63cd73b8d110708ebd0978a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d7857918228a0905bb39cd161f6ba21561cf49ce7980a1835e5b3cc68cef3c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T13:30:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0109 13:30:33.398307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 13:30:33.400005 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4019686595/tls.crt::/tmp/serving-cert-4019686595/tls.key\\\\\\\"\\\\nI0109 13:30:43.320927 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 13:30:43.328283 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 13:30:43.328361 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 13:30:43.328421 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 13:30:43.328435 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 13:30:43.338653 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 13:30:43.338748 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 13:30:43.338794 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 13:30:43.338801 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 13:30:43.338808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 13:30:43.339373 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 13:30:43.343716 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2d0c02e83be85aa5925b8f607809b6bbf3db3188f4e165ca42eb04fe694da6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:11Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.218520 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c494f87fb3dc8455ed5de9b4073f1b69d9d96c3b2a4d260cb5d57b9df1e825e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:11Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.227449 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.227487 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.227497 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.227515 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.227527 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:11Z","lastTransitionTime":"2026-01-09T13:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.234932 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6730e2d4c2375d62687a3919410fd7440dc63ec931c008941ab1882e625c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b529d70339efbd46e30e992ceb5afe2db97bdde02f50798c9eb61dfa23d7f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9m5lv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:11Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.248315 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgw8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3890ccf2af2b3f8c6648e7b36c716d7b9ef45d6821c5c7162c455641399184a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srz24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgw8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:11Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.259001 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9s49l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a361336-2125-49a9-8332-eb66286dcdb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://108cc929d3e1674b5cc9341c92e9d4f5142fc0d87212666efba8890341e8adc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-slm6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd519645b9635f304f7af4e5e832eff6ae2964b35ed15d918bae7b85b51c1de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"
},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-slm6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9s49l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:11Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.271149 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa212ada-bfb5-4aeb-94bb-acf1f0afe319\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a58a444435270a569f4e25649cbfb81bb34700db968dcba7d85b9fdea9006bc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://210c615c5bcdaca954329d575b66f271921f54cd58ff6d75a90d5fc64bd11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\"
:\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dbb1933e5d51eec9dc5963fca537d46986cb88929f55683de676da7f0636133\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6694df4fa2976e9e741fe48d2f16c0b63b29ba80500e4238fc0ff6d6dad53bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:11Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.286566 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:11Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.299788 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f66189f627ddedba6b2bcb45f145ef53485a9b0b7401ca5fa09363145fce81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c247b3e6a38c79833e097898da1fe92f3dabc7dcd7b71a618506f054f4fd9c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:11Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.318231 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a11a9b6-2419-4f04-b35e-ba296d70b705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f97e00dbff0bc7856400366616b2dfea1022f2dc151e8f1d7cbc5d377639b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eecea89afe25c537de66df075217db267c79f45aecda1f91183fad854aa80c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec868806819a987b7e96dbe674677a084fd3c8bbb3bdae6dd584d62df43b78f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac368c1c0799b3fe1830e01a310247ed45949ab217cf83d98b487a12e482157e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d6b21d85be97a07f83c996deca344aba3e2bd0b249d1ce02f15db105649bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15885cccea72f90cf3317dca9a65041611aa4b8de069779e15d43ad39112f1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6084dd2d1f52091583766f6a3a12a9852a93e945
dc92c0c76c5132192e182b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6084dd2d1f52091583766f6a3a12a9852a93e945dc92c0c76c5132192e182b19\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T13:30:59Z\\\",\\\"message\\\":\\\"terIP:10.217.4.161,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.161],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0109 13:30:58.622928 6406 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:58Z is after 2025-08\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-w74hl_openshift-ovn-kubernetes(4a11a9b6-2419-4f04-b35e-ba296d70b705)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1618a6491a924e7cd28340bea333585a7c7a634ca063183f21574ed23df002cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w74hl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:11Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.330323 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.330359 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.330369 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.330395 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.330408 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:11Z","lastTransitionTime":"2026-01-09T13:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.330847 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bzs4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd1e83cb-5a48-4331-b403-d7a07e8aa67f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://389ab6b02eef188a2713e27d93c70a64a1ab4ebcfe58634cf5266197b0375bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kd6n5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bzs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:11Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.348619 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d6c98ae6aa3041e47bac4782d73902dc39137b2fc8550efce479bfb9cdf37b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:11Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.363805 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:11Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.377658 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9z7cc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1115c0ba-16d5-4e81-a4b4-07ba7f360825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6522bc54695ade8637b22dbf9074c82430ce4c8deb8d5d9631933a1d49d5b4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkvnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9z7cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:11Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.394333 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-97zdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21befbc8-9e98-4557-89af-a116cc8c484c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e336d165af12149ba05542f8cebe8a16c6fd66d3317d27c880b8969e7666691d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://458aeaca27fca436a56c5bb1f4180fc11cc5d27c53d33cd5b00f17e0e6f99322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458aeaca27fca436a56c5bb1f4180fc11cc5d27c53d33cd5b00f17e0e6f99322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3ea2572c59e697886d3666518ed78dcbe1bde1dd6fc6255e3846d1b34f26cdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ea2572c59e697886d3666518ed78dcbe1bde1dd6fc6255e3846d1b34f26cdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-97zdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:11Z is after 
2025-08-24T17:21:41Z" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.417363 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6730e2d4c2375d62687a3919410fd7440dc63ec931c008941ab1882e625c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b529d70339efbd46e30e992ceb5afe2db97bdde02f50798c9eb61dfa23d7f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9m5lv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:11Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.433447 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.433500 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.433511 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.433529 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.433540 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:11Z","lastTransitionTime":"2026-01-09T13:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.452033 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgw8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3890ccf2af2b3f8c6648e7b36c716d7b9ef45d6821c5c7162c455641399184a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"mult
us-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srz24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgw8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:11Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.468969 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xkhdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a2e9878-6b0e-4328-a3ca-9f828fb105c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:31:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xkhdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:11Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.486154 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a055b487-5d63-4265-ac12-735612354e73\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903fbe8270e684937c51c54aeaaaaf7cfed5e77f6d76d5874210ffcf608a9afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3759051eb43a6806b9525498f376abc060e88ba0d7bc9274e84eaabfdaefd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8db1dada305e10f10cb4fe92cfaa3fc88e2e2ec3338fa31efc240af0888f60c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b3f68aa173942f9d95a3832234e0bbd5ccacf63cd73b8d110708ebd0978a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d7857918228a0905bb39cd161f6ba21561cf49ce7980a1835e5b3cc68cef3c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T13:30:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0109 13:30:33.398307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 13:30:33.400005 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4019686595/tls.crt::/tmp/serving-cert-4019686595/tls.key\\\\\\\"\\\\nI0109 13:30:43.320927 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 13:30:43.328283 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 13:30:43.328361 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 13:30:43.328421 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 13:30:43.328435 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 13:30:43.338653 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 13:30:43.338748 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 13:30:43.338794 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 13:30:43.338801 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 13:30:43.338808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 13:30:43.339373 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 13:30:43.343716 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2d0c02e83be85aa5925b8f607809b6bbf3db3188f4e165ca42eb04fe694da6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:11Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.502376 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c494f87fb3dc8455ed5de9b4073f1b69d9d96c3b2a4d260cb5d57b9df1e825e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:11Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.523526 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a11a9b6-2419-4f04-b35e-ba296d70b705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f97e00dbff0bc7856400366616b2dfea1022f2dc151e8f1d7cbc5d377639b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eecea89afe25c537de66df075217db267c79f45aecda1f91183fad854aa80c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec868806819a987b7e96dbe674677a084fd3c8bbb3bdae6dd584d62df43b78f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac368c1c0799b3fe1830e01a310247ed45949ab217cf83d98b487a12e482157e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d6b21d85be97a07f83c996deca344aba3e2bd0b249d1ce02f15db105649bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15885cccea72f90cf3317dca9a65041611aa4b8de069779e15d43ad39112f1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91ab55c28499edc33ffdd00414823b7319531fbd
f5edc0bc8d28cb3ad30265ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6084dd2d1f52091583766f6a3a12a9852a93e945dc92c0c76c5132192e182b19\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T13:30:59Z\\\",\\\"message\\\":\\\"terIP:10.217.4.161,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.161],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0109 13:30:58.622928 6406 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:58Z is after 
2025-08\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:31:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1618a6491a924e7cd28340bea333585a7c7a634ca063183f21574ed23df002cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w74hl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:11Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.536656 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.536716 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.536735 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.536763 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.536840 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:11Z","lastTransitionTime":"2026-01-09T13:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.549649 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bzs4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd1e83cb-5a48-4331-b403-d7a07e8aa67f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://389ab6b02eef188a2713e27d93c70a64a1ab4ebcfe58634cf5266197b0375bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kd6n5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bzs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:11Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.571745 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9s49l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a361336-2125-49a9-8332-eb66286dcdb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://108cc929d3e1674b5cc9341c92e9d4f5142fc0d87212666efba8890341e8adc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-slm6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd519645b9635f304f7af4e5e832eff6ae2964b35ed15d918bae7b85b51c1de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-slm6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9s49l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:11Z is after 2025-08-24T17:21:41Z" Jan 09 
13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.592116 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa212ada-bfb5-4aeb-94bb-acf1f0afe319\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a58a444435270a569f4e25649cbfb81bb34700db968dcba7d85b9fdea9006bc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://210c615c5bcdaca954329d575b66f271921f54cd58ff6d75a90d5fc64bd11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dbb1933e5d51eec9dc5963fca537d46986cb88929f55683de676da7f0636133\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6694df4fa2976e9e741fe48d2f16c0b63b29ba80500e4238fc0ff6d6dad53bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:11Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.608917 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:11Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.625473 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f66189f627ddedba6b2bcb45f145ef53485a9b0b7401ca5fa09363145fce81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c247b3e6a38c79833e097898da1fe92f3dabc7dcd7b71a618506f054f4fd9c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:11Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.639787 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.639840 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.639850 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.639871 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.639882 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:11Z","lastTransitionTime":"2026-01-09T13:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.641615 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:11Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.742906 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.742973 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.742992 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.743026 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.743048 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:11Z","lastTransitionTime":"2026-01-09T13:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.750935 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xkhdz" Jan 09 13:31:11 crc kubenswrapper[4919]: E0109 13:31:11.751143 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xkhdz" podUID="7a2e9878-6b0e-4328-a3ca-9f828fb105c9" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.751271 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 13:31:11 crc kubenswrapper[4919]: E0109 13:31:11.751404 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.751476 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 13:31:11 crc kubenswrapper[4919]: E0109 13:31:11.751683 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.846019 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.846064 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.846077 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.846095 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.846108 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:11Z","lastTransitionTime":"2026-01-09T13:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.949356 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.949401 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.949410 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.949427 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:11 crc kubenswrapper[4919]: I0109 13:31:11.949438 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:11Z","lastTransitionTime":"2026-01-09T13:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.054327 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.054394 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.054408 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.054430 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.054455 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:12Z","lastTransitionTime":"2026-01-09T13:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.157180 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.157280 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.157296 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.157321 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.157335 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:12Z","lastTransitionTime":"2026-01-09T13:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.173125 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w74hl_4a11a9b6-2419-4f04-b35e-ba296d70b705/ovnkube-controller/2.log" Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.174311 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w74hl_4a11a9b6-2419-4f04-b35e-ba296d70b705/ovnkube-controller/1.log" Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.177506 4919 generic.go:334] "Generic (PLEG): container finished" podID="4a11a9b6-2419-4f04-b35e-ba296d70b705" containerID="91ab55c28499edc33ffdd00414823b7319531fbdf5edc0bc8d28cb3ad30265ae" exitCode=1 Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.177564 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" event={"ID":"4a11a9b6-2419-4f04-b35e-ba296d70b705","Type":"ContainerDied","Data":"91ab55c28499edc33ffdd00414823b7319531fbdf5edc0bc8d28cb3ad30265ae"} Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.177668 4919 scope.go:117] "RemoveContainer" containerID="6084dd2d1f52091583766f6a3a12a9852a93e945dc92c0c76c5132192e182b19" Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.178858 4919 scope.go:117] "RemoveContainer" containerID="91ab55c28499edc33ffdd00414823b7319531fbdf5edc0bc8d28cb3ad30265ae" Jan 09 13:31:12 crc kubenswrapper[4919]: E0109 13:31:12.179087 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-w74hl_openshift-ovn-kubernetes(4a11a9b6-2419-4f04-b35e-ba296d70b705)\"" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" podUID="4a11a9b6-2419-4f04-b35e-ba296d70b705" Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.201830 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa212ada-bfb5-4aeb-94bb-acf1f0afe319\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a58a444435270a569f4e25649cbfb81bb34700db968dcba7d85b9fdea9006bc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://210c615c5bcdaca954329d575b66f271921f54cd58ff6d75a90d5fc64bd11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dbb1933e5d51eec9dc5963fca537d46986cb88929f55683de676da7f0636133\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6694df4fa2976e9e741fe48d2f16c0b63b29ba80500e4238fc0ff6d6dad53bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:12Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.220760 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:12Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.240962 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f66189f627ddedba6b2bcb45f145ef53485a9b0b7401ca5fa09363145fce81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c247b3e6a38c79833e097898da1fe92f3dabc7dcd7b71a618506f054f4fd9c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:12Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.259755 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.259788 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.259798 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.259815 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.259827 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:12Z","lastTransitionTime":"2026-01-09T13:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.264311 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a11a9b6-2419-4f04-b35e-ba296d70b705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f97e00dbff0bc7856400366616b2dfea1022f2dc151e8f1d7cbc5d377639b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eecea89afe25c537de66df075217db267c79f45aecda1f91183fad854aa80c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://ec868806819a987b7e96dbe674677a084fd3c8bbb3bdae6dd584d62df43b78f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac368c1c0799b3fe1830e01a310247ed45949ab217cf83d98b487a12e482157e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d6b21d85be97a07f83c996deca344aba3e2bd0b249d1ce02f15db105649bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15885cccea72f90cf3317dca9a65041611aa4b8de069779e15d43ad39112f1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91ab55c28499edc33ffdd00414823b7319531fbdf5edc0bc8d28cb3ad30265ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6084dd2d1f52091583766f6a3a12a9852a93e945dc92c0c76c5132192e182b19\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T13:30:59Z\\\",\\\"message\\\":\\\"terIP:10.217.4.161,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.161],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0109 13:30:58.622928 6406 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:58Z is after 
2025-08\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91ab55c28499edc33ffdd00414823b7319531fbdf5edc0bc8d28cb3ad30265ae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T13:31:12Z\\\",\\\"message\\\":\\\"\\\\nI0109 13:31:11.711533 6593 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0109 13:31:11.711539 6593 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0109 13:31:11.711614 6593 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0109 13:31:11.712633 6593 handler.go:208] Removed *v1.Node event handler 2\\\\nI0109 13:31:11.712650 6593 handler.go:208] Removed *v1.Node event handler 7\\\\nI0109 13:31:11.712661 6593 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0109 13:31:11.712738 6593 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0109 13:31:11.712799 6593 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0109 13:31:11.712866 6593 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0109 13:31:11.712926 6593 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0109 13:31:11.712950 6593 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0109 13:31:11.712951 6593 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0109 13:31:11.712930 6593 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0109 13:31:11.712998 6593 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0109 13:31:11.713041 6593 factory.go:656] Stopping watch factory\\\\nI0109 13:31:11.713073 6593 ovnkube.go:599] Stopped ovnkube\\\\nI0109 
13\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:31:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1618a6491a924e7cd28340bea333585a7c7a634ca063183f21574ed23df002cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w74hl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:12Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.277303 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bzs4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd1e83cb-5a48-4331-b403-d7a07e8aa67f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://389ab6b02eef188a2713e27d93c70a64a1ab4ebcfe58634cf5266197b0375bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kd6n5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bzs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:12Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.290370 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9s49l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a361336-2125-49a9-8332-eb66286dcdb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://108cc929d3e1674b5cc9341c92e9d4f5142fc0d87212666efba8890341e8adc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-slm6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd519645b9635f304f7af4e5e832eff6ae2964b35ed15d918bae7b85b51c1de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-slm6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9s49l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:12Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.306184 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:12Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.321128 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d6c98ae6aa3041e47bac4782d73902dc39137b2fc8550efce479bfb9cdf37b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:12Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.339756 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:12Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.354636 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9z7cc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1115c0ba-16d5-4e81-a4b4-07ba7f360825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6522bc54695ade8637b22dbf9074c82430ce4c8deb8d5d9631933a1d49d5b4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkvnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9z7cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:12Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.367602 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.367673 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.367696 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.367739 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.367766 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:12Z","lastTransitionTime":"2026-01-09T13:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.378680 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-97zdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21befbc8-9e98-4557-89af-a116cc8c484c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e336d165af12149ba05542f8cebe8a16c6fd66d3317d27c880b8969e7666691d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://660f6644
ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://458aeaca27fca436a56c5bb1f4180fc11cc5d27c53d33cd5b00f17e0e6f99322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458aeaca27fca436a56c5bb1f4180fc11cc5d27c53d33cd5b00f17e0e6f99322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3ea2572c59e697886d3666518ed78dcbe1bde1dd6fc6255e3846d1b34f26cdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ea2572c59e697886d3666518ed78dcbe1bde1dd6fc6255e3846d1b34f26cdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-97zdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:12Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.398575 4919 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a055b487-5d63-4265-ac12-735612354e73\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903fbe8270e684937c51c54aeaaaaf7cfed5e77f6d76d5874210ffcf608a9afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3759051eb43a6806b9525498f376abc060e88ba0d7bc9274e84eaabfdaefd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8db1dada305e10f10cb4fe92cfaa3fc88e2e2ec3338fa31efc240af0888f60c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b3f68aa
173942f9d95a3832234e0bbd5ccacf63cd73b8d110708ebd0978a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d7857918228a0905bb39cd161f6ba21561cf49ce7980a1835e5b3cc68cef3c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T13:30:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0109 13:30:33.398307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 13:30:33.400005 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4019686595/tls.crt::/tmp/serving-cert-4019686595/tls.key\\\\\\\"\\\\nI0109 13:30:43.320927 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 13:30:43.328283 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 13:30:43.328361 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 13:30:43.328421 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 13:30:43.328435 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 13:30:43.338653 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 13:30:43.338748 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 13:30:43.338794 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 13:30:43.338801 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 13:30:43.338808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 13:30:43.339373 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 13:30:43.343716 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2d0c02e83be85aa5925b8f607809b6bbf3db3188f4e165ca42eb04fe694da6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:12Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.417887 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c494f87fb3dc8455ed5de9b4073f1b69d9d96c3b2a4d260cb5d57b9df1e825e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:12Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.430384 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6730e2d4c2375d62687a3919410fd7440dc63ec931c008941ab1882e625c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b529d70339efbd46e30e992ceb5afe2db97bdde02f50798c9eb61dfa23d7f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9m5lv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:12Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.453313 4919 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-kgw8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3890ccf2af2b3f8c6648e7b36c716d7b9ef45d6821c5c7162c455641399184a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srz24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-kgw8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:12Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.466151 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xkhdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a2e9878-6b0e-4328-a3ca-9f828fb105c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:31:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xkhdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-09T13:31:12Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.471714 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.471766 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.471785 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.471811 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.471833 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:12Z","lastTransitionTime":"2026-01-09T13:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.575795 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.575859 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.575872 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.575895 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.575914 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:12Z","lastTransitionTime":"2026-01-09T13:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.678904 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.678980 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.679000 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.679029 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.679048 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:12Z","lastTransitionTime":"2026-01-09T13:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.750819 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 13:31:12 crc kubenswrapper[4919]: E0109 13:31:12.751015 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.781035 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.781087 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.781099 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.781119 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.781133 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:12Z","lastTransitionTime":"2026-01-09T13:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.883461 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.883527 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.883539 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.883562 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.883577 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:12Z","lastTransitionTime":"2026-01-09T13:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.987125 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.987198 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.987248 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.987279 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:12 crc kubenswrapper[4919]: I0109 13:31:12.987301 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:12Z","lastTransitionTime":"2026-01-09T13:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:13 crc kubenswrapper[4919]: I0109 13:31:13.091922 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:13 crc kubenswrapper[4919]: I0109 13:31:13.091973 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:13 crc kubenswrapper[4919]: I0109 13:31:13.091986 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:13 crc kubenswrapper[4919]: I0109 13:31:13.092007 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:13 crc kubenswrapper[4919]: I0109 13:31:13.092021 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:13Z","lastTransitionTime":"2026-01-09T13:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 09 13:31:13 crc kubenswrapper[4919]: I0109 13:31:13.185919 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w74hl_4a11a9b6-2419-4f04-b35e-ba296d70b705/ovnkube-controller/2.log"
Jan 09 13:31:13 crc kubenswrapper[4919]: I0109 13:31:13.194936 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:31:13 crc kubenswrapper[4919]: I0109 13:31:13.194983 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:31:13 crc kubenswrapper[4919]: I0109 13:31:13.194999 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:31:13 crc kubenswrapper[4919]: I0109 13:31:13.195024 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:31:13 crc kubenswrapper[4919]: I0109 13:31:13.195041 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:13Z","lastTransitionTime":"2026-01-09T13:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:31:13 crc kubenswrapper[4919]: I0109 13:31:13.297833 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:31:13 crc kubenswrapper[4919]: I0109 13:31:13.297891 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:31:13 crc kubenswrapper[4919]: I0109 13:31:13.297902 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:31:13 crc kubenswrapper[4919]: I0109 13:31:13.297920 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:31:13 crc kubenswrapper[4919]: I0109 13:31:13.297935 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:13Z","lastTransitionTime":"2026-01-09T13:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:31:13 crc kubenswrapper[4919]: I0109 13:31:13.401848 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:31:13 crc kubenswrapper[4919]: I0109 13:31:13.401916 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:31:13 crc kubenswrapper[4919]: I0109 13:31:13.401936 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:31:13 crc kubenswrapper[4919]: I0109 13:31:13.401966 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:31:13 crc kubenswrapper[4919]: I0109 13:31:13.401987 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:13Z","lastTransitionTime":"2026-01-09T13:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:31:13 crc kubenswrapper[4919]: I0109 13:31:13.505060 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:31:13 crc kubenswrapper[4919]: I0109 13:31:13.505152 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:31:13 crc kubenswrapper[4919]: I0109 13:31:13.505170 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:31:13 crc kubenswrapper[4919]: I0109 13:31:13.505252 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:31:13 crc kubenswrapper[4919]: I0109 13:31:13.505301 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:13Z","lastTransitionTime":"2026-01-09T13:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:31:13 crc kubenswrapper[4919]: I0109 13:31:13.609305 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:31:13 crc kubenswrapper[4919]: I0109 13:31:13.609374 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:31:13 crc kubenswrapper[4919]: I0109 13:31:13.609393 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:31:13 crc kubenswrapper[4919]: I0109 13:31:13.609419 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:31:13 crc kubenswrapper[4919]: I0109 13:31:13.609439 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:13Z","lastTransitionTime":"2026-01-09T13:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:31:13 crc kubenswrapper[4919]: I0109 13:31:13.712444 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:31:13 crc kubenswrapper[4919]: I0109 13:31:13.712497 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:31:13 crc kubenswrapper[4919]: I0109 13:31:13.712514 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:31:13 crc kubenswrapper[4919]: I0109 13:31:13.712539 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:31:13 crc kubenswrapper[4919]: I0109 13:31:13.712557 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:13Z","lastTransitionTime":"2026-01-09T13:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:31:13 crc kubenswrapper[4919]: I0109 13:31:13.751273 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xkhdz"
Jan 09 13:31:13 crc kubenswrapper[4919]: E0109 13:31:13.751519 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xkhdz" podUID="7a2e9878-6b0e-4328-a3ca-9f828fb105c9"
Jan 09 13:31:13 crc kubenswrapper[4919]: I0109 13:31:13.751564 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 09 13:31:13 crc kubenswrapper[4919]: I0109 13:31:13.751633 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 09 13:31:13 crc kubenswrapper[4919]: E0109 13:31:13.751727 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 09 13:31:13 crc kubenswrapper[4919]: E0109 13:31:13.751916 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 09 13:31:13 crc kubenswrapper[4919]: I0109 13:31:13.815953 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:31:13 crc kubenswrapper[4919]: I0109 13:31:13.816043 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:31:13 crc kubenswrapper[4919]: I0109 13:31:13.816065 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:31:13 crc kubenswrapper[4919]: I0109 13:31:13.816096 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:31:13 crc kubenswrapper[4919]: I0109 13:31:13.816117 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:13Z","lastTransitionTime":"2026-01-09T13:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:31:13 crc kubenswrapper[4919]: I0109 13:31:13.840312 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 09 13:31:13 crc kubenswrapper[4919]: I0109 13:31:13.852240 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Jan 09 13:31:13 crc kubenswrapper[4919]: I0109 13:31:13.863276 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d6c98ae6aa3041e47bac4782d73902dc39137b2fc8550efce479bfb9cdf37b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:13Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:13 crc kubenswrapper[4919]: I0109 13:31:13.879808 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:13Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:13 crc kubenswrapper[4919]: I0109 13:31:13.896072 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9z7cc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1115c0ba-16d5-4e81-a4b4-07ba7f360825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6522bc54695ade8637b22dbf9074c82430ce4c8deb8d5d9631933a1d49d5b4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkvnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9z7cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:13Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:13 crc kubenswrapper[4919]: I0109 13:31:13.915953 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-97zdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21befbc8-9e98-4557-89af-a116cc8c484c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e336d165af12149ba05542f8cebe8a16c6fd66d3317d27c880b8969e7666691d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://458aeaca27fca436a56c5bb1f4180fc11cc5d27c53d33cd5b00f17e0e6f99322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458aeaca27fca436a56c5bb1f4180fc11cc5d27c53d33cd5b00f17e0e6f99322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3ea2572c59e697886d3666518ed78dcbe1bde1dd6fc6255e3846d1b34f26cdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ea2572c59e697886d3666518ed78dcbe1bde1dd6fc6255e3846d1b34f26cdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-97zdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:13Z is after 
2025-08-24T17:21:41Z"
Jan 09 13:31:13 crc kubenswrapper[4919]: I0109 13:31:13.919724 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:31:13 crc kubenswrapper[4919]: I0109 13:31:13.919803 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:31:13 crc kubenswrapper[4919]: I0109 13:31:13.919826 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:31:13 crc kubenswrapper[4919]: I0109 13:31:13.919855 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:31:13 crc kubenswrapper[4919]: I0109 13:31:13.919880 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:13Z","lastTransitionTime":"2026-01-09T13:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:31:13 crc kubenswrapper[4919]: I0109 13:31:13.935096 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6730e2d4c2375d62687a3919410fd7440dc63ec931c008941ab1882e625c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b529d70339efbd46e30e992ceb5afe2db97bdde02f50798c9eb61dfa23d7f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e
95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9m5lv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:13Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:13 crc kubenswrapper[4919]: I0109 13:31:13.955117 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgw8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3890ccf2af2b3f8c6648e7b36c716d7b9ef45d6821c5c7162c455641399184a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multu
s-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srz24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgw8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:13Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:13 crc kubenswrapper[4919]: I0109 13:31:13.972728 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xkhdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a2e9878-6b0e-4328-a3ca-9f828fb105c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:31:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xkhdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:13Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:13 crc kubenswrapper[4919]: I0109 13:31:13.995906 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a055b487-5d63-4265-ac12-735612354e73\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903fbe8270e684937c51c54aeaaaaf7cfed5e77f6d76d5874210ffcf608a9afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3759051eb43a6806b9525498f376abc060e88ba0d7bc9274e84eaabfdaefd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8db1dada305e10f10cb4fe92cfaa3fc88e2e2ec3338fa31efc240af0888f60c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b3f68aa173942f9d95a3832234e0bbd5ccacf63cd73b8d110708ebd0978a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d7857918228a0905bb39cd161f6ba21561cf49ce7980a1835e5b3cc68cef3c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T13:30:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0109 13:30:33.398307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 13:30:33.400005 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4019686595/tls.crt::/tmp/serving-cert-4019686595/tls.key\\\\\\\"\\\\nI0109 13:30:43.320927 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 13:30:43.328283 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 13:30:43.328361 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 13:30:43.328421 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 13:30:43.328435 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 13:30:43.338653 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 13:30:43.338748 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 13:30:43.338794 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 13:30:43.338801 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 13:30:43.338808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 13:30:43.339373 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 13:30:43.343716 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2d0c02e83be85aa5925b8f607809b6bbf3db3188f4e165ca42eb04fe694da6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:13Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.016317 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c494f87fb3dc8455ed5de9b4073f1b69d9d96c3b2a4d260cb5d57b9df1e825e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:14Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.023170 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.023264 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.023284 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.023320 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.023338 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:14Z","lastTransitionTime":"2026-01-09T13:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.050775 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a11a9b6-2419-4f04-b35e-ba296d70b705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f97e00dbff0bc7856400366616b2dfea1022f2dc151e8f1d7cbc5d377639b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eecea89afe25c537de66df075217db267c79f45aecda1f91183fad854aa80c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://ec868806819a987b7e96dbe674677a084fd3c8bbb3bdae6dd584d62df43b78f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac368c1c0799b3fe1830e01a310247ed45949ab217cf83d98b487a12e482157e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d6b21d85be97a07f83c996deca344aba3e2bd0b249d1ce02f15db105649bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15885cccea72f90cf3317dca9a65041611aa4b8de069779e15d43ad39112f1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91ab55c28499edc33ffdd00414823b7319531fbdf5edc0bc8d28cb3ad30265ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6084dd2d1f52091583766f6a3a12a9852a93e945dc92c0c76c5132192e182b19\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T13:30:59Z\\\",\\\"message\\\":\\\"terIP:10.217.4.161,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.161],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0109 13:30:58.622928 6406 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:30:58Z is after 
2025-08\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91ab55c28499edc33ffdd00414823b7319531fbdf5edc0bc8d28cb3ad30265ae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T13:31:12Z\\\",\\\"message\\\":\\\"\\\\nI0109 13:31:11.711533 6593 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0109 13:31:11.711539 6593 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0109 13:31:11.711614 6593 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0109 13:31:11.712633 6593 handler.go:208] Removed *v1.Node event handler 2\\\\nI0109 13:31:11.712650 6593 handler.go:208] Removed *v1.Node event handler 7\\\\nI0109 13:31:11.712661 6593 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0109 13:31:11.712738 6593 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0109 13:31:11.712799 6593 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0109 13:31:11.712866 6593 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0109 13:31:11.712926 6593 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0109 13:31:11.712950 6593 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0109 13:31:11.712951 6593 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0109 13:31:11.712930 6593 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0109 13:31:11.712998 6593 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0109 13:31:11.713041 6593 factory.go:656] Stopping watch factory\\\\nI0109 13:31:11.713073 6593 ovnkube.go:599] Stopped ovnkube\\\\nI0109 
13\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:31:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1618a6491a924e7cd28340bea333585a7c7a634ca063183f21574ed23df002cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w74hl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:14Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.069370 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bzs4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd1e83cb-5a48-4331-b403-d7a07e8aa67f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://389ab6b02eef188a2713e27d93c70a64a1ab4ebcfe58634cf5266197b0375bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kd6n5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bzs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:14Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.090711 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9s49l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a361336-2125-49a9-8332-eb66286dcdb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://108cc929d3e1674b5cc9341c92e9d4f5142fc0d87212666efba8890341e8adc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-slm6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd519645b9635f304f7af4e5e832eff6ae2964b35ed15d918bae7b85b51c1de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-slm6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9s49l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:14Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.116580 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa212ada-bfb5-4aeb-94bb-acf1f0afe319\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a58a444435270a569f4e25649cbfb81bb34700db968dcba7d85b9fdea9006bc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://210c615c5bcdaca954329d575b66f271921f54cd58ff6d75a90d5fc64bd11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"c
ontainerID\\\":\\\"cri-o://5dbb1933e5d51eec9dc5963fca537d46986cb88929f55683de676da7f0636133\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6694df4fa2976e9e741fe48d2f16c0b63b29ba80500e4238fc0ff6d6dad53bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:14Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.127315 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.127361 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.127382 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.127410 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.127432 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:14Z","lastTransitionTime":"2026-01-09T13:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.139572 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:14Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.161376 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f66189f627ddedba6b2bcb45f145ef53485a9b0b7401ca5fa09363145fce81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c247b3e6a38c79833e097898da1fe92f3dabc7dcd7b71a618506f054f4fd9c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:14Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.181471 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:14Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.229818 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.229890 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.229918 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.230131 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.230154 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:14Z","lastTransitionTime":"2026-01-09T13:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.230428 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.231635 4919 scope.go:117] "RemoveContainer" containerID="91ab55c28499edc33ffdd00414823b7319531fbdf5edc0bc8d28cb3ad30265ae" Jan 09 13:31:14 crc kubenswrapper[4919]: E0109 13:31:14.231950 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-w74hl_openshift-ovn-kubernetes(4a11a9b6-2419-4f04-b35e-ba296d70b705)\"" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" podUID="4a11a9b6-2419-4f04-b35e-ba296d70b705" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.259509 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d6c98ae6aa3041e47bac4782d73902dc39137b2fc8550efce479bfb9cdf37b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:14Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.280041 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:14Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.298001 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9z7cc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1115c0ba-16d5-4e81-a4b4-07ba7f360825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6522bc54695ade8637b22dbf9074c82430ce4c8deb8d5d9631933a1d49d5b4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkvnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9z7cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:14Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.321458 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-97zdz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21befbc8-9e98-4557-89af-a116cc8c484c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e336d165af12149ba05542f8cebe8a16c6fd66d3317d27c880b8969e7666691d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://458aeaca27fca436a56c5bb1f4180fc11cc5d27c53d33cd5b00f17e0e6f99322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458aeaca27fca436a56c5bb1f4180fc11cc5d27c53d33cd5b00f17e0e6f99322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3ea2572c59e697886d3666518ed78dcbe1bde1dd6fc6255e3846d1b34f26cdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ea2572c59e697886d3666518ed78dcbe1bde1dd6fc6255e3846d1b34f26cdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-97zdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:14Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.333393 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.333606 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:14 crc 
kubenswrapper[4919]: I0109 13:31:14.333787 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.333948 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.334104 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:14Z","lastTransitionTime":"2026-01-09T13:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.343052 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6730e2d4c2375d62687a3919410fd7440dc63ec931c008941ab1882e625c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b529d70339efbd46e30e992ceb5afe2db97bdde02f50798c9eb61dfa23d7f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\
\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9m5lv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:14Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.366078 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgw8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3890ccf2af2b3f8c6648e7b36c716d7b9ef45d6821c5c7162c455641399184a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPat
h\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srz24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgw8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:14Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.387701 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xkhdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a2e9878-6b0e-4328-a3ca-9f828fb105c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:31:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xkhdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:14Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.411742 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a055b487-5d63-4265-ac12-735612354e73\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903fbe8270e684937c51c54aeaaaaf7cfed5e77f6d76d5874210ffcf608a9afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3759051eb43a6806b9525498f376abc060e88ba0d7bc9274e84eaabfdaefd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8db1dada305e10f10cb4fe92cfaa3fc88e2e2ec3338fa31efc240af0888f60c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b3f68aa173942f9d95a3832234e0bbd5ccacf63cd73b8d110708ebd0978a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d7857918228a0905bb39cd161f6ba21561cf49ce7980a1835e5b3cc68cef3c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T13:30:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0109 13:30:33.398307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 13:30:33.400005 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4019686595/tls.crt::/tmp/serving-cert-4019686595/tls.key\\\\\\\"\\\\nI0109 13:30:43.320927 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 13:30:43.328283 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 13:30:43.328361 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 13:30:43.328421 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 13:30:43.328435 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 13:30:43.338653 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 13:30:43.338748 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 13:30:43.338794 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 13:30:43.338801 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 13:30:43.338808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 13:30:43.339373 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 13:30:43.343716 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2d0c02e83be85aa5925b8f607809b6bbf3db3188f4e165ca42eb04fe694da6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:14Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.430073 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9797b243-6d0f-4f8b-8b3d-b92ac439e3bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d15e612b4abcc61c356602fa521bd156a5e2f5b1e89bbf48b2bceac8a06fbca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d24ffabc3436ac75e2611506f1d4d40faed59e4fa4c618523275331408bb219d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae0a71cfd94d80d04efad2c5671e1a6422ee373da4fc7ab38e36198e3fcad96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5ef78571904b7b98f20f7eb25e83def8b64fc7e86e3c376ee2c6a00334e8667\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5ef78571904b7b98f20f7eb25e83def8b64fc7e86e3c376ee2c6a00334e8667\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:14Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.436789 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.436867 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.436889 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.436930 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.436955 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:14Z","lastTransitionTime":"2026-01-09T13:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.450090 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c494f87fb3dc8455ed5de9b4073f1b69d9d96c3b2a4d260cb5d57b9df1e825e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:14Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.475076 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a11a9b6-2419-4f04-b35e-ba296d70b705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f97e00dbff0bc7856400366616b2dfea1022f2dc151e8f1d7cbc5d377639b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eecea89afe25c537de66df075217db267c79f45aecda1f91183fad854aa80c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec868806819a987b7e96dbe674677a084fd3c8bbb3bdae6dd584d62df43b78f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac368c1c0799b3fe1830e01a310247ed45949ab217cf83d98b487a12e482157e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d6b21d85be97a07f83c996deca344aba3e2bd0b249d1ce02f15db105649bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15885cccea72f90cf3317dca9a65041611aa4b8de069779e15d43ad39112f1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91ab55c28499edc33ffdd00414823b7319531fbdf5edc0bc8d28cb3ad30265ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91ab55c28499edc33ffdd00414823b7319531fbdf5edc0bc8d28cb3ad30265ae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T13:31:12Z\\\",\\\"message\\\":\\\"\\\\nI0109 13:31:11.711533 6593 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0109 13:31:11.711539 6593 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0109 13:31:11.711614 6593 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0109 13:31:11.712633 6593 handler.go:208] Removed *v1.Node event handler 2\\\\nI0109 13:31:11.712650 6593 handler.go:208] Removed *v1.Node event handler 7\\\\nI0109 13:31:11.712661 6593 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0109 13:31:11.712738 6593 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0109 13:31:11.712799 6593 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0109 13:31:11.712866 6593 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0109 13:31:11.712926 6593 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0109 13:31:11.712950 6593 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0109 13:31:11.712951 6593 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0109 13:31:11.712930 6593 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0109 13:31:11.712998 6593 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0109 13:31:11.713041 6593 factory.go:656] Stopping watch factory\\\\nI0109 13:31:11.713073 6593 ovnkube.go:599] Stopped ovnkube\\\\nI0109 13\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:31:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-w74hl_openshift-ovn-kubernetes(4a11a9b6-2419-4f04-b35e-ba296d70b705)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1618a6491a924e7cd28340bea333585a7c7a634ca063183f21574ed23df002cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w74hl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:14Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.489873 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bzs4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd1e83cb-5a48-4331-b403-d7a07e8aa67f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://389ab6b02eef188a2713e27d93c70a64a1ab4ebcfe58634cf5266197b0375bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kd6n5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bzs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:14Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.508156 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9s49l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a361336-2125-49a9-8332-eb66286dcdb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://108cc929d3e1674b5cc9341c92e9d4f5142fc0d87212666efba8890341e8adc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-slm6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd519645b9635f304f7af4e5e832eff6ae2964b35ed15d918bae7b85b51c1de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\
\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-slm6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9s49l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:14Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.528069 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa212ada-bfb5-4aeb-94bb-acf1f0afe319\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a58a444435270a569f4e25649cbfb81bb34700db968dcba7d85b9fdea9006bc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://210c615c5bcdaca954329d575b66f271921f54cd58ff6d75a90d5fc64bd11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-ce
rts\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dbb1933e5d51eec9dc5963fca537d46986cb88929f55683de676da7f0636133\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6694df4fa2976e9e741fe48d2f16c0b63b29ba80500e4238fc0ff6d6dad53bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:14Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.540285 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.540343 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.540362 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.540471 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.540500 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:14Z","lastTransitionTime":"2026-01-09T13:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.545743 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:14Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.566570 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f66189f627ddedba6b2bcb45f145ef53485a9b0b7401ca5fa09363145fce81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c247b3e6a38c79833e097898da1fe92f3dabc7dcd7b71a618506f054f4fd9c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:14Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.587866 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:14Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.644360 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.644428 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.644447 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.644473 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.644495 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:14Z","lastTransitionTime":"2026-01-09T13:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.747973 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.748054 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.748073 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.748102 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.748122 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:14Z","lastTransitionTime":"2026-01-09T13:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.751530 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 13:31:14 crc kubenswrapper[4919]: E0109 13:31:14.751773 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.851099 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.851178 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.851199 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.851268 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.851295 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:14Z","lastTransitionTime":"2026-01-09T13:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.954048 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.954119 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.954138 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.954169 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:14 crc kubenswrapper[4919]: I0109 13:31:14.954192 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:14Z","lastTransitionTime":"2026-01-09T13:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:15 crc kubenswrapper[4919]: I0109 13:31:15.057589 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:15 crc kubenswrapper[4919]: I0109 13:31:15.057638 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:15 crc kubenswrapper[4919]: I0109 13:31:15.057651 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:15 crc kubenswrapper[4919]: I0109 13:31:15.057673 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:15 crc kubenswrapper[4919]: I0109 13:31:15.057687 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:15Z","lastTransitionTime":"2026-01-09T13:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:15 crc kubenswrapper[4919]: I0109 13:31:15.161041 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:15 crc kubenswrapper[4919]: I0109 13:31:15.161096 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:15 crc kubenswrapper[4919]: I0109 13:31:15.161113 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:15 crc kubenswrapper[4919]: I0109 13:31:15.161138 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:15 crc kubenswrapper[4919]: I0109 13:31:15.161159 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:15Z","lastTransitionTime":"2026-01-09T13:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:15 crc kubenswrapper[4919]: I0109 13:31:15.264725 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:15 crc kubenswrapper[4919]: I0109 13:31:15.264788 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:15 crc kubenswrapper[4919]: I0109 13:31:15.264805 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:15 crc kubenswrapper[4919]: I0109 13:31:15.264830 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:15 crc kubenswrapper[4919]: I0109 13:31:15.264847 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:15Z","lastTransitionTime":"2026-01-09T13:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:15 crc kubenswrapper[4919]: I0109 13:31:15.368825 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:15 crc kubenswrapper[4919]: I0109 13:31:15.368904 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:15 crc kubenswrapper[4919]: I0109 13:31:15.368933 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:15 crc kubenswrapper[4919]: I0109 13:31:15.368967 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:15 crc kubenswrapper[4919]: I0109 13:31:15.368994 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:15Z","lastTransitionTime":"2026-01-09T13:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:15 crc kubenswrapper[4919]: I0109 13:31:15.472174 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:15 crc kubenswrapper[4919]: I0109 13:31:15.472240 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:15 crc kubenswrapper[4919]: I0109 13:31:15.472252 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:15 crc kubenswrapper[4919]: I0109 13:31:15.472270 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:15 crc kubenswrapper[4919]: I0109 13:31:15.472284 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:15Z","lastTransitionTime":"2026-01-09T13:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:15 crc kubenswrapper[4919]: I0109 13:31:15.575853 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:15 crc kubenswrapper[4919]: I0109 13:31:15.575931 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:15 crc kubenswrapper[4919]: I0109 13:31:15.575955 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:15 crc kubenswrapper[4919]: I0109 13:31:15.575988 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:15 crc kubenswrapper[4919]: I0109 13:31:15.576013 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:15Z","lastTransitionTime":"2026-01-09T13:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:15 crc kubenswrapper[4919]: I0109 13:31:15.680290 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:15 crc kubenswrapper[4919]: I0109 13:31:15.680346 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:15 crc kubenswrapper[4919]: I0109 13:31:15.680361 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:15 crc kubenswrapper[4919]: I0109 13:31:15.680380 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:15 crc kubenswrapper[4919]: I0109 13:31:15.680394 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:15Z","lastTransitionTime":"2026-01-09T13:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:15 crc kubenswrapper[4919]: I0109 13:31:15.751503 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xkhdz" Jan 09 13:31:15 crc kubenswrapper[4919]: I0109 13:31:15.751880 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 13:31:15 crc kubenswrapper[4919]: I0109 13:31:15.751737 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 13:31:15 crc kubenswrapper[4919]: E0109 13:31:15.752255 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xkhdz" podUID="7a2e9878-6b0e-4328-a3ca-9f828fb105c9" Jan 09 13:31:15 crc kubenswrapper[4919]: E0109 13:31:15.752326 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 13:31:15 crc kubenswrapper[4919]: E0109 13:31:15.752580 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 13:31:15 crc kubenswrapper[4919]: I0109 13:31:15.783615 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:15 crc kubenswrapper[4919]: I0109 13:31:15.783668 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:15 crc kubenswrapper[4919]: I0109 13:31:15.783687 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:15 crc kubenswrapper[4919]: I0109 13:31:15.783714 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:15 crc kubenswrapper[4919]: I0109 13:31:15.783733 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:15Z","lastTransitionTime":"2026-01-09T13:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:15 crc kubenswrapper[4919]: I0109 13:31:15.887301 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:15 crc kubenswrapper[4919]: I0109 13:31:15.887361 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:15 crc kubenswrapper[4919]: I0109 13:31:15.887378 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:15 crc kubenswrapper[4919]: I0109 13:31:15.887398 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:15 crc kubenswrapper[4919]: I0109 13:31:15.887411 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:15Z","lastTransitionTime":"2026-01-09T13:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:15 crc kubenswrapper[4919]: I0109 13:31:15.990037 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:15 crc kubenswrapper[4919]: I0109 13:31:15.990093 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:15 crc kubenswrapper[4919]: I0109 13:31:15.990117 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:15 crc kubenswrapper[4919]: I0109 13:31:15.990150 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:15 crc kubenswrapper[4919]: I0109 13:31:15.990175 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:15Z","lastTransitionTime":"2026-01-09T13:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:16 crc kubenswrapper[4919]: I0109 13:31:16.001721 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7a2e9878-6b0e-4328-a3ca-9f828fb105c9-metrics-certs\") pod \"network-metrics-daemon-xkhdz\" (UID: \"7a2e9878-6b0e-4328-a3ca-9f828fb105c9\") " pod="openshift-multus/network-metrics-daemon-xkhdz" Jan 09 13:31:16 crc kubenswrapper[4919]: E0109 13:31:16.001832 4919 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 09 13:31:16 crc kubenswrapper[4919]: E0109 13:31:16.001876 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7a2e9878-6b0e-4328-a3ca-9f828fb105c9-metrics-certs podName:7a2e9878-6b0e-4328-a3ca-9f828fb105c9 nodeName:}" failed. No retries permitted until 2026-01-09 13:31:32.001861133 +0000 UTC m=+71.549700583 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7a2e9878-6b0e-4328-a3ca-9f828fb105c9-metrics-certs") pod "network-metrics-daemon-xkhdz" (UID: "7a2e9878-6b0e-4328-a3ca-9f828fb105c9") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 09 13:31:16 crc kubenswrapper[4919]: I0109 13:31:16.093327 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:16 crc kubenswrapper[4919]: I0109 13:31:16.093365 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:16 crc kubenswrapper[4919]: I0109 13:31:16.093375 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:16 crc kubenswrapper[4919]: I0109 13:31:16.093391 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:16 crc kubenswrapper[4919]: I0109 13:31:16.093404 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:16Z","lastTransitionTime":"2026-01-09T13:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:16 crc kubenswrapper[4919]: I0109 13:31:16.197304 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:16 crc kubenswrapper[4919]: I0109 13:31:16.197362 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:16 crc kubenswrapper[4919]: I0109 13:31:16.197379 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:16 crc kubenswrapper[4919]: I0109 13:31:16.197401 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:16 crc kubenswrapper[4919]: I0109 13:31:16.197418 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:16Z","lastTransitionTime":"2026-01-09T13:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:16 crc kubenswrapper[4919]: I0109 13:31:16.301050 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:16 crc kubenswrapper[4919]: I0109 13:31:16.301103 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:16 crc kubenswrapper[4919]: I0109 13:31:16.301122 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:16 crc kubenswrapper[4919]: I0109 13:31:16.301149 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:16 crc kubenswrapper[4919]: I0109 13:31:16.301169 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:16Z","lastTransitionTime":"2026-01-09T13:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:16 crc kubenswrapper[4919]: I0109 13:31:16.404516 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:16 crc kubenswrapper[4919]: I0109 13:31:16.405719 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:16 crc kubenswrapper[4919]: I0109 13:31:16.406824 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:16 crc kubenswrapper[4919]: I0109 13:31:16.406988 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:16 crc kubenswrapper[4919]: I0109 13:31:16.407734 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:16Z","lastTransitionTime":"2026-01-09T13:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:16 crc kubenswrapper[4919]: I0109 13:31:16.507707 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 13:31:16 crc kubenswrapper[4919]: I0109 13:31:16.507963 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 13:31:16 crc kubenswrapper[4919]: I0109 13:31:16.508045 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 13:31:16 crc kubenswrapper[4919]: E0109 13:31:16.508407 4919 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 09 13:31:16 crc kubenswrapper[4919]: E0109 13:31:16.508511 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-09 13:31:48.50848526 +0000 UTC m=+88.056324750 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 09 13:31:16 crc kubenswrapper[4919]: E0109 13:31:16.508734 4919 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 09 13:31:16 crc kubenswrapper[4919]: E0109 13:31:16.508958 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-09 13:31:48.508926351 +0000 UTC m=+88.056765841 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 09 13:31:16 crc kubenswrapper[4919]: E0109 13:31:16.509141 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-09 13:31:48.509121606 +0000 UTC m=+88.056961096 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:31:16 crc kubenswrapper[4919]: I0109 13:31:16.510172 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:16 crc kubenswrapper[4919]: I0109 13:31:16.510261 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:16 crc kubenswrapper[4919]: I0109 13:31:16.510280 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:16 crc kubenswrapper[4919]: I0109 13:31:16.510310 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:16 crc kubenswrapper[4919]: I0109 13:31:16.510329 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:16Z","lastTransitionTime":"2026-01-09T13:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:16 crc kubenswrapper[4919]: I0109 13:31:16.608828 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 13:31:16 crc kubenswrapper[4919]: I0109 13:31:16.608917 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 13:31:16 crc kubenswrapper[4919]: E0109 13:31:16.609079 4919 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 09 13:31:16 crc kubenswrapper[4919]: E0109 13:31:16.609098 4919 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 09 13:31:16 crc kubenswrapper[4919]: E0109 13:31:16.609111 4919 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 13:31:16 crc kubenswrapper[4919]: E0109 13:31:16.609159 4919 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-09 13:31:48.609143113 +0000 UTC m=+88.156982573 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 13:31:16 crc kubenswrapper[4919]: E0109 13:31:16.610145 4919 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 09 13:31:16 crc kubenswrapper[4919]: E0109 13:31:16.610385 4919 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 09 13:31:16 crc kubenswrapper[4919]: E0109 13:31:16.610515 4919 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 13:31:16 crc kubenswrapper[4919]: E0109 13:31:16.610708 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-09 13:31:48.610675951 +0000 UTC m=+88.158515441 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 13:31:16 crc kubenswrapper[4919]: I0109 13:31:16.612715 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:16 crc kubenswrapper[4919]: I0109 13:31:16.612927 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:16 crc kubenswrapper[4919]: I0109 13:31:16.613072 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:16 crc kubenswrapper[4919]: I0109 13:31:16.613261 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:16 crc kubenswrapper[4919]: I0109 13:31:16.613392 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:16Z","lastTransitionTime":"2026-01-09T13:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:16 crc kubenswrapper[4919]: I0109 13:31:16.717088 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:16 crc kubenswrapper[4919]: I0109 13:31:16.717546 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:16 crc kubenswrapper[4919]: I0109 13:31:16.717729 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:16 crc kubenswrapper[4919]: I0109 13:31:16.717908 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:16 crc kubenswrapper[4919]: I0109 13:31:16.718158 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:16Z","lastTransitionTime":"2026-01-09T13:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:16 crc kubenswrapper[4919]: I0109 13:31:16.751824 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 13:31:16 crc kubenswrapper[4919]: E0109 13:31:16.752030 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 13:31:16 crc kubenswrapper[4919]: I0109 13:31:16.822774 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:16 crc kubenswrapper[4919]: I0109 13:31:16.823923 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:16 crc kubenswrapper[4919]: I0109 13:31:16.824087 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:16 crc kubenswrapper[4919]: I0109 13:31:16.824304 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:16 crc kubenswrapper[4919]: I0109 13:31:16.824439 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:16Z","lastTransitionTime":"2026-01-09T13:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:16 crc kubenswrapper[4919]: I0109 13:31:16.928158 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:16 crc kubenswrapper[4919]: I0109 13:31:16.928249 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:16 crc kubenswrapper[4919]: I0109 13:31:16.928270 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:16 crc kubenswrapper[4919]: I0109 13:31:16.928299 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:16 crc kubenswrapper[4919]: I0109 13:31:16.928330 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:16Z","lastTransitionTime":"2026-01-09T13:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.031940 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.032385 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.032550 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.032684 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.032805 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:17Z","lastTransitionTime":"2026-01-09T13:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.135969 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.136040 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.136063 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.136092 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.136112 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:17Z","lastTransitionTime":"2026-01-09T13:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.238983 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.239050 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.239068 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.239096 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.239121 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:17Z","lastTransitionTime":"2026-01-09T13:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.341480 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.341537 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.341552 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.341573 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.341592 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:17Z","lastTransitionTime":"2026-01-09T13:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.444803 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.444871 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.444896 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.444928 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.444953 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:17Z","lastTransitionTime":"2026-01-09T13:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.547623 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.547709 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.547742 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.547770 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.547792 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:17Z","lastTransitionTime":"2026-01-09T13:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.650535 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.650609 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.650629 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.650655 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.650675 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:17Z","lastTransitionTime":"2026-01-09T13:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.750708 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.750799 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.750824 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xkhdz" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.750729 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.750861 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.750972 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:17 crc kubenswrapper[4919]: E0109 13:31:17.750985 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.751004 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.751072 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:17Z","lastTransitionTime":"2026-01-09T13:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:17 crc kubenswrapper[4919]: E0109 13:31:17.751192 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 13:31:17 crc kubenswrapper[4919]: E0109 13:31:17.751456 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xkhdz" podUID="7a2e9878-6b0e-4328-a3ca-9f828fb105c9" Jan 09 13:31:17 crc kubenswrapper[4919]: E0109 13:31:17.774954 4919 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a043b745-924a-464c-80aa-f4df877f55bf\\\",\\\"systemUUID\\\":\\\"4cea77be-9aeb-4181-a0b4-b60e5a362fd9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:17Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.781893 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.781952 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.781979 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.782013 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.782036 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:17Z","lastTransitionTime":"2026-01-09T13:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:17 crc kubenswrapper[4919]: E0109 13:31:17.801616 4919 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a043b745-924a-464c-80aa-f4df877f55bf\\\",\\\"systemUUID\\\":\\\"4cea77be-9aeb-4181-a0b4-b60e5a362fd9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:17Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.808283 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.808336 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.808356 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.808388 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.808411 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:17Z","lastTransitionTime":"2026-01-09T13:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:17 crc kubenswrapper[4919]: E0109 13:31:17.837483 4919 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a043b745-924a-464c-80aa-f4df877f55bf\\\",\\\"systemUUID\\\":\\\"4cea77be-9aeb-4181-a0b4-b60e5a362fd9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:17Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.842824 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.842870 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.842881 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.842902 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.842914 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:17Z","lastTransitionTime":"2026-01-09T13:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:17 crc kubenswrapper[4919]: E0109 13:31:17.857655 4919 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a043b745-924a-464c-80aa-f4df877f55bf\\\",\\\"systemUUID\\\":\\\"4cea77be-9aeb-4181-a0b4-b60e5a362fd9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:17Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.862114 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.862157 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.862175 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.862196 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.862229 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:17Z","lastTransitionTime":"2026-01-09T13:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:17 crc kubenswrapper[4919]: E0109 13:31:17.878788 4919 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a043b745-924a-464c-80aa-f4df877f55bf\\\",\\\"systemUUID\\\":\\\"4cea77be-9aeb-4181-a0b4-b60e5a362fd9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:17Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:17 crc kubenswrapper[4919]: E0109 13:31:17.878912 4919 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.880659 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
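[analysis] The failed patches above all share one root cause: the API server cannot call the validating webhook node.network-node-identity.openshift.io because the serving certificate presented on https://127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, long before the node clock reading of 2026-01-09T13:31:17Z. The kubelet retries the status update a few times and then gives up with "update node status exceeds retry count". A minimal diagnostic sketch follows, assuming Python 3 and the openssl CLI are available on the node; the host and port come from the error text, everything else is illustrative:

    # check_webhook_cert.py -- fetch the webhook's serving certificate without
    # verifying it (verification is exactly what fails in the log above) and
    # print its validity window via the openssl CLI.
    import ssl
    import subprocess

    HOST, PORT = "127.0.0.1", 9743  # endpoint from the failed Post above

    # get_server_certificate() skips verification when no CA bundle is given,
    # so the fetch succeeds even though the certificate is expired.
    pem = ssl.get_server_certificate((HOST, PORT))

    proc = subprocess.run(
        ["openssl", "x509", "-noout", "-dates"],
        input=pem, capture_output=True, text=True, check=True,
    )
    # Per the error above this should print: notAfter=Aug 24 17:21:41 2025 GMT
    print(proc.stdout, end="")

The sketch only confirms which validity window the endpoint is serving; the fix is rotating the webhook's serving certificate (or correcting a skewed node clock), not anything on the kubelet side.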
event="NodeHasSufficientMemory" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.880720 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.880733 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.880751 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.880762 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:17Z","lastTransitionTime":"2026-01-09T13:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.983461 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.983508 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.983521 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.983541 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:17 crc kubenswrapper[4919]: I0109 13:31:17.983556 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:17Z","lastTransitionTime":"2026-01-09T13:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:18 crc kubenswrapper[4919]: I0109 13:31:18.085646 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:18 crc kubenswrapper[4919]: I0109 13:31:18.085689 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:18 crc kubenswrapper[4919]: I0109 13:31:18.085702 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:18 crc kubenswrapper[4919]: I0109 13:31:18.085719 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:18 crc kubenswrapper[4919]: I0109 13:31:18.085731 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:18Z","lastTransitionTime":"2026-01-09T13:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:18 crc kubenswrapper[4919]: I0109 13:31:18.188545 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:18 crc kubenswrapper[4919]: I0109 13:31:18.188995 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:18 crc kubenswrapper[4919]: I0109 13:31:18.189186 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:18 crc kubenswrapper[4919]: I0109 13:31:18.189391 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:18 crc kubenswrapper[4919]: I0109 13:31:18.189523 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:18Z","lastTransitionTime":"2026-01-09T13:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:18 crc kubenswrapper[4919]: I0109 13:31:18.292805 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:18 crc kubenswrapper[4919]: I0109 13:31:18.292857 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:18 crc kubenswrapper[4919]: I0109 13:31:18.292870 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:18 crc kubenswrapper[4919]: I0109 13:31:18.292889 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:18 crc kubenswrapper[4919]: I0109 13:31:18.292907 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:18Z","lastTransitionTime":"2026-01-09T13:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:18 crc kubenswrapper[4919]: I0109 13:31:18.396247 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:18 crc kubenswrapper[4919]: I0109 13:31:18.396321 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:18 crc kubenswrapper[4919]: I0109 13:31:18.396342 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:18 crc kubenswrapper[4919]: I0109 13:31:18.396370 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:18 crc kubenswrapper[4919]: I0109 13:31:18.396391 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:18Z","lastTransitionTime":"2026-01-09T13:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:18 crc kubenswrapper[4919]: I0109 13:31:18.499703 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:18 crc kubenswrapper[4919]: I0109 13:31:18.499770 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:18 crc kubenswrapper[4919]: I0109 13:31:18.499791 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:18 crc kubenswrapper[4919]: I0109 13:31:18.499820 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:18 crc kubenswrapper[4919]: I0109 13:31:18.499840 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:18Z","lastTransitionTime":"2026-01-09T13:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:18 crc kubenswrapper[4919]: I0109 13:31:18.603351 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:18 crc kubenswrapper[4919]: I0109 13:31:18.603416 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:18 crc kubenswrapper[4919]: I0109 13:31:18.603435 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:18 crc kubenswrapper[4919]: I0109 13:31:18.603464 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:18 crc kubenswrapper[4919]: I0109 13:31:18.603484 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:18Z","lastTransitionTime":"2026-01-09T13:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:18 crc kubenswrapper[4919]: I0109 13:31:18.706876 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:18 crc kubenswrapper[4919]: I0109 13:31:18.706950 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:18 crc kubenswrapper[4919]: I0109 13:31:18.706970 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:18 crc kubenswrapper[4919]: I0109 13:31:18.707000 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:18 crc kubenswrapper[4919]: I0109 13:31:18.707026 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:18Z","lastTransitionTime":"2026-01-09T13:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:18 crc kubenswrapper[4919]: I0109 13:31:18.751178 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 13:31:18 crc kubenswrapper[4919]: E0109 13:31:18.751416 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 13:31:18 crc kubenswrapper[4919]: I0109 13:31:18.810495 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:18 crc kubenswrapper[4919]: I0109 13:31:18.811132 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:18 crc kubenswrapper[4919]: I0109 13:31:18.811386 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:18 crc kubenswrapper[4919]: I0109 13:31:18.811596 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:18 crc kubenswrapper[4919]: I0109 13:31:18.811826 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:18Z","lastTransitionTime":"2026-01-09T13:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:18 crc kubenswrapper[4919]: I0109 13:31:18.915281 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:18 crc kubenswrapper[4919]: I0109 13:31:18.915355 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:18 crc kubenswrapper[4919]: I0109 13:31:18.915383 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:18 crc kubenswrapper[4919]: I0109 13:31:18.915417 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:18 crc kubenswrapper[4919]: I0109 13:31:18.915442 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:18Z","lastTransitionTime":"2026-01-09T13:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:19 crc kubenswrapper[4919]: I0109 13:31:19.018926 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:19 crc kubenswrapper[4919]: I0109 13:31:19.019008 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:19 crc kubenswrapper[4919]: I0109 13:31:19.019027 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:19 crc kubenswrapper[4919]: I0109 13:31:19.019061 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:19 crc kubenswrapper[4919]: I0109 13:31:19.019081 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:19Z","lastTransitionTime":"2026-01-09T13:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:19 crc kubenswrapper[4919]: I0109 13:31:19.122924 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:19 crc kubenswrapper[4919]: I0109 13:31:19.122999 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:19 crc kubenswrapper[4919]: I0109 13:31:19.123021 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:19 crc kubenswrapper[4919]: I0109 13:31:19.123051 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:19 crc kubenswrapper[4919]: I0109 13:31:19.123077 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:19Z","lastTransitionTime":"2026-01-09T13:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:19 crc kubenswrapper[4919]: I0109 13:31:19.226146 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:19 crc kubenswrapper[4919]: I0109 13:31:19.226247 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:19 crc kubenswrapper[4919]: I0109 13:31:19.226268 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:19 crc kubenswrapper[4919]: I0109 13:31:19.226299 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:19 crc kubenswrapper[4919]: I0109 13:31:19.226320 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:19Z","lastTransitionTime":"2026-01-09T13:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:19 crc kubenswrapper[4919]: I0109 13:31:19.329323 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:19 crc kubenswrapper[4919]: I0109 13:31:19.329393 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:19 crc kubenswrapper[4919]: I0109 13:31:19.329411 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:19 crc kubenswrapper[4919]: I0109 13:31:19.329436 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:19 crc kubenswrapper[4919]: I0109 13:31:19.329454 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:19Z","lastTransitionTime":"2026-01-09T13:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:19 crc kubenswrapper[4919]: I0109 13:31:19.433037 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:19 crc kubenswrapper[4919]: I0109 13:31:19.433120 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:19 crc kubenswrapper[4919]: I0109 13:31:19.433138 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:19 crc kubenswrapper[4919]: I0109 13:31:19.433169 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:19 crc kubenswrapper[4919]: I0109 13:31:19.433188 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:19Z","lastTransitionTime":"2026-01-09T13:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:19 crc kubenswrapper[4919]: I0109 13:31:19.535927 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:19 crc kubenswrapper[4919]: I0109 13:31:19.536008 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:19 crc kubenswrapper[4919]: I0109 13:31:19.536028 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:19 crc kubenswrapper[4919]: I0109 13:31:19.536059 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:19 crc kubenswrapper[4919]: I0109 13:31:19.536080 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:19Z","lastTransitionTime":"2026-01-09T13:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:19 crc kubenswrapper[4919]: I0109 13:31:19.639703 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:19 crc kubenswrapper[4919]: I0109 13:31:19.639771 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:19 crc kubenswrapper[4919]: I0109 13:31:19.639792 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:19 crc kubenswrapper[4919]: I0109 13:31:19.639819 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:19 crc kubenswrapper[4919]: I0109 13:31:19.639838 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:19Z","lastTransitionTime":"2026-01-09T13:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:19 crc kubenswrapper[4919]: I0109 13:31:19.743473 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:19 crc kubenswrapper[4919]: I0109 13:31:19.743569 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:19 crc kubenswrapper[4919]: I0109 13:31:19.743589 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:19 crc kubenswrapper[4919]: I0109 13:31:19.743620 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:19 crc kubenswrapper[4919]: I0109 13:31:19.743643 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:19Z","lastTransitionTime":"2026-01-09T13:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:19 crc kubenswrapper[4919]: I0109 13:31:19.750577 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 13:31:19 crc kubenswrapper[4919]: I0109 13:31:19.750654 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 13:31:19 crc kubenswrapper[4919]: I0109 13:31:19.750817 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xkhdz" Jan 09 13:31:19 crc kubenswrapper[4919]: E0109 13:31:19.750994 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 13:31:19 crc kubenswrapper[4919]: E0109 13:31:19.751245 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xkhdz" podUID="7a2e9878-6b0e-4328-a3ca-9f828fb105c9" Jan 09 13:31:19 crc kubenswrapper[4919]: E0109 13:31:19.751367 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 13:31:19 crc kubenswrapper[4919]: I0109 13:31:19.846836 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:19 crc kubenswrapper[4919]: I0109 13:31:19.846905 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:19 crc kubenswrapper[4919]: I0109 13:31:19.846920 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:19 crc kubenswrapper[4919]: I0109 13:31:19.846942 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:19 crc kubenswrapper[4919]: I0109 13:31:19.846957 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:19Z","lastTransitionTime":"2026-01-09T13:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:19 crc kubenswrapper[4919]: I0109 13:31:19.950343 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:19 crc kubenswrapper[4919]: I0109 13:31:19.950398 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:19 crc kubenswrapper[4919]: I0109 13:31:19.950409 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:19 crc kubenswrapper[4919]: I0109 13:31:19.950429 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:19 crc kubenswrapper[4919]: I0109 13:31:19.950442 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:19Z","lastTransitionTime":"2026-01-09T13:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:20 crc kubenswrapper[4919]: I0109 13:31:20.054417 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:20 crc kubenswrapper[4919]: I0109 13:31:20.054481 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:20 crc kubenswrapper[4919]: I0109 13:31:20.054499 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:20 crc kubenswrapper[4919]: I0109 13:31:20.054524 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:20 crc kubenswrapper[4919]: I0109 13:31:20.054546 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:20Z","lastTransitionTime":"2026-01-09T13:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:20 crc kubenswrapper[4919]: I0109 13:31:20.157685 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:20 crc kubenswrapper[4919]: I0109 13:31:20.157750 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:20 crc kubenswrapper[4919]: I0109 13:31:20.157767 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:20 crc kubenswrapper[4919]: I0109 13:31:20.158281 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:20 crc kubenswrapper[4919]: I0109 13:31:20.158326 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:20Z","lastTransitionTime":"2026-01-09T13:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:20 crc kubenswrapper[4919]: I0109 13:31:20.267008 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:20 crc kubenswrapper[4919]: I0109 13:31:20.267083 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:20 crc kubenswrapper[4919]: I0109 13:31:20.267102 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:20 crc kubenswrapper[4919]: I0109 13:31:20.267130 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:20 crc kubenswrapper[4919]: I0109 13:31:20.267153 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:20Z","lastTransitionTime":"2026-01-09T13:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:20 crc kubenswrapper[4919]: I0109 13:31:20.370681 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:20 crc kubenswrapper[4919]: I0109 13:31:20.370731 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:20 crc kubenswrapper[4919]: I0109 13:31:20.370745 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:20 crc kubenswrapper[4919]: I0109 13:31:20.370767 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:20 crc kubenswrapper[4919]: I0109 13:31:20.370783 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:20Z","lastTransitionTime":"2026-01-09T13:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:20 crc kubenswrapper[4919]: I0109 13:31:20.474562 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:20 crc kubenswrapper[4919]: I0109 13:31:20.474608 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:20 crc kubenswrapper[4919]: I0109 13:31:20.474620 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:20 crc kubenswrapper[4919]: I0109 13:31:20.474639 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:20 crc kubenswrapper[4919]: I0109 13:31:20.474651 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:20Z","lastTransitionTime":"2026-01-09T13:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:20 crc kubenswrapper[4919]: I0109 13:31:20.577540 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:20 crc kubenswrapper[4919]: I0109 13:31:20.577579 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:20 crc kubenswrapper[4919]: I0109 13:31:20.577591 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:20 crc kubenswrapper[4919]: I0109 13:31:20.577610 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:20 crc kubenswrapper[4919]: I0109 13:31:20.577624 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:20Z","lastTransitionTime":"2026-01-09T13:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:20 crc kubenswrapper[4919]: I0109 13:31:20.681394 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:20 crc kubenswrapper[4919]: I0109 13:31:20.681452 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:20 crc kubenswrapper[4919]: I0109 13:31:20.681491 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:20 crc kubenswrapper[4919]: I0109 13:31:20.681514 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:20 crc kubenswrapper[4919]: I0109 13:31:20.681528 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:20Z","lastTransitionTime":"2026-01-09T13:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:20 crc kubenswrapper[4919]: I0109 13:31:20.751298 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 13:31:20 crc kubenswrapper[4919]: E0109 13:31:20.751580 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 13:31:20 crc kubenswrapper[4919]: I0109 13:31:20.775924 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:20Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:20 crc kubenswrapper[4919]: I0109 13:31:20.784836 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:20 crc kubenswrapper[4919]: I0109 13:31:20.784892 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:20 crc kubenswrapper[4919]: I0109 13:31:20.784911 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:20 crc kubenswrapper[4919]: I0109 13:31:20.784942 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:20 crc kubenswrapper[4919]: I0109 13:31:20.784961 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:20Z","lastTransitionTime":"2026-01-09T13:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:20 crc kubenswrapper[4919]: I0109 13:31:20.801629 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-97zdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21befbc8-9e98-4557-89af-a116cc8c484c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e336d165af12149ba05542f8cebe8a16c6fd66d3317d27c880b8969e7666691d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://458aeaca27fca436a56c5bb1f4180fc11cc5d27c53d33cd5b00f17e0e6f99322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458aeaca27fca436a56c5bb1f4180fc11cc5d27c53d33cd5b00f17e0e6f99322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3ea2572c59e697886d3666518ed78dcbe1bde1dd6fc6255e3846d1b34f26cdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ea2572c59e697886d3666518ed78dcbe1bde1dd6fc6255e3846d1b34f26cdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-97zdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:20Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:20 crc kubenswrapper[4919]: I0109 13:31:20.824602 4919 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d6c98ae6aa3041e47bac4782d73902dc39137b2fc8550efce479bfb9cdf37b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:20Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:20 crc kubenswrapper[4919]: I0109 13:31:20.847186 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:20Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:20 crc kubenswrapper[4919]: I0109 13:31:20.867404 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9z7cc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1115c0ba-16d5-4e81-a4b4-07ba7f360825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6522bc54695ade8637b22dbf9074c82430ce4c8deb8d5d9631933a1d49d5b4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkvnj\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9z7cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:20Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:20 crc kubenswrapper[4919]: I0109 13:31:20.887313 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:20 crc kubenswrapper[4919]: I0109 13:31:20.887378 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:20 crc kubenswrapper[4919]: I0109 13:31:20.887397 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:20 crc kubenswrapper[4919]: I0109 13:31:20.887425 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:20 crc kubenswrapper[4919]: I0109 13:31:20.887447 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:20Z","lastTransitionTime":"2026-01-09T13:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:20 crc kubenswrapper[4919]: I0109 13:31:20.891287 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c494f87fb3dc8455ed5de9b4073f1b69d9d96c3b2a4d260cb5d57b9df1e825e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:20Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:20 crc kubenswrapper[4919]: I0109 13:31:20.908780 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6730e2d4c2375d62687a3919410fd7440dc63ec931c008941ab1882e625c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b529d70339efbd46e30e992ceb5afe2db97bdde02f50798c9eb61dfa23d7f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9m5lv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:20Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:20 crc kubenswrapper[4919]: I0109 13:31:20.928624 4919 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-kgw8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3890ccf2af2b3f8c6648e7b36c716d7b9ef45d6821c5c7162c455641399184a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srz24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-kgw8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:20Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:20 crc kubenswrapper[4919]: I0109 13:31:20.942570 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xkhdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a2e9878-6b0e-4328-a3ca-9f828fb105c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:31:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xkhdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-09T13:31:20Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:20 crc kubenswrapper[4919]: I0109 13:31:20.968540 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a055b487-5d63-4265-ac12-735612354e73\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903fbe8270e684937c51c54aeaaaaf7cfed5e77f6d76d5874210ffcf608a9afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3759051eb43a6806b9525498f376abc060e88ba0d7bc9274e84eaabfdaefd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8db1dada305e10f10cb4fe92cfaa3fc88e2e2ec3338fa31efc240af0888f60c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b3f68aa173942f9d95a3832234e0bbd5ccacf63cd73b8d110708ebd0978a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d7857918228a0905bb39cd161f6ba21561cf49ce7980a1835e5b3cc68cef3c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T13:30:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0109 13:30:33.398307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 13:30:33.400005 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4019686595/tls.crt::/tmp/serving-cert-4019686595/tls.key\\\\\\\"\\\\nI0109 13:30:43.320927 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 13:30:43.328283 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 13:30:43.328361 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 13:30:43.328421 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 13:30:43.328435 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 13:30:43.338653 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 13:30:43.338748 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 13:30:43.338794 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 13:30:43.338801 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 13:30:43.338808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 13:30:43.339373 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 13:30:43.343716 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2d0c02e83be85aa5925b8f607809b6bbf3db3188f4e165ca42eb04fe694da6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:20Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:20 crc kubenswrapper[4919]: I0109 13:31:20.989371 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9797b243-6d0f-4f8b-8b3d-b92ac439e3bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d15e612b4abcc61c356602fa521bd156a5e2f5b1e89bbf48b2bceac8a06fbca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d24ffabc3436ac75e2611506f1d4d40faed59e4fa4c618523275331408bb219d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae0a71cfd94d80d04efad2c5671e1a6422ee373da4fc7ab38e36198e3fcad96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5ef78571904b7b98f20f7eb25e83def8b64fc7e86e3c376ee2c6a00334e8667\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5ef78571904b7b98f20f7eb25e83def8b64fc7e86e3c376ee2c6a00334e8667\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:20Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:21 crc kubenswrapper[4919]: I0109 13:31:21.002003 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:21 crc kubenswrapper[4919]: I0109 13:31:21.002067 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:21 crc kubenswrapper[4919]: I0109 13:31:21.002110 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:21 crc kubenswrapper[4919]: I0109 13:31:21.002170 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:21 crc kubenswrapper[4919]: I0109 13:31:21.002188 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:21Z","lastTransitionTime":"2026-01-09T13:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:21 crc kubenswrapper[4919]: I0109 13:31:21.004057 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f66189f627ddedba6b2bcb45f145ef53485a9b0b7401ca5fa09363145fce81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c247b3e6a38c79833e097898da1fe92f3dabc7dcd7b71a618506f054f4fd9c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:21Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:21 crc kubenswrapper[4919]: I0109 13:31:21.036295 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a11a9b6-2419-4f04-b35e-ba296d70b705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f97e00dbff0bc7856400366616b2dfea1022f2dc151e8f1d7cbc5d377639b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eecea89afe25c537de66df075217db267c79f45aecda1f91183fad854aa80c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec868806819a987b7e96dbe674677a084fd3c8bbb3bdae6dd584d62df43b78f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac368c1c0799b3fe1830e01a310247ed45949ab217cf83d98b487a12e482157e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d6b21d85be97a07f83c996deca344aba3e2bd0b249d1ce02f15db105649bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15885cccea72f90cf3317dca9a65041611aa4b8de069779e15d43ad39112f1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91ab55c28499edc33ffdd00414823b7319531fbdf5edc0bc8d28cb3ad30265ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91ab55c28499edc33ffdd00414823b7319531fbdf5edc0bc8d28cb3ad30265ae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T13:31:12Z\\\",\\\"message\\\":\\\"\\\\nI0109 13:31:11.711533 6593 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0109 13:31:11.711539 6593 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0109 13:31:11.711614 6593 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0109 13:31:11.712633 6593 handler.go:208] Removed *v1.Node event handler 2\\\\nI0109 13:31:11.712650 6593 handler.go:208] Removed *v1.Node event handler 7\\\\nI0109 13:31:11.712661 6593 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0109 13:31:11.712738 6593 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0109 13:31:11.712799 6593 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0109 13:31:11.712866 6593 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0109 13:31:11.712926 6593 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0109 13:31:11.712950 6593 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0109 13:31:11.712951 6593 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0109 13:31:11.712930 6593 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0109 13:31:11.712998 6593 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0109 13:31:11.713041 6593 factory.go:656] Stopping watch factory\\\\nI0109 13:31:11.713073 6593 ovnkube.go:599] Stopped ovnkube\\\\nI0109 13\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:31:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-w74hl_openshift-ovn-kubernetes(4a11a9b6-2419-4f04-b35e-ba296d70b705)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1618a6491a924e7cd28340bea333585a7c7a634ca063183f21574ed23df002cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w74hl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:21Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:21 crc kubenswrapper[4919]: I0109 13:31:21.051301 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bzs4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd1e83cb-5a48-4331-b403-d7a07e8aa67f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://389ab6b02eef188a2713e27d93c70a64a1ab4ebcfe58634cf5266197b0375bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kd6n5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bzs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:21Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:21 crc kubenswrapper[4919]: I0109 13:31:21.070497 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9s49l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a361336-2125-49a9-8332-eb66286dcdb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://108cc929d3e1674b5cc9341c92e9d4f5142fc0d87212666efba8890341e8adc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-slm6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd519645b9635f304f7af4e5e832eff6ae2964b35ed15d918bae7b85b51c1de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\
\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-slm6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9s49l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:21Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:21 crc kubenswrapper[4919]: I0109 13:31:21.090457 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa212ada-bfb5-4aeb-94bb-acf1f0afe319\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a58a444435270a569f4e25649cbfb81bb34700db968dcba7d85b9fdea9006bc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://210c615c5bcdaca954329d575b66f271921f54cd58ff6d75a90d5fc64bd11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-ce
rts\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dbb1933e5d51eec9dc5963fca537d46986cb88929f55683de676da7f0636133\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6694df4fa2976e9e741fe48d2f16c0b63b29ba80500e4238fc0ff6d6dad53bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:21Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:21 crc kubenswrapper[4919]: I0109 13:31:21.105580 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:21 crc kubenswrapper[4919]: I0109 13:31:21.105640 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:21 crc kubenswrapper[4919]: I0109 13:31:21.105659 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:21 crc kubenswrapper[4919]: I0109 13:31:21.105689 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:21 crc kubenswrapper[4919]: I0109 13:31:21.105710 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:21Z","lastTransitionTime":"2026-01-09T13:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:21 crc kubenswrapper[4919]: I0109 13:31:21.107840 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:21Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:21 crc kubenswrapper[4919]: I0109 13:31:21.208635 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:21 crc kubenswrapper[4919]: I0109 13:31:21.208687 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:21 crc kubenswrapper[4919]: I0109 13:31:21.208700 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:21 crc kubenswrapper[4919]: I0109 13:31:21.208720 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:21 crc kubenswrapper[4919]: I0109 13:31:21.208733 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:21Z","lastTransitionTime":"2026-01-09T13:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:21 crc kubenswrapper[4919]: I0109 13:31:21.311965 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:21 crc kubenswrapper[4919]: I0109 13:31:21.312010 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:21 crc kubenswrapper[4919]: I0109 13:31:21.312021 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:21 crc kubenswrapper[4919]: I0109 13:31:21.312041 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:21 crc kubenswrapper[4919]: I0109 13:31:21.312054 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:21Z","lastTransitionTime":"2026-01-09T13:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:21 crc kubenswrapper[4919]: I0109 13:31:21.415052 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:21 crc kubenswrapper[4919]: I0109 13:31:21.415105 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:21 crc kubenswrapper[4919]: I0109 13:31:21.415116 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:21 crc kubenswrapper[4919]: I0109 13:31:21.415136 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:21 crc kubenswrapper[4919]: I0109 13:31:21.415150 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:21Z","lastTransitionTime":"2026-01-09T13:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:21 crc kubenswrapper[4919]: I0109 13:31:21.520610 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:21 crc kubenswrapper[4919]: I0109 13:31:21.521077 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:21 crc kubenswrapper[4919]: I0109 13:31:21.521313 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:21 crc kubenswrapper[4919]: I0109 13:31:21.521535 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:21 crc kubenswrapper[4919]: I0109 13:31:21.521695 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:21Z","lastTransitionTime":"2026-01-09T13:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:21 crc kubenswrapper[4919]: I0109 13:31:21.624671 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:21 crc kubenswrapper[4919]: I0109 13:31:21.624756 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:21 crc kubenswrapper[4919]: I0109 13:31:21.624769 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:21 crc kubenswrapper[4919]: I0109 13:31:21.624788 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:21 crc kubenswrapper[4919]: I0109 13:31:21.624828 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:21Z","lastTransitionTime":"2026-01-09T13:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:21 crc kubenswrapper[4919]: I0109 13:31:21.727147 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:21 crc kubenswrapper[4919]: I0109 13:31:21.727265 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:21 crc kubenswrapper[4919]: I0109 13:31:21.727281 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:21 crc kubenswrapper[4919]: I0109 13:31:21.727304 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:21 crc kubenswrapper[4919]: I0109 13:31:21.727321 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:21Z","lastTransitionTime":"2026-01-09T13:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:21 crc kubenswrapper[4919]: I0109 13:31:21.751373 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xkhdz" Jan 09 13:31:21 crc kubenswrapper[4919]: I0109 13:31:21.751507 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 13:31:21 crc kubenswrapper[4919]: I0109 13:31:21.751607 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 13:31:21 crc kubenswrapper[4919]: E0109 13:31:21.751529 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xkhdz" podUID="7a2e9878-6b0e-4328-a3ca-9f828fb105c9" Jan 09 13:31:21 crc kubenswrapper[4919]: E0109 13:31:21.751781 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 13:31:21 crc kubenswrapper[4919]: E0109 13:31:21.751888 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 13:31:21 crc kubenswrapper[4919]: I0109 13:31:21.830390 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:21 crc kubenswrapper[4919]: I0109 13:31:21.830455 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:21 crc kubenswrapper[4919]: I0109 13:31:21.830469 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:21 crc kubenswrapper[4919]: I0109 13:31:21.830486 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:21 crc kubenswrapper[4919]: I0109 13:31:21.830499 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:21Z","lastTransitionTime":"2026-01-09T13:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:21 crc kubenswrapper[4919]: I0109 13:31:21.933030 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:21 crc kubenswrapper[4919]: I0109 13:31:21.933078 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:21 crc kubenswrapper[4919]: I0109 13:31:21.933087 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:21 crc kubenswrapper[4919]: I0109 13:31:21.933103 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:21 crc kubenswrapper[4919]: I0109 13:31:21.933115 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:21Z","lastTransitionTime":"2026-01-09T13:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:22 crc kubenswrapper[4919]: I0109 13:31:22.035998 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:22 crc kubenswrapper[4919]: I0109 13:31:22.036041 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:22 crc kubenswrapper[4919]: I0109 13:31:22.036052 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:22 crc kubenswrapper[4919]: I0109 13:31:22.036068 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:22 crc kubenswrapper[4919]: I0109 13:31:22.036078 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:22Z","lastTransitionTime":"2026-01-09T13:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:22 crc kubenswrapper[4919]: I0109 13:31:22.139132 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:22 crc kubenswrapper[4919]: I0109 13:31:22.139239 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:22 crc kubenswrapper[4919]: I0109 13:31:22.139262 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:22 crc kubenswrapper[4919]: I0109 13:31:22.139290 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:22 crc kubenswrapper[4919]: I0109 13:31:22.139311 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:22Z","lastTransitionTime":"2026-01-09T13:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:22 crc kubenswrapper[4919]: I0109 13:31:22.241532 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:22 crc kubenswrapper[4919]: I0109 13:31:22.241605 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:22 crc kubenswrapper[4919]: I0109 13:31:22.241624 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:22 crc kubenswrapper[4919]: I0109 13:31:22.241656 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:22 crc kubenswrapper[4919]: I0109 13:31:22.241678 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:22Z","lastTransitionTime":"2026-01-09T13:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:22 crc kubenswrapper[4919]: I0109 13:31:22.344922 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:22 crc kubenswrapper[4919]: I0109 13:31:22.344993 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:22 crc kubenswrapper[4919]: I0109 13:31:22.345011 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:22 crc kubenswrapper[4919]: I0109 13:31:22.345045 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:22 crc kubenswrapper[4919]: I0109 13:31:22.345066 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:22Z","lastTransitionTime":"2026-01-09T13:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:22 crc kubenswrapper[4919]: I0109 13:31:22.448838 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:22 crc kubenswrapper[4919]: I0109 13:31:22.448896 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:22 crc kubenswrapper[4919]: I0109 13:31:22.448914 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:22 crc kubenswrapper[4919]: I0109 13:31:22.448942 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:22 crc kubenswrapper[4919]: I0109 13:31:22.448964 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:22Z","lastTransitionTime":"2026-01-09T13:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:22 crc kubenswrapper[4919]: I0109 13:31:22.552651 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:22 crc kubenswrapper[4919]: I0109 13:31:22.552716 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:22 crc kubenswrapper[4919]: I0109 13:31:22.552739 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:22 crc kubenswrapper[4919]: I0109 13:31:22.552777 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:22 crc kubenswrapper[4919]: I0109 13:31:22.552800 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:22Z","lastTransitionTime":"2026-01-09T13:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:22 crc kubenswrapper[4919]: I0109 13:31:22.657280 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:22 crc kubenswrapper[4919]: I0109 13:31:22.658983 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:22 crc kubenswrapper[4919]: I0109 13:31:22.659256 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:22 crc kubenswrapper[4919]: I0109 13:31:22.659415 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:22 crc kubenswrapper[4919]: I0109 13:31:22.659546 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:22Z","lastTransitionTime":"2026-01-09T13:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:22 crc kubenswrapper[4919]: I0109 13:31:22.750898 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 13:31:22 crc kubenswrapper[4919]: E0109 13:31:22.751113 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 13:31:22 crc kubenswrapper[4919]: I0109 13:31:22.763286 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:22 crc kubenswrapper[4919]: I0109 13:31:22.763348 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:22 crc kubenswrapper[4919]: I0109 13:31:22.763376 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:22 crc kubenswrapper[4919]: I0109 13:31:22.763409 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:22 crc kubenswrapper[4919]: I0109 13:31:22.763433 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:22Z","lastTransitionTime":"2026-01-09T13:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:22 crc kubenswrapper[4919]: I0109 13:31:22.867714 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:22 crc kubenswrapper[4919]: I0109 13:31:22.867765 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:22 crc kubenswrapper[4919]: I0109 13:31:22.867782 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:22 crc kubenswrapper[4919]: I0109 13:31:22.867805 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:22 crc kubenswrapper[4919]: I0109 13:31:22.867824 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:22Z","lastTransitionTime":"2026-01-09T13:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:22 crc kubenswrapper[4919]: I0109 13:31:22.970438 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:22 crc kubenswrapper[4919]: I0109 13:31:22.970488 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:22 crc kubenswrapper[4919]: I0109 13:31:22.970505 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:22 crc kubenswrapper[4919]: I0109 13:31:22.970531 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:22 crc kubenswrapper[4919]: I0109 13:31:22.970551 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:22Z","lastTransitionTime":"2026-01-09T13:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:23 crc kubenswrapper[4919]: I0109 13:31:23.073065 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:23 crc kubenswrapper[4919]: I0109 13:31:23.073118 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:23 crc kubenswrapper[4919]: I0109 13:31:23.073136 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:23 crc kubenswrapper[4919]: I0109 13:31:23.073161 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:23 crc kubenswrapper[4919]: I0109 13:31:23.073277 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:23Z","lastTransitionTime":"2026-01-09T13:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:23 crc kubenswrapper[4919]: I0109 13:31:23.176294 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:23 crc kubenswrapper[4919]: I0109 13:31:23.176330 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:23 crc kubenswrapper[4919]: I0109 13:31:23.176342 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:23 crc kubenswrapper[4919]: I0109 13:31:23.176359 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:23 crc kubenswrapper[4919]: I0109 13:31:23.176371 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:23Z","lastTransitionTime":"2026-01-09T13:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:23 crc kubenswrapper[4919]: I0109 13:31:23.279423 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:23 crc kubenswrapper[4919]: I0109 13:31:23.279475 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:23 crc kubenswrapper[4919]: I0109 13:31:23.279492 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:23 crc kubenswrapper[4919]: I0109 13:31:23.279515 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:23 crc kubenswrapper[4919]: I0109 13:31:23.279534 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:23Z","lastTransitionTime":"2026-01-09T13:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:23 crc kubenswrapper[4919]: I0109 13:31:23.383101 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:23 crc kubenswrapper[4919]: I0109 13:31:23.383148 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:23 crc kubenswrapper[4919]: I0109 13:31:23.383164 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:23 crc kubenswrapper[4919]: I0109 13:31:23.383188 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:23 crc kubenswrapper[4919]: I0109 13:31:23.383205 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:23Z","lastTransitionTime":"2026-01-09T13:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:23 crc kubenswrapper[4919]: I0109 13:31:23.486616 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:23 crc kubenswrapper[4919]: I0109 13:31:23.486673 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:23 crc kubenswrapper[4919]: I0109 13:31:23.486690 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:23 crc kubenswrapper[4919]: I0109 13:31:23.486714 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:23 crc kubenswrapper[4919]: I0109 13:31:23.486731 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:23Z","lastTransitionTime":"2026-01-09T13:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:23 crc kubenswrapper[4919]: I0109 13:31:23.589607 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:23 crc kubenswrapper[4919]: I0109 13:31:23.589678 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:23 crc kubenswrapper[4919]: I0109 13:31:23.589692 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:23 crc kubenswrapper[4919]: I0109 13:31:23.589720 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:23 crc kubenswrapper[4919]: I0109 13:31:23.589739 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:23Z","lastTransitionTime":"2026-01-09T13:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:23 crc kubenswrapper[4919]: I0109 13:31:23.693845 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:23 crc kubenswrapper[4919]: I0109 13:31:23.693890 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:23 crc kubenswrapper[4919]: I0109 13:31:23.693930 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:23 crc kubenswrapper[4919]: I0109 13:31:23.693952 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:23 crc kubenswrapper[4919]: I0109 13:31:23.693965 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:23Z","lastTransitionTime":"2026-01-09T13:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:23 crc kubenswrapper[4919]: I0109 13:31:23.751329 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 13:31:23 crc kubenswrapper[4919]: I0109 13:31:23.751388 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 13:31:23 crc kubenswrapper[4919]: I0109 13:31:23.751475 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xkhdz" Jan 09 13:31:23 crc kubenswrapper[4919]: E0109 13:31:23.751519 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 13:31:23 crc kubenswrapper[4919]: E0109 13:31:23.751679 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xkhdz" podUID="7a2e9878-6b0e-4328-a3ca-9f828fb105c9" Jan 09 13:31:23 crc kubenswrapper[4919]: E0109 13:31:23.751802 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 13:31:23 crc kubenswrapper[4919]: I0109 13:31:23.797540 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:23 crc kubenswrapper[4919]: I0109 13:31:23.797604 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:23 crc kubenswrapper[4919]: I0109 13:31:23.797623 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:23 crc kubenswrapper[4919]: I0109 13:31:23.797654 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:23 crc kubenswrapper[4919]: I0109 13:31:23.797672 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:23Z","lastTransitionTime":"2026-01-09T13:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:23 crc kubenswrapper[4919]: I0109 13:31:23.900802 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:23 crc kubenswrapper[4919]: I0109 13:31:23.900848 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:23 crc kubenswrapper[4919]: I0109 13:31:23.900862 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:23 crc kubenswrapper[4919]: I0109 13:31:23.900881 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:23 crc kubenswrapper[4919]: I0109 13:31:23.900895 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:23Z","lastTransitionTime":"2026-01-09T13:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:24 crc kubenswrapper[4919]: I0109 13:31:24.004281 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:24 crc kubenswrapper[4919]: I0109 13:31:24.004332 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:24 crc kubenswrapper[4919]: I0109 13:31:24.004341 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:24 crc kubenswrapper[4919]: I0109 13:31:24.004357 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:24 crc kubenswrapper[4919]: I0109 13:31:24.004368 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:24Z","lastTransitionTime":"2026-01-09T13:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:24 crc kubenswrapper[4919]: I0109 13:31:24.107234 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:24 crc kubenswrapper[4919]: I0109 13:31:24.107280 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:24 crc kubenswrapper[4919]: I0109 13:31:24.107292 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:24 crc kubenswrapper[4919]: I0109 13:31:24.107310 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:24 crc kubenswrapper[4919]: I0109 13:31:24.107322 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:24Z","lastTransitionTime":"2026-01-09T13:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:24 crc kubenswrapper[4919]: I0109 13:31:24.211698 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:24 crc kubenswrapper[4919]: I0109 13:31:24.212110 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:24 crc kubenswrapper[4919]: I0109 13:31:24.212390 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:24 crc kubenswrapper[4919]: I0109 13:31:24.212616 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:24 crc kubenswrapper[4919]: I0109 13:31:24.212800 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:24Z","lastTransitionTime":"2026-01-09T13:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:24 crc kubenswrapper[4919]: I0109 13:31:24.316950 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:24 crc kubenswrapper[4919]: I0109 13:31:24.317008 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:24 crc kubenswrapper[4919]: I0109 13:31:24.317025 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:24 crc kubenswrapper[4919]: I0109 13:31:24.317050 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:24 crc kubenswrapper[4919]: I0109 13:31:24.317066 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:24Z","lastTransitionTime":"2026-01-09T13:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:24 crc kubenswrapper[4919]: I0109 13:31:24.419480 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:24 crc kubenswrapper[4919]: I0109 13:31:24.419528 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:24 crc kubenswrapper[4919]: I0109 13:31:24.419541 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:24 crc kubenswrapper[4919]: I0109 13:31:24.419562 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:24 crc kubenswrapper[4919]: I0109 13:31:24.419583 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:24Z","lastTransitionTime":"2026-01-09T13:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:24 crc kubenswrapper[4919]: I0109 13:31:24.523060 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:24 crc kubenswrapper[4919]: I0109 13:31:24.523576 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:24 crc kubenswrapper[4919]: I0109 13:31:24.523787 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:24 crc kubenswrapper[4919]: I0109 13:31:24.523943 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:24 crc kubenswrapper[4919]: I0109 13:31:24.524097 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:24Z","lastTransitionTime":"2026-01-09T13:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:24 crc kubenswrapper[4919]: I0109 13:31:24.627177 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:24 crc kubenswrapper[4919]: I0109 13:31:24.627817 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:24 crc kubenswrapper[4919]: I0109 13:31:24.628091 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:24 crc kubenswrapper[4919]: I0109 13:31:24.628314 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:24 crc kubenswrapper[4919]: I0109 13:31:24.628636 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:24Z","lastTransitionTime":"2026-01-09T13:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:24 crc kubenswrapper[4919]: I0109 13:31:24.731765 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:24 crc kubenswrapper[4919]: I0109 13:31:24.731817 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:24 crc kubenswrapper[4919]: I0109 13:31:24.731833 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:24 crc kubenswrapper[4919]: I0109 13:31:24.731851 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:24 crc kubenswrapper[4919]: I0109 13:31:24.731861 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:24Z","lastTransitionTime":"2026-01-09T13:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:24 crc kubenswrapper[4919]: I0109 13:31:24.751509 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 13:31:24 crc kubenswrapper[4919]: E0109 13:31:24.751637 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 13:31:24 crc kubenswrapper[4919]: I0109 13:31:24.835041 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:24 crc kubenswrapper[4919]: I0109 13:31:24.835127 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:24 crc kubenswrapper[4919]: I0109 13:31:24.835155 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:24 crc kubenswrapper[4919]: I0109 13:31:24.835198 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:24 crc kubenswrapper[4919]: I0109 13:31:24.835263 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:24Z","lastTransitionTime":"2026-01-09T13:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:24 crc kubenswrapper[4919]: I0109 13:31:24.938983 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:24 crc kubenswrapper[4919]: I0109 13:31:24.939056 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:24 crc kubenswrapper[4919]: I0109 13:31:24.939075 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:24 crc kubenswrapper[4919]: I0109 13:31:24.939104 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:24 crc kubenswrapper[4919]: I0109 13:31:24.939126 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:24Z","lastTransitionTime":"2026-01-09T13:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:25 crc kubenswrapper[4919]: I0109 13:31:25.041724 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:25 crc kubenswrapper[4919]: I0109 13:31:25.041773 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:25 crc kubenswrapper[4919]: I0109 13:31:25.041786 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:25 crc kubenswrapper[4919]: I0109 13:31:25.041804 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:25 crc kubenswrapper[4919]: I0109 13:31:25.041820 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:25Z","lastTransitionTime":"2026-01-09T13:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:25 crc kubenswrapper[4919]: I0109 13:31:25.150281 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:25 crc kubenswrapper[4919]: I0109 13:31:25.150340 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:25 crc kubenswrapper[4919]: I0109 13:31:25.150356 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:25 crc kubenswrapper[4919]: I0109 13:31:25.150380 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:25 crc kubenswrapper[4919]: I0109 13:31:25.150395 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:25Z","lastTransitionTime":"2026-01-09T13:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:25 crc kubenswrapper[4919]: I0109 13:31:25.252992 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:25 crc kubenswrapper[4919]: I0109 13:31:25.253058 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:25 crc kubenswrapper[4919]: I0109 13:31:25.253078 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:25 crc kubenswrapper[4919]: I0109 13:31:25.253105 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:25 crc kubenswrapper[4919]: I0109 13:31:25.253124 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:25Z","lastTransitionTime":"2026-01-09T13:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:25 crc kubenswrapper[4919]: I0109 13:31:25.355621 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:25 crc kubenswrapper[4919]: I0109 13:31:25.355680 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:25 crc kubenswrapper[4919]: I0109 13:31:25.355698 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:25 crc kubenswrapper[4919]: I0109 13:31:25.355724 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:25 crc kubenswrapper[4919]: I0109 13:31:25.355744 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:25Z","lastTransitionTime":"2026-01-09T13:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:25 crc kubenswrapper[4919]: I0109 13:31:25.458186 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:25 crc kubenswrapper[4919]: I0109 13:31:25.458236 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:25 crc kubenswrapper[4919]: I0109 13:31:25.458249 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:25 crc kubenswrapper[4919]: I0109 13:31:25.458267 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:25 crc kubenswrapper[4919]: I0109 13:31:25.458278 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:25Z","lastTransitionTime":"2026-01-09T13:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:25 crc kubenswrapper[4919]: I0109 13:31:25.561282 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:25 crc kubenswrapper[4919]: I0109 13:31:25.561326 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:25 crc kubenswrapper[4919]: I0109 13:31:25.561338 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:25 crc kubenswrapper[4919]: I0109 13:31:25.561357 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:25 crc kubenswrapper[4919]: I0109 13:31:25.561368 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:25Z","lastTransitionTime":"2026-01-09T13:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:25 crc kubenswrapper[4919]: I0109 13:31:25.664265 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:25 crc kubenswrapper[4919]: I0109 13:31:25.664310 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:25 crc kubenswrapper[4919]: I0109 13:31:25.664322 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:25 crc kubenswrapper[4919]: I0109 13:31:25.664340 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:25 crc kubenswrapper[4919]: I0109 13:31:25.664355 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:25Z","lastTransitionTime":"2026-01-09T13:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:25 crc kubenswrapper[4919]: I0109 13:31:25.751446 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xkhdz" Jan 09 13:31:25 crc kubenswrapper[4919]: E0109 13:31:25.751615 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xkhdz" podUID="7a2e9878-6b0e-4328-a3ca-9f828fb105c9" Jan 09 13:31:25 crc kubenswrapper[4919]: I0109 13:31:25.751670 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 13:31:25 crc kubenswrapper[4919]: E0109 13:31:25.751911 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 13:31:25 crc kubenswrapper[4919]: I0109 13:31:25.752562 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 13:31:25 crc kubenswrapper[4919]: E0109 13:31:25.752965 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 13:31:25 crc kubenswrapper[4919]: I0109 13:31:25.767601 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:25 crc kubenswrapper[4919]: I0109 13:31:25.767646 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:25 crc kubenswrapper[4919]: I0109 13:31:25.767659 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:25 crc kubenswrapper[4919]: I0109 13:31:25.767680 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:25 crc kubenswrapper[4919]: I0109 13:31:25.767693 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:25Z","lastTransitionTime":"2026-01-09T13:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:25 crc kubenswrapper[4919]: I0109 13:31:25.871033 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:25 crc kubenswrapper[4919]: I0109 13:31:25.871101 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:25 crc kubenswrapper[4919]: I0109 13:31:25.871119 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:25 crc kubenswrapper[4919]: I0109 13:31:25.871152 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:25 crc kubenswrapper[4919]: I0109 13:31:25.871175 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:25Z","lastTransitionTime":"2026-01-09T13:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:25 crc kubenswrapper[4919]: I0109 13:31:25.974463 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:25 crc kubenswrapper[4919]: I0109 13:31:25.974511 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:25 crc kubenswrapper[4919]: I0109 13:31:25.974523 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:25 crc kubenswrapper[4919]: I0109 13:31:25.974542 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:25 crc kubenswrapper[4919]: I0109 13:31:25.974553 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:25Z","lastTransitionTime":"2026-01-09T13:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:26 crc kubenswrapper[4919]: I0109 13:31:26.078122 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:26 crc kubenswrapper[4919]: I0109 13:31:26.078164 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:26 crc kubenswrapper[4919]: I0109 13:31:26.078175 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:26 crc kubenswrapper[4919]: I0109 13:31:26.078194 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:26 crc kubenswrapper[4919]: I0109 13:31:26.078222 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:26Z","lastTransitionTime":"2026-01-09T13:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:26 crc kubenswrapper[4919]: I0109 13:31:26.181367 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:26 crc kubenswrapper[4919]: I0109 13:31:26.181427 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:26 crc kubenswrapper[4919]: I0109 13:31:26.181442 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:26 crc kubenswrapper[4919]: I0109 13:31:26.181460 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:26 crc kubenswrapper[4919]: I0109 13:31:26.181471 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:26Z","lastTransitionTime":"2026-01-09T13:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:26 crc kubenswrapper[4919]: I0109 13:31:26.284159 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:26 crc kubenswrapper[4919]: I0109 13:31:26.284195 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:26 crc kubenswrapper[4919]: I0109 13:31:26.284220 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:26 crc kubenswrapper[4919]: I0109 13:31:26.284240 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:26 crc kubenswrapper[4919]: I0109 13:31:26.284251 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:26Z","lastTransitionTime":"2026-01-09T13:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:26 crc kubenswrapper[4919]: I0109 13:31:26.387457 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:26 crc kubenswrapper[4919]: I0109 13:31:26.387924 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:26 crc kubenswrapper[4919]: I0109 13:31:26.388084 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:26 crc kubenswrapper[4919]: I0109 13:31:26.388259 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:26 crc kubenswrapper[4919]: I0109 13:31:26.388429 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:26Z","lastTransitionTime":"2026-01-09T13:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:26 crc kubenswrapper[4919]: I0109 13:31:26.491633 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:26 crc kubenswrapper[4919]: I0109 13:31:26.491691 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:26 crc kubenswrapper[4919]: I0109 13:31:26.491701 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:26 crc kubenswrapper[4919]: I0109 13:31:26.491721 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:26 crc kubenswrapper[4919]: I0109 13:31:26.491750 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:26Z","lastTransitionTime":"2026-01-09T13:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:26 crc kubenswrapper[4919]: I0109 13:31:26.594233 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:26 crc kubenswrapper[4919]: I0109 13:31:26.594740 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:26 crc kubenswrapper[4919]: I0109 13:31:26.594751 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:26 crc kubenswrapper[4919]: I0109 13:31:26.594770 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:26 crc kubenswrapper[4919]: I0109 13:31:26.594782 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:26Z","lastTransitionTime":"2026-01-09T13:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:26 crc kubenswrapper[4919]: I0109 13:31:26.697849 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:26 crc kubenswrapper[4919]: I0109 13:31:26.697915 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:26 crc kubenswrapper[4919]: I0109 13:31:26.697934 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:26 crc kubenswrapper[4919]: I0109 13:31:26.697963 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:26 crc kubenswrapper[4919]: I0109 13:31:26.697985 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:26Z","lastTransitionTime":"2026-01-09T13:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:26 crc kubenswrapper[4919]: I0109 13:31:26.751166 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 13:31:26 crc kubenswrapper[4919]: E0109 13:31:26.751456 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 13:31:26 crc kubenswrapper[4919]: I0109 13:31:26.800261 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:26 crc kubenswrapper[4919]: I0109 13:31:26.800317 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:26 crc kubenswrapper[4919]: I0109 13:31:26.800331 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:26 crc kubenswrapper[4919]: I0109 13:31:26.800352 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:26 crc kubenswrapper[4919]: I0109 13:31:26.800369 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:26Z","lastTransitionTime":"2026-01-09T13:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:26 crc kubenswrapper[4919]: I0109 13:31:26.903850 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:26 crc kubenswrapper[4919]: I0109 13:31:26.903944 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:26 crc kubenswrapper[4919]: I0109 13:31:26.903964 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:26 crc kubenswrapper[4919]: I0109 13:31:26.904001 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:26 crc kubenswrapper[4919]: I0109 13:31:26.904019 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:26Z","lastTransitionTime":"2026-01-09T13:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:27 crc kubenswrapper[4919]: I0109 13:31:27.006774 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:27 crc kubenswrapper[4919]: I0109 13:31:27.006828 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:27 crc kubenswrapper[4919]: I0109 13:31:27.006840 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:27 crc kubenswrapper[4919]: I0109 13:31:27.006857 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:27 crc kubenswrapper[4919]: I0109 13:31:27.006868 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:27Z","lastTransitionTime":"2026-01-09T13:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:27 crc kubenswrapper[4919]: I0109 13:31:27.110536 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:27 crc kubenswrapper[4919]: I0109 13:31:27.110699 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:27 crc kubenswrapper[4919]: I0109 13:31:27.110721 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:27 crc kubenswrapper[4919]: I0109 13:31:27.110747 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:27 crc kubenswrapper[4919]: I0109 13:31:27.110764 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:27Z","lastTransitionTime":"2026-01-09T13:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:27 crc kubenswrapper[4919]: I0109 13:31:27.214447 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:27 crc kubenswrapper[4919]: I0109 13:31:27.214521 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:27 crc kubenswrapper[4919]: I0109 13:31:27.214541 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:27 crc kubenswrapper[4919]: I0109 13:31:27.214572 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:27 crc kubenswrapper[4919]: I0109 13:31:27.214592 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:27Z","lastTransitionTime":"2026-01-09T13:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:27 crc kubenswrapper[4919]: I0109 13:31:27.317920 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:27 crc kubenswrapper[4919]: I0109 13:31:27.318037 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:27 crc kubenswrapper[4919]: I0109 13:31:27.318061 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:27 crc kubenswrapper[4919]: I0109 13:31:27.318097 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:27 crc kubenswrapper[4919]: I0109 13:31:27.318122 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:27Z","lastTransitionTime":"2026-01-09T13:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:27 crc kubenswrapper[4919]: I0109 13:31:27.422157 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:27 crc kubenswrapper[4919]: I0109 13:31:27.422238 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:27 crc kubenswrapper[4919]: I0109 13:31:27.422259 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:27 crc kubenswrapper[4919]: I0109 13:31:27.422282 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:27 crc kubenswrapper[4919]: I0109 13:31:27.422299 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:27Z","lastTransitionTime":"2026-01-09T13:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:27 crc kubenswrapper[4919]: I0109 13:31:27.525112 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:27 crc kubenswrapper[4919]: I0109 13:31:27.525160 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:27 crc kubenswrapper[4919]: I0109 13:31:27.525177 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:27 crc kubenswrapper[4919]: I0109 13:31:27.525199 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:27 crc kubenswrapper[4919]: I0109 13:31:27.525232 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:27Z","lastTransitionTime":"2026-01-09T13:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:27 crc kubenswrapper[4919]: I0109 13:31:27.627775 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:27 crc kubenswrapper[4919]: I0109 13:31:27.627815 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:27 crc kubenswrapper[4919]: I0109 13:31:27.627829 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:27 crc kubenswrapper[4919]: I0109 13:31:27.627849 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:27 crc kubenswrapper[4919]: I0109 13:31:27.627866 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:27Z","lastTransitionTime":"2026-01-09T13:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:27 crc kubenswrapper[4919]: I0109 13:31:27.730559 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:27 crc kubenswrapper[4919]: I0109 13:31:27.730600 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:27 crc kubenswrapper[4919]: I0109 13:31:27.730610 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:27 crc kubenswrapper[4919]: I0109 13:31:27.730627 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:27 crc kubenswrapper[4919]: I0109 13:31:27.730637 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:27Z","lastTransitionTime":"2026-01-09T13:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:27 crc kubenswrapper[4919]: I0109 13:31:27.750996 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 13:31:27 crc kubenswrapper[4919]: I0109 13:31:27.751041 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 13:31:27 crc kubenswrapper[4919]: I0109 13:31:27.750996 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xkhdz" Jan 09 13:31:27 crc kubenswrapper[4919]: E0109 13:31:27.751537 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 13:31:27 crc kubenswrapper[4919]: E0109 13:31:27.751644 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 13:31:27 crc kubenswrapper[4919]: E0109 13:31:27.751812 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xkhdz" podUID="7a2e9878-6b0e-4328-a3ca-9f828fb105c9" Jan 09 13:31:27 crc kubenswrapper[4919]: I0109 13:31:27.833565 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:27 crc kubenswrapper[4919]: I0109 13:31:27.833614 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:27 crc kubenswrapper[4919]: I0109 13:31:27.833625 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:27 crc kubenswrapper[4919]: I0109 13:31:27.833644 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:27 crc kubenswrapper[4919]: I0109 13:31:27.833655 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:27Z","lastTransitionTime":"2026-01-09T13:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:27 crc kubenswrapper[4919]: I0109 13:31:27.942333 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:27 crc kubenswrapper[4919]: I0109 13:31:27.942431 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:27 crc kubenswrapper[4919]: I0109 13:31:27.942460 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:27 crc kubenswrapper[4919]: I0109 13:31:27.942503 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:27 crc kubenswrapper[4919]: I0109 13:31:27.942543 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:27Z","lastTransitionTime":"2026-01-09T13:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:27 crc kubenswrapper[4919]: I0109 13:31:27.983522 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:27 crc kubenswrapper[4919]: I0109 13:31:27.983579 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:27 crc kubenswrapper[4919]: I0109 13:31:27.983592 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:27 crc kubenswrapper[4919]: I0109 13:31:27.983614 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:27 crc kubenswrapper[4919]: I0109 13:31:27.983629 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:27Z","lastTransitionTime":"2026-01-09T13:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:27 crc kubenswrapper[4919]: E0109 13:31:27.998777 4919 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a043b745-924a-464c-80aa-f4df877f55bf\\\",\\\"systemUUID\\\":\\\"4cea77be-9aeb-4181-a0b4-b60e5a362fd9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:27Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.003462 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.003522 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
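[annotation, not part of the journal] The entry above surfaces the root cause of the stuck state: the node-status patch is rejected because the node.network-node-identity.openshift.io webhook at 127.0.0.1:9743 presents a certificate that expired on 2025-08-24T17:21:41Z while the node clock reads 2026-01-09, consistent with a CRC VM resumed long after its certificates lapsed (this excerpt only shows the webhook failure directly). The sketch below independently confirms the validity window by dialing the endpoint named in the log; InsecureSkipVerify is deliberate, since we want to inspect an expired certificate rather than trust it:

    // Print the validity window of the certificate served by the webhook
    // endpoint from the log (https://127.0.0.1:9743).
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"time"
    )

    func main() {
    	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
    		InsecureSkipVerify: true, // inspect, don't verify
    	})
    	if err != nil {
    		fmt.Println("dial failed:", err)
    		return
    	}
    	defer conn.Close()

    	leaf := conn.ConnectionState().PeerCertificates[0]
    	fmt.Println("subject:  ", leaf.Subject)
    	fmt.Println("notBefore:", leaf.NotBefore)
    	fmt.Println("notAfter: ", leaf.NotAfter)
    	if time.Now().After(leaf.NotAfter) {
    		fmt.Println("expired; matches the kubelet's x509 error")
    	}
    }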
event="NodeHasNoDiskPressure" Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.003533 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.003550 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.003561 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:28Z","lastTransitionTime":"2026-01-09T13:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:28 crc kubenswrapper[4919]: E0109 13:31:28.025525 4919 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a043b745-924a-464c-80aa-f4df877f55bf\\\",\\\"systemUUID\\\":\\\"4cea77be-9aeb-4181-a0b4-b60e5a362fd9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:28Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.030053 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.030250 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
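[annotation, not part of the journal] The patch payloads in these errors are hard to read because the JSON is a Go-quoted string embedded in another quoted string, which is why every quote surfaces as \\\". Roughly one strconv.Unquote pass per quoting layer (the exact count depends on how many layers the log collector added) recovers plain JSON. A sketch, using a short stand-in for the multi-kilobyte payload:

    // Recover readable JSON from a Go-quoted patch payload; each Unquote
    // strips one level of quoting.
    package main

    import (
    	"fmt"
    	"strconv"
    )

    func main() {
    	// Stand-in for the full "failed to patch status" payload.
    	logged := `"{\"status\":{\"conditions\":[{\"type\":\"Ready\",\"status\":\"False\"}]}}"`
    	jsonPatch, err := strconv.Unquote(logged)
    	if err != nil {
    		fmt.Println("unquote failed:", err)
    		return
    	}
    	fmt.Println(jsonPatch) // {"status":{"conditions":[{"type":"Ready","status":"False"}]}}
    }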
event="NodeHasNoDiskPressure" Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.030375 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.030500 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.030594 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:28Z","lastTransitionTime":"2026-01-09T13:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:28 crc kubenswrapper[4919]: E0109 13:31:28.047641 4919 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a043b745-924a-464c-80aa-f4df877f55bf\\\",\\\"systemUUID\\\":\\\"4cea77be-9aeb-4181-a0b4-b60e5a362fd9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:28Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.053374 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.053412 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
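
The retry above fails exactly like the first attempt, and two more follow below; only the kubelet timestamps advance, so the repeated multi-kilobyte status-patch payloads are collapsed into a bracketed marker. Every failure bottoms out in the same place: the serving certificate of the node.network-node-identity.openshift.io webhook at 127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-09. A minimal sketch for confirming that validity window from the node itself, assuming Python 3 and the third-party cryptography package (neither appears in this log):

#!/usr/bin/env python3
"""Read the webhook's serving certificate and report its validity window.

Diagnostic sketch only: host/port are taken from the Post URL in the
kubelet error above, and the `cryptography` package is an assumed extra.
"""
import ssl
from datetime import datetime, timezone

from cryptography import x509

HOST, PORT = "127.0.0.1", 9743  # network-node-identity webhook endpoint

# Fetch the peer certificate WITHOUT verifying it -- verification is the
# step that is failing, so the chain must not be required to be valid here.
pem = ssl.get_server_certificate((HOST, PORT))
cert = x509.load_pem_x509_certificate(pem.encode("ascii"))

# not_valid_before/not_valid_after are naive UTC datetimes in older
# releases of `cryptography`; pin them to UTC for a well-defined compare.
not_after = cert.not_valid_after.replace(tzinfo=timezone.utc)
now = datetime.now(timezone.utc)

print("subject: ", cert.subject.rfc4514_string())
print("notAfter:", not_after.strftime("%Y-%m-%dT%H:%M:%SZ"))
print("status:  ", "EXPIRED" if now > not_after else "valid",
      "(now", now.strftime("%Y-%m-%dT%H:%M:%SZ") + ")")

If run against this node, the expected notAfter would be 2025-08-24T17:21:41Z, matching the x509 error quoted in each retry.
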
event="NodeHasNoDiskPressure" Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.053426 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.053447 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.053463 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:28Z","lastTransitionTime":"2026-01-09T13:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:28 crc kubenswrapper[4919]: E0109 13:31:28.068203 4919 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a043b745-924a-464c-80aa-f4df877f55bf\\\",\\\"systemUUID\\\":\\\"4cea77be-9aeb-4181-a0b4-b60e5a362fd9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:28Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.073059 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.073197 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
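
From 13:31:28.030 onward the kubelet emits the same five-record cycle (four "Recording event message for node" records plus one "Node became not ready" condition) roughly every 100 ms, alongside one status-patch failure per retry. A flood like this is easier to scan as a frequency table. Below is a small sketch of one way to build it, assuming journalctl output with klog-style headers on stdin; the script name summarize.py is illustrative, not part of this log:

#!/usr/bin/env python3
"""Count identical kubelet (klog) messages so a log flood reads as a summary.

Sketch only: assumes klog headers like
  I0109 13:31:28.030053 4919 kubelet_node_status.go:724] "Recording ..."
embedded in journalctl output on stdin.
"""
import re
import sys
from collections import Counter

# severity letter + MMDD HH:MM:SS.micros + pid + file.go:line] + quoted message
KLOG = re.compile(
    r'([IWE])(\d{4} \d{2}:\d{2}:\d{2})\.\d+\s+\d+\s+([\w./]+:\d+)\]\s+"([^"]*)"'
)

counts = Counter()
for line in sys.stdin:
    for sev, _ts, src, msg in KLOG.findall(line):
        counts[(sev, src, msg)] += 1

# Print the ten most frequent records with their severity and source site.
for (sev, src, msg), n in counts.most_common(10):
    print(f"{n:6d}x {sev} {src} {msg}")

Typical use would be something like: journalctl -u kubelet | python3 summarize.py
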
event="NodeHasNoDiskPressure" Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.073285 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.073359 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.073417 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:28Z","lastTransitionTime":"2026-01-09T13:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:28 crc kubenswrapper[4919]: E0109 13:31:28.088181 4919 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a043b745-924a-464c-80aa-f4df877f55bf\\\",\\\"systemUUID\\\":\\\"4cea77be-9aeb-4181-a0b4-b60e5a362fd9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:28Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:28 crc kubenswrapper[4919]: E0109 13:31:28.088426 4919 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.090039 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.090342 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.090421 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.090486 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.090542 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:28Z","lastTransitionTime":"2026-01-09T13:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.194565 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.194615 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.194628 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.194650 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.194667 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:28Z","lastTransitionTime":"2026-01-09T13:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.298657 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.299312 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.299349 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.299380 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.299402 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:28Z","lastTransitionTime":"2026-01-09T13:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.402598 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.402649 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.402664 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.402689 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.402705 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:28Z","lastTransitionTime":"2026-01-09T13:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.505001 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.505040 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.505049 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.505064 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.505074 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:28Z","lastTransitionTime":"2026-01-09T13:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.608953 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.609000 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.609011 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.609032 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.609045 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:28Z","lastTransitionTime":"2026-01-09T13:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.711954 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.712015 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.712029 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.712058 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.712075 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:28Z","lastTransitionTime":"2026-01-09T13:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.750881 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 13:31:28 crc kubenswrapper[4919]: E0109 13:31:28.751230 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.815409 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.815458 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.815469 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.815484 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.815496 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:28Z","lastTransitionTime":"2026-01-09T13:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.918031 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.918106 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.918132 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.918168 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:31:28 crc kubenswrapper[4919]: I0109 13:31:28.918196 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:28Z","lastTransitionTime":"2026-01-09T13:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:31:29 crc kubenswrapper[4919]: I0109 13:31:29.021684 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:31:29 crc kubenswrapper[4919]: I0109 13:31:29.021730 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:31:29 crc kubenswrapper[4919]: I0109 13:31:29.021743 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:31:29 crc kubenswrapper[4919]: I0109 13:31:29.021762 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:31:29 crc kubenswrapper[4919]: I0109 13:31:29.021776 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:29Z","lastTransitionTime":"2026-01-09T13:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:31:29 crc kubenswrapper[4919]: I0109 13:31:29.124809 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:31:29 crc kubenswrapper[4919]: I0109 13:31:29.124854 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:31:29 crc kubenswrapper[4919]: I0109 13:31:29.124872 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:31:29 crc kubenswrapper[4919]: I0109 13:31:29.124893 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:31:29 crc kubenswrapper[4919]: I0109 13:31:29.124910 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:29Z","lastTransitionTime":"2026-01-09T13:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:31:29 crc kubenswrapper[4919]: I0109 13:31:29.228245 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:31:29 crc kubenswrapper[4919]: I0109 13:31:29.228297 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:31:29 crc kubenswrapper[4919]: I0109 13:31:29.228307 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:31:29 crc kubenswrapper[4919]: I0109 13:31:29.228326 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:31:29 crc kubenswrapper[4919]: I0109 13:31:29.228338 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:29Z","lastTransitionTime":"2026-01-09T13:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:31:29 crc kubenswrapper[4919]: I0109 13:31:29.330683 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:31:29 crc kubenswrapper[4919]: I0109 13:31:29.330716 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:31:29 crc kubenswrapper[4919]: I0109 13:31:29.330727 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:31:29 crc kubenswrapper[4919]: I0109 13:31:29.330741 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:31:29 crc kubenswrapper[4919]: I0109 13:31:29.330773 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:29Z","lastTransitionTime":"2026-01-09T13:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:31:29 crc kubenswrapper[4919]: I0109 13:31:29.434307 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:31:29 crc kubenswrapper[4919]: I0109 13:31:29.434721 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:31:29 crc kubenswrapper[4919]: I0109 13:31:29.434793 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:31:29 crc kubenswrapper[4919]: I0109 13:31:29.434867 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:31:29 crc kubenswrapper[4919]: I0109 13:31:29.434943 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:29Z","lastTransitionTime":"2026-01-09T13:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:31:29 crc kubenswrapper[4919]: I0109 13:31:29.537515 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:31:29 crc kubenswrapper[4919]: I0109 13:31:29.537577 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:31:29 crc kubenswrapper[4919]: I0109 13:31:29.537596 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:31:29 crc kubenswrapper[4919]: I0109 13:31:29.537630 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:31:29 crc kubenswrapper[4919]: I0109 13:31:29.537652 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:29Z","lastTransitionTime":"2026-01-09T13:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:31:29 crc kubenswrapper[4919]: I0109 13:31:29.640744 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:31:29 crc kubenswrapper[4919]: I0109 13:31:29.640835 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:31:29 crc kubenswrapper[4919]: I0109 13:31:29.640855 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:31:29 crc kubenswrapper[4919]: I0109 13:31:29.640887 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:31:29 crc kubenswrapper[4919]: I0109 13:31:29.640907 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:29Z","lastTransitionTime":"2026-01-09T13:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:31:29 crc kubenswrapper[4919]: I0109 13:31:29.744268 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:31:29 crc kubenswrapper[4919]: I0109 13:31:29.744321 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:31:29 crc kubenswrapper[4919]: I0109 13:31:29.744331 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:31:29 crc kubenswrapper[4919]: I0109 13:31:29.744352 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:31:29 crc kubenswrapper[4919]: I0109 13:31:29.744363 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:29Z","lastTransitionTime":"2026-01-09T13:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:31:29 crc kubenswrapper[4919]: I0109 13:31:29.754891 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xkhdz"
Jan 09 13:31:29 crc kubenswrapper[4919]: I0109 13:31:29.754800 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 09 13:31:29 crc kubenswrapper[4919]: I0109 13:31:29.756328 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 09 13:31:29 crc kubenswrapper[4919]: E0109 13:31:29.756737 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 09 13:31:29 crc kubenswrapper[4919]: E0109 13:31:29.756891 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xkhdz" podUID="7a2e9878-6b0e-4328-a3ca-9f828fb105c9"
Jan 09 13:31:29 crc kubenswrapper[4919]: E0109 13:31:29.756952 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 09 13:31:29 crc kubenswrapper[4919]: I0109 13:31:29.757555 4919 scope.go:117] "RemoveContainer" containerID="91ab55c28499edc33ffdd00414823b7319531fbdf5edc0bc8d28cb3ad30265ae"
Jan 09 13:31:29 crc kubenswrapper[4919]: E0109 13:31:29.757825 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-w74hl_openshift-ovn-kubernetes(4a11a9b6-2419-4f04-b35e-ba296d70b705)\"" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" podUID="4a11a9b6-2419-4f04-b35e-ba296d70b705"
Jan 09 13:31:29 crc kubenswrapper[4919]: I0109 13:31:29.847256 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:31:29 crc kubenswrapper[4919]: I0109 13:31:29.847328 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:31:29 crc kubenswrapper[4919]: I0109 13:31:29.847348 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:31:29 crc kubenswrapper[4919]: I0109 13:31:29.847380 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:31:29 crc kubenswrapper[4919]: I0109 13:31:29.847399 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:29Z","lastTransitionTime":"2026-01-09T13:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:31:29 crc kubenswrapper[4919]: I0109 13:31:29.951083 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:31:29 crc kubenswrapper[4919]: I0109 13:31:29.951155 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:31:29 crc kubenswrapper[4919]: I0109 13:31:29.951179 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:31:29 crc kubenswrapper[4919]: I0109 13:31:29.951236 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:31:29 crc kubenswrapper[4919]: I0109 13:31:29.951259 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:29Z","lastTransitionTime":"2026-01-09T13:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.054139 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.054185 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.054197 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.054234 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.054247 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:30Z","lastTransitionTime":"2026-01-09T13:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.157821 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.158328 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.158509 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.158654 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.158773 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:30Z","lastTransitionTime":"2026-01-09T13:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.261790 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.261860 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.261881 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.261910 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.261931 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:30Z","lastTransitionTime":"2026-01-09T13:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.365180 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.365288 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.365345 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.365377 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.365400 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:30Z","lastTransitionTime":"2026-01-09T13:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.468763 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.468893 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.468917 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.468944 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.469002 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:30Z","lastTransitionTime":"2026-01-09T13:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.572287 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.572347 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.572367 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.572393 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.572412 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:30Z","lastTransitionTime":"2026-01-09T13:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.675407 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.675767 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.675978 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.676185 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.676418 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:30Z","lastTransitionTime":"2026-01-09T13:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.751477 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 09 13:31:30 crc kubenswrapper[4919]: E0109 13:31:30.751799 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.775868 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a055b487-5d63-4265-ac12-735612354e73\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903fbe8270e684937c51c54aeaaaaf7cfed5e77f6d76d5874210ffcf608a9afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3759051eb43a6806b9525498f376abc060e88ba0d7bc9274e84eaabfdaefd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8db1dada305e10f10cb4fe92cfaa3fc88e2e2ec3338fa31efc240af0888f60c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b3f68aa173942f9d95a3832234e0bbd5ccacf63cd73b8d110708ebd0978a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d7857918228a0905bb39cd161f6ba21561cf49ce7980a1835e5b3cc68cef3c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T13:30:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0109 13:30:33.398307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 13:30:33.400005 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4019686595/tls.crt::/tmp/serving-cert-4019686595/tls.key\\\\\\\"\\\\nI0109 13:30:43.320927 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 13:30:43.328283 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 13:30:43.328361 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 13:30:43.328421 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 13:30:43.328435 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 13:30:43.338653 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 13:30:43.338748 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 13:30:43.338794 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 13:30:43.338801 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 13:30:43.338808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 13:30:43.339373 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 13:30:43.343716 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2d0c02e83be85aa5925b8f607809b6bbf3db3188f4e165ca42eb04fe694da6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:30Z is after 2025-08-24T17:21:41Z"
Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.781664 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.781722 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.781741 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.781768 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.781788 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:30Z","lastTransitionTime":"2026-01-09T13:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.795495 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9797b243-6d0f-4f8b-8b3d-b92ac439e3bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d15e612b4abcc61c356602fa521bd156a5e2f5b1e89bbf48b2bceac8a06fbca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d24ffabc3436ac75e2611506f1d4d40faed59e4fa4c618523275331408bb219d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae0a71cfd94d80d04efad2c5671e1a6422ee373da4fc7ab38e36198e3fcad96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5ef78571904b7b98f20f7eb25e83def8b64fc7e86e3c376ee2c6a00334e8667\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5ef78571904b7b98f20f7eb25e83def8b64fc7e86e3c376ee2c6a00334e8667\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:30Z is after 2025-08-24T17:21:41Z"
Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.812362 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c494f87fb3dc8455ed5de9b4073f1b69d9d96c3b2a4d260cb5d57b9df1e825e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:30Z is after 2025-08-24T17:21:41Z"
Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.829683 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6730e2d4c2375d62687a3919410fd7440dc63ec931c008941ab1882e625c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b529d70339efbd46e30e992ceb5afe2db97bdde02f50798c9eb61dfa23d7f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9m5lv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:30Z is after 2025-08-24T17:21:41Z"
Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.849666 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgw8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3890ccf2af2b3f8c6648e7b36c716d7b9ef45d6821c5c7162c455641399184a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srz24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgw8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:30Z is after 2025-08-24T17:21:41Z"
Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.867639 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xkhdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a2e9878-6b0e-4328-a3ca-9f828fb105c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:31:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xkhdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:30Z is after 2025-08-24T17:21:41Z"
Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.884470 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa212ada-bfb5-4aeb-94bb-acf1f0afe319\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a58a444435270a569f4e25649cbfb81bb34700db968dcba7d85b9fdea9006bc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://210c615c5bcdaca954329d575b66f271921f54cd58ff6d75a90d5fc64bd11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dbb1933e5d51eec9dc5963fca537d46986cb88929f55683de676da7f0636133\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6694df4fa2976e9e741fe48d2f16c0b63b29ba80500e4238fc0ff6d6dad53bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:30Z is after 2025-08-24T17:21:41Z"
Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.885302 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.885337 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.885349 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.885373 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.885384 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:30Z","lastTransitionTime":"2026-01-09T13:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.905057 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:30Z is after 2025-08-24T17:21:41Z"
Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.922089 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f66189f627ddedba6b2bcb45f145ef53485a9b0b7401ca5fa09363145fce81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c247b3e6a38c79833e097898da1fe92f3dabc7dcd7b71a618506f054f4fd9c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:30Z is after 2025-08-24T17:21:41Z"
Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.945896 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a11a9b6-2419-4f04-b35e-ba296d70b705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f97e00dbff0bc7856400366616b2dfea1022f2dc151e8f1d7cbc5d377639b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eecea89afe25c537de66df075217db267c79f45aecda1f91183fad854aa80c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec868806819a987b7e96dbe674677a084fd3c8bbb3bdae6dd584d62df43b78f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac368c1c0799b3fe1830e01a310247ed45949ab217cf83d98b487a12e482157e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d6b21d85be97a07f83c996deca344aba3e2bd0b249d1ce02f15db105649bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15885cccea72f90cf3317dca9a65041611aa4b8de069779e15d43ad39112f1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91ab55c28499edc33ffdd00414823b7319531fbdf5edc0bc8d28cb3ad30265ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91ab55c28499edc33ffdd00414823b7319531fbdf5edc0bc8d28cb3ad30265ae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T13:31:12Z\\\",\\\"message\\\":\\\"\\\\nI0109 13:31:11.711533 6593 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0109 13:31:11.711539 6593 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0109 13:31:11.711614 6593 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0109 13:31:11.712633 6593 handler.go:208] Removed *v1.Node event handler 2\\\\nI0109 13:31:11.712650 6593 handler.go:208] Removed *v1.Node event handler 7\\\\nI0109 13:31:11.712661 6593 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0109 13:31:11.712738 6593 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0109 13:31:11.712799 6593 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0109 13:31:11.712866 6593 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0109 13:31:11.712926 6593 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0109 13:31:11.712950 6593 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0109 13:31:11.712951 6593 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0109 13:31:11.712930 6593 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0109 13:31:11.712998 6593 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0109 13:31:11.713041 6593 factory.go:656] Stopping watch factory\\\\nI0109 13:31:11.713073 6593 ovnkube.go:599] Stopped ovnkube\\\\nI0109 13\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:31:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-w74hl_openshift-ovn-kubernetes(4a11a9b6-2419-4f04-b35e-ba296d70b705)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1618a6491a924e7cd28340bea333585a7c7a634ca063183f21574ed23df002cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w74hl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:30Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.958440 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bzs4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd1e83cb-5a48-4331-b403-d7a07e8aa67f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://389ab6b02eef188a2713e27d93c70a64a1ab4ebcfe58634cf5266197b0375bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kd6n5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bzs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:30Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.970402 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9s49l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a361336-2125-49a9-8332-eb66286dcdb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://108cc929d3e1674b5cc9341c92e9d4f5142fc0d87212666efba8890341e8adc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-slm6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd519645b9635f304f7af4e5e832eff6ae2964b35ed15d918bae7b85b51c1de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\
\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-slm6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9s49l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:30Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.981194 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:30Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.988029 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.988068 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.988079 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.988098 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.988108 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:30Z","lastTransitionTime":"2026-01-09T13:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:30 crc kubenswrapper[4919]: I0109 13:31:30.992878 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d6c98ae6aa3041e47bac4782d73902dc39137b2fc8550efce479bfb9cdf37b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:30Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:31 crc kubenswrapper[4919]: I0109 13:31:31.007997 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:31Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:31 crc kubenswrapper[4919]: I0109 13:31:31.019535 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9z7cc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1115c0ba-16d5-4e81-a4b4-07ba7f360825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6522bc54695ade8637b22dbf9074c82430ce4c8deb8d5d9631933a1d49d5b4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkvnj\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9z7cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:31Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:31 crc kubenswrapper[4919]: I0109 13:31:31.034741 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-97zdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21befbc8-9e98-4557-89af-a116cc8c484c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e336d165af12149ba05542f8cebe8a16c6fd66d3317d27c880b8969e7666691d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":
\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://458aeaca27fca436a56c5bb1f4180fc11cc5d27c53d33cd5b00f17e0e6f99322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458aeaca27fca436a56c5bb1f4180fc11cc5d27c53d33cd5b00f17e0e6f99322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3ea2572c59e697886d3666518ed78dcbe1bde1dd6fc6255e3846d1b34f26cdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ea2572c59e697886d3666518ed78dcbe1bde1dd6fc6255e3846d1b34f26cdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11
\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-97zdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:31Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:31 crc kubenswrapper[4919]: I0109 13:31:31.090404 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:31 crc kubenswrapper[4919]: I0109 13:31:31.090501 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:31 crc kubenswrapper[4919]: I0109 13:31:31.090534 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:31 crc kubenswrapper[4919]: I0109 13:31:31.090572 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:31 crc kubenswrapper[4919]: I0109 13:31:31.090599 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:31Z","lastTransitionTime":"2026-01-09T13:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:31 crc kubenswrapper[4919]: I0109 13:31:31.194605 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:31 crc kubenswrapper[4919]: I0109 13:31:31.194672 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:31 crc kubenswrapper[4919]: I0109 13:31:31.194692 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:31 crc kubenswrapper[4919]: I0109 13:31:31.194723 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:31 crc kubenswrapper[4919]: I0109 13:31:31.194744 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:31Z","lastTransitionTime":"2026-01-09T13:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:31 crc kubenswrapper[4919]: I0109 13:31:31.298168 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:31 crc kubenswrapper[4919]: I0109 13:31:31.298548 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:31 crc kubenswrapper[4919]: I0109 13:31:31.298624 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:31 crc kubenswrapper[4919]: I0109 13:31:31.298710 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:31 crc kubenswrapper[4919]: I0109 13:31:31.298816 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:31Z","lastTransitionTime":"2026-01-09T13:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:31 crc kubenswrapper[4919]: I0109 13:31:31.402590 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:31 crc kubenswrapper[4919]: I0109 13:31:31.402640 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:31 crc kubenswrapper[4919]: I0109 13:31:31.402652 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:31 crc kubenswrapper[4919]: I0109 13:31:31.402670 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:31 crc kubenswrapper[4919]: I0109 13:31:31.402682 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:31Z","lastTransitionTime":"2026-01-09T13:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:31 crc kubenswrapper[4919]: I0109 13:31:31.506499 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:31 crc kubenswrapper[4919]: I0109 13:31:31.506539 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:31 crc kubenswrapper[4919]: I0109 13:31:31.506549 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:31 crc kubenswrapper[4919]: I0109 13:31:31.506567 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:31 crc kubenswrapper[4919]: I0109 13:31:31.506578 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:31Z","lastTransitionTime":"2026-01-09T13:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:31 crc kubenswrapper[4919]: I0109 13:31:31.609842 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:31 crc kubenswrapper[4919]: I0109 13:31:31.609937 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:31 crc kubenswrapper[4919]: I0109 13:31:31.609950 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:31 crc kubenswrapper[4919]: I0109 13:31:31.609970 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:31 crc kubenswrapper[4919]: I0109 13:31:31.609984 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:31Z","lastTransitionTime":"2026-01-09T13:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:31 crc kubenswrapper[4919]: I0109 13:31:31.713879 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:31 crc kubenswrapper[4919]: I0109 13:31:31.714448 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:31 crc kubenswrapper[4919]: I0109 13:31:31.714651 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:31 crc kubenswrapper[4919]: I0109 13:31:31.714812 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:31 crc kubenswrapper[4919]: I0109 13:31:31.714950 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:31Z","lastTransitionTime":"2026-01-09T13:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:31 crc kubenswrapper[4919]: I0109 13:31:31.751307 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xkhdz" Jan 09 13:31:31 crc kubenswrapper[4919]: I0109 13:31:31.751601 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 13:31:31 crc kubenswrapper[4919]: I0109 13:31:31.751706 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 13:31:31 crc kubenswrapper[4919]: E0109 13:31:31.752297 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xkhdz" podUID="7a2e9878-6b0e-4328-a3ca-9f828fb105c9" Jan 09 13:31:31 crc kubenswrapper[4919]: E0109 13:31:31.752440 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 13:31:31 crc kubenswrapper[4919]: E0109 13:31:31.752587 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 13:31:31 crc kubenswrapper[4919]: I0109 13:31:31.818766 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:31 crc kubenswrapper[4919]: I0109 13:31:31.818829 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:31 crc kubenswrapper[4919]: I0109 13:31:31.818842 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:31 crc kubenswrapper[4919]: I0109 13:31:31.818860 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:31 crc kubenswrapper[4919]: I0109 13:31:31.818872 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:31Z","lastTransitionTime":"2026-01-09T13:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:31 crc kubenswrapper[4919]: I0109 13:31:31.921757 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:31 crc kubenswrapper[4919]: I0109 13:31:31.922498 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:31 crc kubenswrapper[4919]: I0109 13:31:31.922581 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:31 crc kubenswrapper[4919]: I0109 13:31:31.922700 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:31 crc kubenswrapper[4919]: I0109 13:31:31.922806 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:31Z","lastTransitionTime":"2026-01-09T13:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.025573 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.025626 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.025638 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.025657 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.025672 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:32Z","lastTransitionTime":"2026-01-09T13:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.046785 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7a2e9878-6b0e-4328-a3ca-9f828fb105c9-metrics-certs\") pod \"network-metrics-daemon-xkhdz\" (UID: \"7a2e9878-6b0e-4328-a3ca-9f828fb105c9\") " pod="openshift-multus/network-metrics-daemon-xkhdz" Jan 09 13:31:32 crc kubenswrapper[4919]: E0109 13:31:32.047036 4919 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 09 13:31:32 crc kubenswrapper[4919]: E0109 13:31:32.047147 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7a2e9878-6b0e-4328-a3ca-9f828fb105c9-metrics-certs podName:7a2e9878-6b0e-4328-a3ca-9f828fb105c9 nodeName:}" failed. No retries permitted until 2026-01-09 13:32:04.047123215 +0000 UTC m=+103.594962665 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7a2e9878-6b0e-4328-a3ca-9f828fb105c9-metrics-certs") pod "network-metrics-daemon-xkhdz" (UID: "7a2e9878-6b0e-4328-a3ca-9f828fb105c9") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.128627 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.128664 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.128676 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.128700 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.128709 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:32Z","lastTransitionTime":"2026-01-09T13:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.232051 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.232118 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.232138 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.232166 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.232185 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:32Z","lastTransitionTime":"2026-01-09T13:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.261087 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kgw8v_11e19b4a-0888-460f-bf97-5dd0ddda6e8c/kube-multus/0.log" Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.261151 4919 generic.go:334] "Generic (PLEG): container finished" podID="11e19b4a-0888-460f-bf97-5dd0ddda6e8c" containerID="3890ccf2af2b3f8c6648e7b36c716d7b9ef45d6821c5c7162c455641399184a6" exitCode=1 Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.261198 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kgw8v" event={"ID":"11e19b4a-0888-460f-bf97-5dd0ddda6e8c","Type":"ContainerDied","Data":"3890ccf2af2b3f8c6648e7b36c716d7b9ef45d6821c5c7162c455641399184a6"} Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.261727 4919 scope.go:117] "RemoveContainer" containerID="3890ccf2af2b3f8c6648e7b36c716d7b9ef45d6821c5c7162c455641399184a6" Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.279846 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d6c98ae6aa3041e47bac4782d73902dc39137b2fc8550efce479bfb9cdf37b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:32Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.298613 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:32Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.308845 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9z7cc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1115c0ba-16d5-4e81-a4b4-07ba7f360825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6522bc54695ade8637b22dbf9074c82430ce4c8deb8d5d9631933a1d49d5b4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkvnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9z7cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:32Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.330353 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-97zdz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21befbc8-9e98-4557-89af-a116cc8c484c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e336d165af12149ba05542f8cebe8a16c6fd66d3317d27c880b8969e7666691d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://458aeaca27fca436a56c5bb1f4180fc11cc5d27c53d33cd5b00f17e0e6f99322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458aeaca27fca436a56c5bb1f4180fc11cc5d27c53d33cd5b00f17e0e6f99322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3ea2572c59e697886d3666518ed78dcbe1bde1dd6fc6255e3846d1b34f26cdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ea2572c59e697886d3666518ed78dcbe1bde1dd6fc6255e3846d1b34f26cdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-97zdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:32Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.335310 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.335378 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:32 crc 
kubenswrapper[4919]: I0109 13:31:32.335391 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.335415 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.335428 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:32Z","lastTransitionTime":"2026-01-09T13:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.342461 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xkhdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a2e9878-6b0e-4328-a3ca-9f828fb105c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:31:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xkhdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:32Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.364930 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a055b487-5d63-4265-ac12-735612354e73\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903fbe8270e684937c51c54aeaaaaf7cfed5e77f6d76d5874210ffcf608a9afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3759051eb43a6806b9525498f376abc060e88ba0d7bc9274e84eaabfdaefd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8db1dada305e10f10cb4fe92cfaa3fc88e2e2ec3338fa31efc240af0888f60c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b3f68aa173942f9d95a3832234e0bbd5ccacf63cd73b8d110708ebd0978a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d7857918228a0905bb39cd161f6ba21561cf49ce7980a1835e5b3cc68cef3c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T13:30:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0109 13:30:33.398307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 13:30:33.400005 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4019686595/tls.crt::/tmp/serving-cert-4019686595/tls.key\\\\\\\"\\\\nI0109 13:30:43.320927 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 13:30:43.328283 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 13:30:43.328361 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 13:30:43.328421 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 13:30:43.328435 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 13:30:43.338653 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 13:30:43.338748 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 13:30:43.338794 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 13:30:43.338801 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 13:30:43.338808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 13:30:43.339373 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 13:30:43.343716 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2d0c02e83be85aa5925b8f607809b6bbf3db3188f4e165ca42eb04fe694da6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:32Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.381360 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9797b243-6d0f-4f8b-8b3d-b92ac439e3bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d15e612b4abcc61c356602fa521bd156a5e2f5b1e89bbf48b2bceac8a06fbca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d24ffabc3436ac75e2611506f1d4d40faed59e4fa4c618523275331408bb219d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae0a71cfd94d80d04efad2c5671e1a6422ee373da4fc7ab38e36198e3fcad96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5ef78571904b7b98f20f7eb25e83def8b64fc7e86e3c376ee2c6a00334e8667\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5ef78571904b7b98f20f7eb25e83def8b64fc7e86e3c376ee2c6a00334e8667\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:32Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.396345 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c494f87fb3dc8455ed5de9b4073f1b69d9d96c3b2a4d260cb5d57b9df1e825e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:32Z is after 2025-08-24T17:21:41Z" Jan 09 
13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.410197 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6730e2d4c2375d62687a3919410fd7440dc63ec931c008941ab1882e625c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b529d70339efbd46e30e992ceb5afe2db97bdde02f50798c9eb61dfa23d7f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9m5lv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-09T13:31:32Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.427298 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgw8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3890ccf2af2b3f8c6648e7b36c716d7b9ef45d6821c5c7162c455641399184a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3890ccf2af2b3f8c6648e7b36c716d7b9ef45d6821c5c7162c455641399184a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T13:31:32Z\\\",\\\"message\\\":\\\"2026-01-09T13:30:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6c242d58-4d9b-4293-8565-97eb1a2c9c17\\\\n2026-01-09T13:30:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6c242d58-4d9b-4293-8565-97eb1a2c9c17 to /host/opt/cni/bin/\\\\n2026-01-09T13:30:47Z [verbose] multus-daemon started\\\\n2026-01-09T13:30:47Z [verbose] Readiness Indicator file check\\\\n2026-01-09T13:31:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srz24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgw8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:32Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.438021 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.438090 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.438108 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.438134 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.438153 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:32Z","lastTransitionTime":"2026-01-09T13:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.446074 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9s49l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a361336-2125-49a9-8332-eb66286dcdb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://108cc929d3e1674b5cc9341c92e9d4f5142fc0d87212666efba8890341e8adc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-slm6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd519645b9635f304f7af4e5e832eff6ae2964b35ed15d918bae7b85b51c1de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-slm6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9s49l\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:32Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.466707 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa212ada-bfb5-4aeb-94bb-acf1f0afe319\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a58a444435270a569f4e25649cbfb81bb34700db968dcba7d85b9fdea9006bc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://210c615c5bcdaca954329d575b66f271921f54cd58ff6d75a90d5fc64bd11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dbb1933e5d51eec9dc5963fca537d46986cb88929f55683de676da7f0636133\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6694df4fa2976e9e741fe48d2f16c0b63b29ba80500e4238fc0ff6d6dad53bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:32Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.512020 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:32Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.532496 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f66189f627ddedba6b2bcb45f145ef53485a9b0b7401ca5fa09363145fce81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c247b3e6a38c79833e097898da1fe92f3dabc7dcd7b71a618506f054f4fd9c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:32Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.540632 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.540969 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.541060 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.541191 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.541307 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:32Z","lastTransitionTime":"2026-01-09T13:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.568304 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a11a9b6-2419-4f04-b35e-ba296d70b705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f97e00dbff0bc7856400366616b2dfea1022f2dc151e8f1d7cbc5d377639b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eecea89afe25c537de66df075217db267c79f45aecda1f91183fad854aa80c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://ec868806819a987b7e96dbe674677a084fd3c8bbb3bdae6dd584d62df43b78f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac368c1c0799b3fe1830e01a310247ed45949ab217cf83d98b487a12e482157e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d6b21d85be97a07f83c996deca344aba3e2bd0b249d1ce02f15db105649bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15885cccea72f90cf3317dca9a65041611aa4b8de069779e15d43ad39112f1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91ab55c28499edc33ffdd00414823b7319531fbdf5edc0bc8d28cb3ad30265ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91ab55c28499edc33ffdd00414823b7319531fbdf5edc0bc8d28cb3ad30265ae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T13:31:12Z\\\",\\\"message\\\":\\\"\\\\nI0109 13:31:11.711533 6593 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0109 13:31:11.711539 6593 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0109 13:31:11.711614 6593 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0109 13:31:11.712633 6593 handler.go:208] Removed *v1.Node event handler 2\\\\nI0109 13:31:11.712650 6593 handler.go:208] Removed *v1.Node event handler 7\\\\nI0109 13:31:11.712661 6593 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0109 13:31:11.712738 6593 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0109 13:31:11.712799 6593 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0109 13:31:11.712866 6593 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0109 13:31:11.712926 6593 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0109 13:31:11.712950 6593 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0109 13:31:11.712951 6593 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0109 13:31:11.712930 6593 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0109 13:31:11.712998 6593 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0109 13:31:11.713041 6593 factory.go:656] Stopping watch factory\\\\nI0109 13:31:11.713073 6593 ovnkube.go:599] Stopped ovnkube\\\\nI0109 13\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:31:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-w74hl_openshift-ovn-kubernetes(4a11a9b6-2419-4f04-b35e-ba296d70b705)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1618a6491a924e7cd28340bea333585a7c7a634ca063183f21574ed23df002cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w74hl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:32Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.581063 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bzs4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd1e83cb-5a48-4331-b403-d7a07e8aa67f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://389ab6b02eef188a2713e27d93c70a64a1ab4ebcfe58634cf5266197b0375bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kd6n5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bzs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:32Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.600989 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:32Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.644760 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.645044 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.645105 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.645179 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.645255 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:32Z","lastTransitionTime":"2026-01-09T13:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.748054 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.748097 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.748107 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.748131 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.748148 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:32Z","lastTransitionTime":"2026-01-09T13:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.751519 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 13:31:32 crc kubenswrapper[4919]: E0109 13:31:32.752175 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.851896 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.851957 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.851977 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.852005 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.852024 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:32Z","lastTransitionTime":"2026-01-09T13:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.955382 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.955436 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.955446 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.955537 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:32 crc kubenswrapper[4919]: I0109 13:31:32.955558 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:32Z","lastTransitionTime":"2026-01-09T13:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.059498 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.059573 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.059773 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.059805 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.059840 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:33Z","lastTransitionTime":"2026-01-09T13:31:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.181074 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.181117 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.181130 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.181149 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.181174 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:33Z","lastTransitionTime":"2026-01-09T13:31:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.267238 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kgw8v_11e19b4a-0888-460f-bf97-5dd0ddda6e8c/kube-multus/0.log" Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.267309 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kgw8v" event={"ID":"11e19b4a-0888-460f-bf97-5dd0ddda6e8c","Type":"ContainerStarted","Data":"6dd4aa1459db1d095dd8a4d538ce3dc77e934eaaa815c7b700de8ee6ae8cc25a"} Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.283278 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.283313 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.283327 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.283346 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.283360 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:33Z","lastTransitionTime":"2026-01-09T13:31:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.292357 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d6c98ae6aa3041e47bac4782d73902dc39137b2fc8550efce479bfb9cdf37b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:33Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.310862 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:33Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.327804 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9z7cc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1115c0ba-16d5-4e81-a4b4-07ba7f360825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6522bc54695ade8637b22dbf9074c82430ce4c8deb8d5d9631933a1d49d5b4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkvnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9z7cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:33Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.351133 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-97zdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21befbc8-9e98-4557-89af-a116cc8c484c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e336d165af12149ba05542f8cebe8a16c6fd66d3317d27c880b8969e7666691d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://458aeaca27fca436a56c5bb1f4180fc11cc5d27c53d33cd5b00f17e0e6f99322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458aeaca27fca436a56c5bb1f4180fc11cc5d27c53d33cd5b00f17e0e6f99322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3ea2572c59e697886d3666518ed78dcbe1bde1dd6fc6255e3846d1b34f26cdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ea2572c59e697886d3666518ed78dcbe1bde1dd6fc6255e3846d1b34f26cdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-97zdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:33Z is after 
2025-08-24T17:21:41Z" Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.368048 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xkhdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a2e9878-6b0e-4328-a3ca-9f828fb105c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:31:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xkhdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:33Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.386978 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.387020 4919 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.387033 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.387053 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.387066 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:33Z","lastTransitionTime":"2026-01-09T13:31:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.393376 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a055b487-5d63-4265-ac12-735612354e73\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903fbe8270e684937c51c54aeaaaaf7cfed5e77f6d76d5874210ffcf608a9afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3759051eb43a6806b9525498f376abc060e88ba0d7bc9274e84eaabfdaefd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\
"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8db1dada305e10f10cb4fe92cfaa3fc88e2e2ec3338fa31efc240af0888f60c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b3f68aa173942f9d95a3832234e0bbd5ccacf63cd73b8d110708ebd0978a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d7857918228a0905bb39cd161f6ba21561cf49ce7980a1835e5b3cc68cef3c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T13:30:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0109 13:30:33.398307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 13:30:33.400005 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4019686595/tls.crt::/tmp/serving-cert-4019686595/tls.key\\\\\\\"\\\\nI0109 13:30:43.320927 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 13:30:43.328283 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 13:30:43.328361 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 13:30:43.328421 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 13:30:43.328435 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 13:30:43.338653 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 13:30:43.338748 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 13:30:43.338794 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 13:30:43.338801 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 13:30:43.338808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 13:30:43.339373 1 
genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 13:30:43.343716 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2d0c02e83be85aa5925b8f607809b6bbf3db3188f4e165ca42eb04fe694da6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:33Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.409783 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9797b243-6d0f-4f8b-8b3d-b92ac439e3bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d15e612b4abcc61c356602fa521bd156a5e2f5b1e89bbf48b2bceac8a06fbca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d24ffabc3436ac75e2611506f1d4d40faed59e4fa4c618523275331408bb219d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae0a71cfd94d80d04efad2c5671e1a6422ee373da4fc7ab38e36198e3fcad96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5ef78571904b7b98f20f7eb25e83def8b64fc7e86e3c376ee2c6a00334e8667\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5ef78571904b7b98f20f7eb25e83def8b64fc7e86e3c376ee2c6a00334e8667\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:33Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.426784 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c494f87fb3dc8455ed5de9b4073f1b69d9d96c3b2a4d260cb5d57b9df1e825e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:33Z is after 2025-08-24T17:21:41Z" Jan 09 
13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.444124 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6730e2d4c2375d62687a3919410fd7440dc63ec931c008941ab1882e625c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b529d70339efbd46e30e992ceb5afe2db97bdde02f50798c9eb61dfa23d7f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9m5lv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-09T13:31:33Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.463117 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgw8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dd4aa1459db1d095dd8a4d538ce3dc77e934eaaa815c7b700de8ee6ae8cc25a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3890ccf2af2b3f8c6648e7b36c716d7b9ef45d6821c5c7162c455641399184a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T13:31:32Z\\\",\\\"message\\\":\\\"2026-01-09T13:30:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6c242d58-4d9b-4293-8565-97eb1a2c9c17\\\\n2026-01-09T13:30:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6c242d58-4d9b-4293-8565-97eb1a2c9c17 to /host/opt/cni/bin/\\\\n2026-01-09T13:30:47Z [verbose] multus-daemon started\\\\n2026-01-09T13:30:47Z [verbose] Readiness Indicator file check\\\\n2026-01-09T13:31:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:31:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srz24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgw8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:33Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.483341 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9s49l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a361336-2125-49a9-8332-eb66286dcdb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://108cc929d3e1674b5cc9341c92e9d4f5142fc0d87212666efba8890341e8adc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-slm6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd519645b9635f304f7af4e5e832eff6ae2964b35ed15d918bae7b85b51c1de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-slm6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9s49l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:33Z is after 2025-08-24T17:21:41Z" Jan 09 
13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.490455 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.490662 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.490849 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.490988 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.491121 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:33Z","lastTransitionTime":"2026-01-09T13:31:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.501017 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa212ada-bfb5-4aeb-94bb-acf1f0afe319\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a58a444435270a569f4e25649cbfb81bb34700db968dcba7d85b9fdea9006bc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://210c615c5bcdaca954329d575b66f271921f54cd58ff6d75a90d5fc64bd11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dbb1933e5d51eec9dc5963fca537d46986cb88929f55683de676da7f0636133\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6694df4fa2976e9e741fe48d2f16c0b63b29ba80500e4238fc0ff6d6dad53bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:33Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.518518 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:33Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.538131 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f66189f627ddedba6b2bcb45f145ef53485a9b0b7401ca5fa09363145fce81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c247b3e6a38c79833e097898da1fe92f3dabc7dcd7b71a618506f054f4fd9c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:33Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.560146 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a11a9b6-2419-4f04-b35e-ba296d70b705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f97e00dbff0bc7856400366616b2dfea1022f2dc151e8f1d7cbc5d377639b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eecea89afe25c537de66df075217db267c79f45aecda1f91183fad854aa80c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec868806819a987b7e96dbe674677a084fd3c8bbb3bdae6dd584d62df43b78f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac368c1c0799b3fe1830e01a310247ed45949ab217cf83d98b487a12e482157e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d6b21d85be97a07f83c996deca344aba3e2bd0b249d1ce02f15db105649bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15885cccea72f90cf3317dca9a65041611aa4b8de069779e15d43ad39112f1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91ab55c28499edc33ffdd00414823b7319531fbdf5edc0bc8d28cb3ad30265ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91ab55c28499edc33ffdd00414823b7319531fbdf5edc0bc8d28cb3ad30265ae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T13:31:12Z\\\",\\\"message\\\":\\\"\\\\nI0109 13:31:11.711533 6593 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0109 13:31:11.711539 6593 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0109 13:31:11.711614 6593 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0109 13:31:11.712633 6593 handler.go:208] Removed *v1.Node event handler 2\\\\nI0109 13:31:11.712650 6593 handler.go:208] Removed *v1.Node event handler 7\\\\nI0109 13:31:11.712661 6593 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0109 13:31:11.712738 6593 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0109 13:31:11.712799 6593 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0109 13:31:11.712866 6593 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0109 13:31:11.712926 6593 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0109 13:31:11.712950 6593 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0109 13:31:11.712951 6593 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0109 13:31:11.712930 6593 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0109 13:31:11.712998 6593 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0109 13:31:11.713041 6593 factory.go:656] Stopping watch factory\\\\nI0109 13:31:11.713073 6593 ovnkube.go:599] Stopped ovnkube\\\\nI0109 13\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:31:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-w74hl_openshift-ovn-kubernetes(4a11a9b6-2419-4f04-b35e-ba296d70b705)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1618a6491a924e7cd28340bea333585a7c7a634ca063183f21574ed23df002cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w74hl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:33Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.572033 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bzs4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd1e83cb-5a48-4331-b403-d7a07e8aa67f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://389ab6b02eef188a2713e27d93c70a64a1ab4ebcfe58634cf5266197b0375bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kd6n5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bzs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:33Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.589243 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:33Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.593509 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.593581 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.593596 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.593644 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.593660 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:33Z","lastTransitionTime":"2026-01-09T13:31:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.697469 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.697543 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.697568 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.697605 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.697629 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:33Z","lastTransitionTime":"2026-01-09T13:31:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.751296 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.751410 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xkhdz" Jan 09 13:31:33 crc kubenswrapper[4919]: E0109 13:31:33.751526 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 13:31:33 crc kubenswrapper[4919]: E0109 13:31:33.751695 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xkhdz" podUID="7a2e9878-6b0e-4328-a3ca-9f828fb105c9" Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.751723 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 13:31:33 crc kubenswrapper[4919]: E0109 13:31:33.752037 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.800786 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.800863 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.800891 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.800941 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.800968 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:33Z","lastTransitionTime":"2026-01-09T13:31:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.904664 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.904732 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.904751 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.904779 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:33 crc kubenswrapper[4919]: I0109 13:31:33.904797 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:33Z","lastTransitionTime":"2026-01-09T13:31:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:34 crc kubenswrapper[4919]: I0109 13:31:34.007247 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:34 crc kubenswrapper[4919]: I0109 13:31:34.007315 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:34 crc kubenswrapper[4919]: I0109 13:31:34.007333 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:34 crc kubenswrapper[4919]: I0109 13:31:34.007363 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:34 crc kubenswrapper[4919]: I0109 13:31:34.007382 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:34Z","lastTransitionTime":"2026-01-09T13:31:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:34 crc kubenswrapper[4919]: I0109 13:31:34.110767 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:34 crc kubenswrapper[4919]: I0109 13:31:34.110833 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:34 crc kubenswrapper[4919]: I0109 13:31:34.110847 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:34 crc kubenswrapper[4919]: I0109 13:31:34.110869 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:34 crc kubenswrapper[4919]: I0109 13:31:34.110892 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:34Z","lastTransitionTime":"2026-01-09T13:31:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:34 crc kubenswrapper[4919]: I0109 13:31:34.214026 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:34 crc kubenswrapper[4919]: I0109 13:31:34.214091 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:34 crc kubenswrapper[4919]: I0109 13:31:34.214105 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:34 crc kubenswrapper[4919]: I0109 13:31:34.214132 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:34 crc kubenswrapper[4919]: I0109 13:31:34.214147 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:34Z","lastTransitionTime":"2026-01-09T13:31:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:34 crc kubenswrapper[4919]: I0109 13:31:34.317997 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:34 crc kubenswrapper[4919]: I0109 13:31:34.318063 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:34 crc kubenswrapper[4919]: I0109 13:31:34.318082 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:34 crc kubenswrapper[4919]: I0109 13:31:34.318112 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:34 crc kubenswrapper[4919]: I0109 13:31:34.318132 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:34Z","lastTransitionTime":"2026-01-09T13:31:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:34 crc kubenswrapper[4919]: I0109 13:31:34.422074 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:34 crc kubenswrapper[4919]: I0109 13:31:34.422165 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:34 crc kubenswrapper[4919]: I0109 13:31:34.422188 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:34 crc kubenswrapper[4919]: I0109 13:31:34.422252 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:34 crc kubenswrapper[4919]: I0109 13:31:34.422278 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:34Z","lastTransitionTime":"2026-01-09T13:31:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:34 crc kubenswrapper[4919]: I0109 13:31:34.526695 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:34 crc kubenswrapper[4919]: I0109 13:31:34.526775 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:34 crc kubenswrapper[4919]: I0109 13:31:34.526801 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:34 crc kubenswrapper[4919]: I0109 13:31:34.526838 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:34 crc kubenswrapper[4919]: I0109 13:31:34.526864 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:34Z","lastTransitionTime":"2026-01-09T13:31:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:34 crc kubenswrapper[4919]: I0109 13:31:34.633483 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:34 crc kubenswrapper[4919]: I0109 13:31:34.633612 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:34 crc kubenswrapper[4919]: I0109 13:31:34.633640 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:34 crc kubenswrapper[4919]: I0109 13:31:34.633673 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:34 crc kubenswrapper[4919]: I0109 13:31:34.633704 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:34Z","lastTransitionTime":"2026-01-09T13:31:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:34 crc kubenswrapper[4919]: I0109 13:31:34.739456 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:34 crc kubenswrapper[4919]: I0109 13:31:34.739521 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:34 crc kubenswrapper[4919]: I0109 13:31:34.739540 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:34 crc kubenswrapper[4919]: I0109 13:31:34.739572 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:34 crc kubenswrapper[4919]: I0109 13:31:34.739594 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:34Z","lastTransitionTime":"2026-01-09T13:31:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:34 crc kubenswrapper[4919]: I0109 13:31:34.751470 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 13:31:34 crc kubenswrapper[4919]: E0109 13:31:34.752833 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 13:31:34 crc kubenswrapper[4919]: I0109 13:31:34.842958 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:34 crc kubenswrapper[4919]: I0109 13:31:34.843030 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:34 crc kubenswrapper[4919]: I0109 13:31:34.843063 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:34 crc kubenswrapper[4919]: I0109 13:31:34.843116 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:34 crc kubenswrapper[4919]: I0109 13:31:34.843145 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:34Z","lastTransitionTime":"2026-01-09T13:31:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:34 crc kubenswrapper[4919]: I0109 13:31:34.947093 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:34 crc kubenswrapper[4919]: I0109 13:31:34.947237 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:34 crc kubenswrapper[4919]: I0109 13:31:34.947254 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:34 crc kubenswrapper[4919]: I0109 13:31:34.947281 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:34 crc kubenswrapper[4919]: I0109 13:31:34.947299 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:34Z","lastTransitionTime":"2026-01-09T13:31:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:35 crc kubenswrapper[4919]: I0109 13:31:35.051096 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:35 crc kubenswrapper[4919]: I0109 13:31:35.051150 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:35 crc kubenswrapper[4919]: I0109 13:31:35.051168 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:35 crc kubenswrapper[4919]: I0109 13:31:35.051196 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:35 crc kubenswrapper[4919]: I0109 13:31:35.051238 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:35Z","lastTransitionTime":"2026-01-09T13:31:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:35 crc kubenswrapper[4919]: I0109 13:31:35.155317 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:35 crc kubenswrapper[4919]: I0109 13:31:35.155376 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:35 crc kubenswrapper[4919]: I0109 13:31:35.155393 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:35 crc kubenswrapper[4919]: I0109 13:31:35.155421 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:35 crc kubenswrapper[4919]: I0109 13:31:35.155440 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:35Z","lastTransitionTime":"2026-01-09T13:31:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:35 crc kubenswrapper[4919]: I0109 13:31:35.258100 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:35 crc kubenswrapper[4919]: I0109 13:31:35.258140 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:35 crc kubenswrapper[4919]: I0109 13:31:35.258157 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:35 crc kubenswrapper[4919]: I0109 13:31:35.258180 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:35 crc kubenswrapper[4919]: I0109 13:31:35.258197 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:35Z","lastTransitionTime":"2026-01-09T13:31:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:35 crc kubenswrapper[4919]: I0109 13:31:35.361366 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:35 crc kubenswrapper[4919]: I0109 13:31:35.361418 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:35 crc kubenswrapper[4919]: I0109 13:31:35.361435 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:35 crc kubenswrapper[4919]: I0109 13:31:35.361459 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:35 crc kubenswrapper[4919]: I0109 13:31:35.361477 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:35Z","lastTransitionTime":"2026-01-09T13:31:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:35 crc kubenswrapper[4919]: I0109 13:31:35.465304 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:35 crc kubenswrapper[4919]: I0109 13:31:35.465378 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:35 crc kubenswrapper[4919]: I0109 13:31:35.465396 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:35 crc kubenswrapper[4919]: I0109 13:31:35.465425 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:35 crc kubenswrapper[4919]: I0109 13:31:35.465450 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:35Z","lastTransitionTime":"2026-01-09T13:31:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:35 crc kubenswrapper[4919]: I0109 13:31:35.568759 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:35 crc kubenswrapper[4919]: I0109 13:31:35.568833 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:35 crc kubenswrapper[4919]: I0109 13:31:35.568855 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:35 crc kubenswrapper[4919]: I0109 13:31:35.568886 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:35 crc kubenswrapper[4919]: I0109 13:31:35.568905 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:35Z","lastTransitionTime":"2026-01-09T13:31:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:35 crc kubenswrapper[4919]: I0109 13:31:35.671816 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:35 crc kubenswrapper[4919]: I0109 13:31:35.671879 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:35 crc kubenswrapper[4919]: I0109 13:31:35.671896 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:35 crc kubenswrapper[4919]: I0109 13:31:35.671925 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:35 crc kubenswrapper[4919]: I0109 13:31:35.671946 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:35Z","lastTransitionTime":"2026-01-09T13:31:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:35 crc kubenswrapper[4919]: I0109 13:31:35.750819 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 13:31:35 crc kubenswrapper[4919]: I0109 13:31:35.750905 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 13:31:35 crc kubenswrapper[4919]: I0109 13:31:35.751034 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xkhdz" Jan 09 13:31:35 crc kubenswrapper[4919]: E0109 13:31:35.751167 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 13:31:35 crc kubenswrapper[4919]: E0109 13:31:35.751361 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 13:31:35 crc kubenswrapper[4919]: E0109 13:31:35.751635 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xkhdz" podUID="7a2e9878-6b0e-4328-a3ca-9f828fb105c9" Jan 09 13:31:35 crc kubenswrapper[4919]: I0109 13:31:35.774924 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:35 crc kubenswrapper[4919]: I0109 13:31:35.774978 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:35 crc kubenswrapper[4919]: I0109 13:31:35.774994 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:35 crc kubenswrapper[4919]: I0109 13:31:35.775022 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:35 crc kubenswrapper[4919]: I0109 13:31:35.775039 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:35Z","lastTransitionTime":"2026-01-09T13:31:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:35 crc kubenswrapper[4919]: I0109 13:31:35.878869 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:35 crc kubenswrapper[4919]: I0109 13:31:35.878940 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:35 crc kubenswrapper[4919]: I0109 13:31:35.878959 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:35 crc kubenswrapper[4919]: I0109 13:31:35.878988 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:35 crc kubenswrapper[4919]: I0109 13:31:35.879010 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:35Z","lastTransitionTime":"2026-01-09T13:31:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:35 crc kubenswrapper[4919]: I0109 13:31:35.982886 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:35 crc kubenswrapper[4919]: I0109 13:31:35.982946 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:35 crc kubenswrapper[4919]: I0109 13:31:35.982965 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:35 crc kubenswrapper[4919]: I0109 13:31:35.982987 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:35 crc kubenswrapper[4919]: I0109 13:31:35.983008 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:35Z","lastTransitionTime":"2026-01-09T13:31:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:36 crc kubenswrapper[4919]: I0109 13:31:36.086817 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:36 crc kubenswrapper[4919]: I0109 13:31:36.086883 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:36 crc kubenswrapper[4919]: I0109 13:31:36.086901 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:36 crc kubenswrapper[4919]: I0109 13:31:36.086927 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:36 crc kubenswrapper[4919]: I0109 13:31:36.086945 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:36Z","lastTransitionTime":"2026-01-09T13:31:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:36 crc kubenswrapper[4919]: I0109 13:31:36.189668 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:36 crc kubenswrapper[4919]: I0109 13:31:36.189738 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:36 crc kubenswrapper[4919]: I0109 13:31:36.189758 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:36 crc kubenswrapper[4919]: I0109 13:31:36.189788 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:36 crc kubenswrapper[4919]: I0109 13:31:36.189807 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:36Z","lastTransitionTime":"2026-01-09T13:31:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:36 crc kubenswrapper[4919]: I0109 13:31:36.293384 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:36 crc kubenswrapper[4919]: I0109 13:31:36.293467 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:36 crc kubenswrapper[4919]: I0109 13:31:36.293491 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:36 crc kubenswrapper[4919]: I0109 13:31:36.293886 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:36 crc kubenswrapper[4919]: I0109 13:31:36.294113 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:36Z","lastTransitionTime":"2026-01-09T13:31:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:36 crc kubenswrapper[4919]: I0109 13:31:36.397744 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:36 crc kubenswrapper[4919]: I0109 13:31:36.397812 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:36 crc kubenswrapper[4919]: I0109 13:31:36.397830 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:36 crc kubenswrapper[4919]: I0109 13:31:36.397860 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:36 crc kubenswrapper[4919]: I0109 13:31:36.397878 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:36Z","lastTransitionTime":"2026-01-09T13:31:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:36 crc kubenswrapper[4919]: I0109 13:31:36.501389 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:36 crc kubenswrapper[4919]: I0109 13:31:36.501480 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:36 crc kubenswrapper[4919]: I0109 13:31:36.501504 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:36 crc kubenswrapper[4919]: I0109 13:31:36.501535 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:36 crc kubenswrapper[4919]: I0109 13:31:36.501559 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:36Z","lastTransitionTime":"2026-01-09T13:31:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:36 crc kubenswrapper[4919]: I0109 13:31:36.604519 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:36 crc kubenswrapper[4919]: I0109 13:31:36.604600 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:36 crc kubenswrapper[4919]: I0109 13:31:36.604622 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:36 crc kubenswrapper[4919]: I0109 13:31:36.604653 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:36 crc kubenswrapper[4919]: I0109 13:31:36.604676 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:36Z","lastTransitionTime":"2026-01-09T13:31:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:36 crc kubenswrapper[4919]: I0109 13:31:36.708094 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:36 crc kubenswrapper[4919]: I0109 13:31:36.708176 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:36 crc kubenswrapper[4919]: I0109 13:31:36.708250 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:36 crc kubenswrapper[4919]: I0109 13:31:36.708289 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:36 crc kubenswrapper[4919]: I0109 13:31:36.708311 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:36Z","lastTransitionTime":"2026-01-09T13:31:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:36 crc kubenswrapper[4919]: I0109 13:31:36.751056 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 13:31:36 crc kubenswrapper[4919]: E0109 13:31:36.751302 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 13:31:36 crc kubenswrapper[4919]: I0109 13:31:36.812969 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:36 crc kubenswrapper[4919]: I0109 13:31:36.813031 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:36 crc kubenswrapper[4919]: I0109 13:31:36.813049 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:36 crc kubenswrapper[4919]: I0109 13:31:36.813077 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:36 crc kubenswrapper[4919]: I0109 13:31:36.813095 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:36Z","lastTransitionTime":"2026-01-09T13:31:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:36 crc kubenswrapper[4919]: I0109 13:31:36.916233 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:36 crc kubenswrapper[4919]: I0109 13:31:36.916322 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:36 crc kubenswrapper[4919]: I0109 13:31:36.916340 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:36 crc kubenswrapper[4919]: I0109 13:31:36.916377 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:36 crc kubenswrapper[4919]: I0109 13:31:36.916396 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:36Z","lastTransitionTime":"2026-01-09T13:31:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:37 crc kubenswrapper[4919]: I0109 13:31:37.019113 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:37 crc kubenswrapper[4919]: I0109 13:31:37.019175 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:37 crc kubenswrapper[4919]: I0109 13:31:37.019193 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:37 crc kubenswrapper[4919]: I0109 13:31:37.019253 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:37 crc kubenswrapper[4919]: I0109 13:31:37.019275 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:37Z","lastTransitionTime":"2026-01-09T13:31:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:37 crc kubenswrapper[4919]: I0109 13:31:37.122856 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:37 crc kubenswrapper[4919]: I0109 13:31:37.122919 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:37 crc kubenswrapper[4919]: I0109 13:31:37.122937 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:37 crc kubenswrapper[4919]: I0109 13:31:37.122962 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:37 crc kubenswrapper[4919]: I0109 13:31:37.122984 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:37Z","lastTransitionTime":"2026-01-09T13:31:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:37 crc kubenswrapper[4919]: I0109 13:31:37.226772 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:37 crc kubenswrapper[4919]: I0109 13:31:37.226839 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:37 crc kubenswrapper[4919]: I0109 13:31:37.226856 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:37 crc kubenswrapper[4919]: I0109 13:31:37.226886 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:37 crc kubenswrapper[4919]: I0109 13:31:37.226905 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:37Z","lastTransitionTime":"2026-01-09T13:31:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:37 crc kubenswrapper[4919]: I0109 13:31:37.330700 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:37 crc kubenswrapper[4919]: I0109 13:31:37.330826 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:37 crc kubenswrapper[4919]: I0109 13:31:37.330850 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:37 crc kubenswrapper[4919]: I0109 13:31:37.330877 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:37 crc kubenswrapper[4919]: I0109 13:31:37.330895 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:37Z","lastTransitionTime":"2026-01-09T13:31:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:37 crc kubenswrapper[4919]: I0109 13:31:37.434119 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:37 crc kubenswrapper[4919]: I0109 13:31:37.434186 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:37 crc kubenswrapper[4919]: I0109 13:31:37.434241 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:37 crc kubenswrapper[4919]: I0109 13:31:37.434277 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:37 crc kubenswrapper[4919]: I0109 13:31:37.434302 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:37Z","lastTransitionTime":"2026-01-09T13:31:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:37 crc kubenswrapper[4919]: I0109 13:31:37.537677 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:37 crc kubenswrapper[4919]: I0109 13:31:37.537746 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:37 crc kubenswrapper[4919]: I0109 13:31:37.537767 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:37 crc kubenswrapper[4919]: I0109 13:31:37.537793 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:37 crc kubenswrapper[4919]: I0109 13:31:37.537811 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:37Z","lastTransitionTime":"2026-01-09T13:31:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:37 crc kubenswrapper[4919]: I0109 13:31:37.640566 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:37 crc kubenswrapper[4919]: I0109 13:31:37.640680 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:37 crc kubenswrapper[4919]: I0109 13:31:37.640700 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:37 crc kubenswrapper[4919]: I0109 13:31:37.640730 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:37 crc kubenswrapper[4919]: I0109 13:31:37.640751 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:37Z","lastTransitionTime":"2026-01-09T13:31:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:37 crc kubenswrapper[4919]: I0109 13:31:37.744062 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:37 crc kubenswrapper[4919]: I0109 13:31:37.744129 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:37 crc kubenswrapper[4919]: I0109 13:31:37.744148 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:37 crc kubenswrapper[4919]: I0109 13:31:37.744177 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:37 crc kubenswrapper[4919]: I0109 13:31:37.744198 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:37Z","lastTransitionTime":"2026-01-09T13:31:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:37 crc kubenswrapper[4919]: I0109 13:31:37.750941 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xkhdz" Jan 09 13:31:37 crc kubenswrapper[4919]: I0109 13:31:37.751000 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 13:31:37 crc kubenswrapper[4919]: I0109 13:31:37.751043 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 13:31:37 crc kubenswrapper[4919]: E0109 13:31:37.751140 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xkhdz" podUID="7a2e9878-6b0e-4328-a3ca-9f828fb105c9" Jan 09 13:31:37 crc kubenswrapper[4919]: E0109 13:31:37.751311 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 13:31:37 crc kubenswrapper[4919]: E0109 13:31:37.751537 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 13:31:37 crc kubenswrapper[4919]: I0109 13:31:37.847901 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:37 crc kubenswrapper[4919]: I0109 13:31:37.847967 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:37 crc kubenswrapper[4919]: I0109 13:31:37.847984 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:37 crc kubenswrapper[4919]: I0109 13:31:37.848012 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:37 crc kubenswrapper[4919]: I0109 13:31:37.848031 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:37Z","lastTransitionTime":"2026-01-09T13:31:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:37 crc kubenswrapper[4919]: I0109 13:31:37.950731 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:37 crc kubenswrapper[4919]: I0109 13:31:37.950799 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:37 crc kubenswrapper[4919]: I0109 13:31:37.950813 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:37 crc kubenswrapper[4919]: I0109 13:31:37.950835 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:37 crc kubenswrapper[4919]: I0109 13:31:37.950847 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:37Z","lastTransitionTime":"2026-01-09T13:31:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.053717 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.053972 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.054173 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.054426 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.054625 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:38Z","lastTransitionTime":"2026-01-09T13:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.157951 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.158019 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.158044 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.158073 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.158092 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:38Z","lastTransitionTime":"2026-01-09T13:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.247871 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.248230 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.248318 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.248428 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.248519 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:38Z","lastTransitionTime":"2026-01-09T13:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:38 crc kubenswrapper[4919]: E0109 13:31:38.269433 4919 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a043b745-924a-464c-80aa-f4df877f55bf\\\",\\\"systemUUID\\\":\\\"4cea77be-9aeb-4181-a0b4-b60e5a362fd9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:38Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.274520 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.274587 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.274615 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.274649 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.274675 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:38Z","lastTransitionTime":"2026-01-09T13:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:38 crc kubenswrapper[4919]: E0109 13:31:38.296912 4919 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a043b745-924a-464c-80aa-f4df877f55bf\\\",\\\"systemUUID\\\":\\\"4cea77be-9aeb-4181-a0b4-b60e5a362fd9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:38Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.300705 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.300735 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.300745 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.300762 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.300773 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:38Z","lastTransitionTime":"2026-01-09T13:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:38 crc kubenswrapper[4919]: E0109 13:31:38.316351 4919 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a043b745-924a-464c-80aa-f4df877f55bf\\\",\\\"systemUUID\\\":\\\"4cea77be-9aeb-4181-a0b4-b60e5a362fd9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:38Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.321402 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.321450 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.321469 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.321492 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.321513 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:38Z","lastTransitionTime":"2026-01-09T13:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:38 crc kubenswrapper[4919]: E0109 13:31:38.337034 4919 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a043b745-924a-464c-80aa-f4df877f55bf\\\",\\\"systemUUID\\\":\\\"4cea77be-9aeb-4181-a0b4-b60e5a362fd9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:38Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.340808 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.340849 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.340862 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.340877 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.340892 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:38Z","lastTransitionTime":"2026-01-09T13:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:38 crc kubenswrapper[4919]: E0109 13:31:38.356804 4919 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a043b745-924a-464c-80aa-f4df877f55bf\\\",\\\"systemUUID\\\":\\\"4cea77be-9aeb-4181-a0b4-b60e5a362fd9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:38Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:38 crc kubenswrapper[4919]: E0109 13:31:38.356989 4919 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.358415 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.358481 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.358500 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.358522 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.358540 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:38Z","lastTransitionTime":"2026-01-09T13:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.461253 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.461292 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.461305 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.461325 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.461338 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:38Z","lastTransitionTime":"2026-01-09T13:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.564732 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.564814 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.564838 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.564865 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.564885 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:38Z","lastTransitionTime":"2026-01-09T13:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.668304 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.668364 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.668384 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.668412 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.668432 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:38Z","lastTransitionTime":"2026-01-09T13:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.751356 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 13:31:38 crc kubenswrapper[4919]: E0109 13:31:38.751568 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.771485 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.771540 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.771557 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.771582 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.771603 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:38Z","lastTransitionTime":"2026-01-09T13:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.874840 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.874908 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.874926 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.874952 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.874973 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:38Z","lastTransitionTime":"2026-01-09T13:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.978238 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.978319 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.978340 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.978374 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:38 crc kubenswrapper[4919]: I0109 13:31:38.978398 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:38Z","lastTransitionTime":"2026-01-09T13:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:39 crc kubenswrapper[4919]: I0109 13:31:39.081124 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:39 crc kubenswrapper[4919]: I0109 13:31:39.081198 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:39 crc kubenswrapper[4919]: I0109 13:31:39.081243 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:39 crc kubenswrapper[4919]: I0109 13:31:39.081272 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:39 crc kubenswrapper[4919]: I0109 13:31:39.081328 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:39Z","lastTransitionTime":"2026-01-09T13:31:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:39 crc kubenswrapper[4919]: I0109 13:31:39.184676 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:39 crc kubenswrapper[4919]: I0109 13:31:39.184742 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:39 crc kubenswrapper[4919]: I0109 13:31:39.184759 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:39 crc kubenswrapper[4919]: I0109 13:31:39.184795 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:39 crc kubenswrapper[4919]: I0109 13:31:39.184821 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:39Z","lastTransitionTime":"2026-01-09T13:31:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:39 crc kubenswrapper[4919]: I0109 13:31:39.287848 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:39 crc kubenswrapper[4919]: I0109 13:31:39.287930 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:39 crc kubenswrapper[4919]: I0109 13:31:39.287946 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:39 crc kubenswrapper[4919]: I0109 13:31:39.287967 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:39 crc kubenswrapper[4919]: I0109 13:31:39.287984 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:39Z","lastTransitionTime":"2026-01-09T13:31:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:39 crc kubenswrapper[4919]: I0109 13:31:39.392067 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:39 crc kubenswrapper[4919]: I0109 13:31:39.392145 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:39 crc kubenswrapper[4919]: I0109 13:31:39.392163 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:39 crc kubenswrapper[4919]: I0109 13:31:39.392191 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:39 crc kubenswrapper[4919]: I0109 13:31:39.392249 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:39Z","lastTransitionTime":"2026-01-09T13:31:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:39 crc kubenswrapper[4919]: I0109 13:31:39.496154 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:39 crc kubenswrapper[4919]: I0109 13:31:39.496237 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:39 crc kubenswrapper[4919]: I0109 13:31:39.496254 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:39 crc kubenswrapper[4919]: I0109 13:31:39.496281 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:39 crc kubenswrapper[4919]: I0109 13:31:39.496298 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:39Z","lastTransitionTime":"2026-01-09T13:31:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:39 crc kubenswrapper[4919]: I0109 13:31:39.599933 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:39 crc kubenswrapper[4919]: I0109 13:31:39.600003 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:39 crc kubenswrapper[4919]: I0109 13:31:39.600017 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:39 crc kubenswrapper[4919]: I0109 13:31:39.600044 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:39 crc kubenswrapper[4919]: I0109 13:31:39.600060 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:39Z","lastTransitionTime":"2026-01-09T13:31:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:39 crc kubenswrapper[4919]: I0109 13:31:39.703622 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:39 crc kubenswrapper[4919]: I0109 13:31:39.703698 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:39 crc kubenswrapper[4919]: I0109 13:31:39.703717 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:39 crc kubenswrapper[4919]: I0109 13:31:39.703742 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:39 crc kubenswrapper[4919]: I0109 13:31:39.703760 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:39Z","lastTransitionTime":"2026-01-09T13:31:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:39 crc kubenswrapper[4919]: I0109 13:31:39.751682 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xkhdz" Jan 09 13:31:39 crc kubenswrapper[4919]: I0109 13:31:39.751898 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 13:31:39 crc kubenswrapper[4919]: E0109 13:31:39.752065 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xkhdz" podUID="7a2e9878-6b0e-4328-a3ca-9f828fb105c9" Jan 09 13:31:39 crc kubenswrapper[4919]: I0109 13:31:39.752155 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 13:31:39 crc kubenswrapper[4919]: E0109 13:31:39.752296 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 13:31:39 crc kubenswrapper[4919]: E0109 13:31:39.752467 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 13:31:39 crc kubenswrapper[4919]: I0109 13:31:39.806448 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:39 crc kubenswrapper[4919]: I0109 13:31:39.806506 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:39 crc kubenswrapper[4919]: I0109 13:31:39.806524 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:39 crc kubenswrapper[4919]: I0109 13:31:39.806550 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:39 crc kubenswrapper[4919]: I0109 13:31:39.806570 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:39Z","lastTransitionTime":"2026-01-09T13:31:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:39 crc kubenswrapper[4919]: I0109 13:31:39.910145 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:39 crc kubenswrapper[4919]: I0109 13:31:39.910201 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:39 crc kubenswrapper[4919]: I0109 13:31:39.910232 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:39 crc kubenswrapper[4919]: I0109 13:31:39.910248 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:39 crc kubenswrapper[4919]: I0109 13:31:39.910258 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:39Z","lastTransitionTime":"2026-01-09T13:31:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.013009 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.013082 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.013094 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.013109 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.013119 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:40Z","lastTransitionTime":"2026-01-09T13:31:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.115706 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.115754 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.115768 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.115795 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.115809 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:40Z","lastTransitionTime":"2026-01-09T13:31:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.220353 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.220414 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.220428 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.220449 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.220463 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:40Z","lastTransitionTime":"2026-01-09T13:31:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.323467 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.323522 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.323537 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.323558 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.323571 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:40Z","lastTransitionTime":"2026-01-09T13:31:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.427258 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.427532 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.427575 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.427614 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.427638 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:40Z","lastTransitionTime":"2026-01-09T13:31:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.532197 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.532292 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.532311 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.532338 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.532357 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:40Z","lastTransitionTime":"2026-01-09T13:31:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.636449 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.636535 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.636555 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.636581 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.636667 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:40Z","lastTransitionTime":"2026-01-09T13:31:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.740013 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.740106 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.740127 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.740158 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.740180 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:40Z","lastTransitionTime":"2026-01-09T13:31:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.751717 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.752469 4919 scope.go:117] "RemoveContainer" containerID="91ab55c28499edc33ffdd00414823b7319531fbdf5edc0bc8d28cb3ad30265ae" Jan 09 13:31:40 crc kubenswrapper[4919]: E0109 13:31:40.752971 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.777427 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa212ada-bfb5-4aeb-94bb-acf1f0afe319\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a58a444435270a569f4e25649cbfb81bb34700db968dcba7d85b9fdea9006bc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://210c615c5bcdaca954329d575b66f271921f54cd58ff6d75a90d5fc64bd11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dbb1933e5d51eec9dc5963fca537d46986cb88929f55683de676da7f0636133\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6694df4fa2976e9e741fe48d2f16c0b63b29ba80500e4238fc0ff6d6dad53bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:40Z is after 2025-08-24T17:21:41Z"
Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.828857 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:40Z is after 2025-08-24T17:21:41Z"
Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.842658 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.842686 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.842695 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.842711 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.842721 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:40Z","lastTransitionTime":"2026-01-09T13:31:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.850460 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f66189f627ddedba6b2bcb45f145ef53485a9b0b7401ca5fa09363145fce81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c247b3e6a38c79833e097898da1fe92f3dabc7dcd7b71a618506f054f4fd9c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:40Z is after 2025-08-24T17:21:41Z"
Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.872012 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a11a9b6-2419-4f04-b35e-ba296d70b705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f97e00dbff0bc7856400366616b2dfea1022f2dc151e8f1d7cbc5d377639b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eecea89afe25c537de66df075217db267c79f45aecda1f91183fad854aa80c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec868806819a987b7e96dbe674677a084fd3c8bbb3bdae6dd584d62df43b78f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac368c1c0799b3fe1830e01a310247ed45949ab217cf83d98b487a12e482157e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d6b21d85be97a07f83c996deca344aba3e2bd0b249d1ce02f15db105649bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15885cccea72f90cf3317dca9a65041611aa4b8de069779e15d43ad39112f1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91ab55c28499edc33ffdd00414823b7319531fbdf5edc0bc8d28cb3ad30265ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91ab55c28499edc33ffdd00414823b7319531fbdf5edc0bc8d28cb3ad30265ae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T13:31:12Z\\\",\\\"message\\\":\\\"\\\\nI0109 13:31:11.711533 6593 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0109 13:31:11.711539 6593 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0109 13:31:11.711614 6593 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0109 13:31:11.712633 6593 handler.go:208] Removed *v1.Node event handler 2\\\\nI0109 13:31:11.712650 6593 handler.go:208] Removed *v1.Node event handler 7\\\\nI0109 13:31:11.712661 6593 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0109 13:31:11.712738 6593 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0109 13:31:11.712799 6593 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0109 13:31:11.712866 6593 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0109 13:31:11.712926 6593 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0109 13:31:11.712950 6593 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0109 13:31:11.712951 6593 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0109 13:31:11.712930 6593 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0109 13:31:11.712998 6593 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0109 13:31:11.713041 6593 factory.go:656] Stopping watch factory\\\\nI0109 13:31:11.713073 6593 ovnkube.go:599] Stopped ovnkube\\\\nI0109 13\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:31:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-w74hl_openshift-ovn-kubernetes(4a11a9b6-2419-4f04-b35e-ba296d70b705)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1618a6491a924e7cd28340bea333585a7c7a634ca063183f21574ed23df002cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w74hl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:40Z is after 2025-08-24T17:21:41Z"
Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.885555 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bzs4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd1e83cb-5a48-4331-b403-d7a07e8aa67f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://389ab6b02eef188a2713e27d93c70a64a1ab4ebcfe58634cf5266197b0375bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kd6n5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bzs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:40Z is after 2025-08-24T17:21:41Z"
Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.901634 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9s49l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a361336-2125-49a9-8332-eb66286dcdb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://108cc929d3e1674b5cc9341c92e9d4f5142fc0d87212666efba8890341e8adc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-slm6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd519645b9635f304f7af4e5e832eff6ae2964b35ed15d918bae7b85b51c1de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-slm6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9s49l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:40Z is after 2025-08-24T17:21:41Z"
Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.922861 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:40Z is after 2025-08-24T17:21:41Z"
Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.942256 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:40Z is after 2025-08-24T17:21:41Z"
Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.945402 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.945438 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.945448 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.945463 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.945477 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:40Z","lastTransitionTime":"2026-01-09T13:31:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.963618 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9z7cc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1115c0ba-16d5-4e81-a4b4-07ba7f360825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6522bc54695ade8637b22dbf9074c82430ce4c8deb8d5d9631933a1d49d5b4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkvnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9z7cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:40Z is after 2025-08-24T17:21:41Z"
Jan 09 13:31:40 crc kubenswrapper[4919]: I0109 13:31:40.989400 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-97zdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21befbc8-9e98-4557-89af-a116cc8c484c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e336d165af12149ba05542f8cebe8a16c6fd66d3317d27c880b8969e7666691d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://458aeaca27fca436a56c5bb1f4180fc11cc5d27c53d33cd5b00f17e0e6f99322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458aeaca27fca436a56c5bb1f4180fc11cc5d27c53d33cd5b00f17e0e6f99322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3ea2572c59e697886d3666518ed78dcbe1bde1dd6fc6255e3846d1b34f26cdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ea2572c59e697886d3666518ed78dcbe1bde1dd6fc6255e3846d1b34f26cdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-97zdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:40Z is after 2025-08-24T17:21:41Z"
Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.013480 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d6c98ae6aa3041e47bac4782d73902dc39137b2fc8550efce479bfb9cdf37b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:41Z is after 2025-08-24T17:21:41Z"
Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.038088 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a055b487-5d63-4265-ac12-735612354e73\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903fbe8270e684937c51c54aeaaaaf7cfed5e77f6d76d5874210ffcf608a9afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3759051eb43a6806b9525498f376abc060e88ba0d7bc9274e84eaabfdaefd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8db1dada305e10f10cb4fe92cfaa3fc88e2e2ec3338fa31efc240af0888f60c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b3f68aa173942f9d95a3832234e0bbd5ccacf63cd73b8d110708ebd0978a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d7857918228a0905bb39cd161f6ba21561cf49ce7980a1835e5b3cc68cef3c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T13:30:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0109 13:30:33.398307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 13:30:33.400005 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4019686595/tls.crt::/tmp/serving-cert-4019686595/tls.key\\\\\\\"\\\\nI0109 13:30:43.320927 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 13:30:43.328283 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 13:30:43.328361 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 13:30:43.328421 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 13:30:43.328435 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 13:30:43.338653 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 13:30:43.338748 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 13:30:43.338794 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 13:30:43.338801 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 13:30:43.338808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 13:30:43.339373 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 13:30:43.343716 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2d0c02e83be85aa5925b8f607809b6bbf3db3188f4e165ca42eb04fe694da6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:41Z is after 2025-08-24T17:21:41Z"
Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.048961 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.049028 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.049047 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.049078 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.049099 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:41Z","lastTransitionTime":"2026-01-09T13:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.058633 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9797b243-6d0f-4f8b-8b3d-b92ac439e3bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d15e612b4abcc61c356602fa521bd156a5e2f5b1e89bbf48b2bceac8a06fbca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d24ffabc3436ac75e2611506f1d4d40faed59e4fa4c618523275331408bb219d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae0a71cfd94d80d04efad2c5671e1a6422ee373da4fc7ab38e36198e3fcad96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5ef78571904b7b98f20f7eb25e83def8b64fc7e86e3c376ee2c6a00334e8667\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5ef78571904b7b98f20f7eb25e83def8b64fc7e86e3c376ee2c6a00334e8667\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:41Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.079843 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c494f87fb3dc8455ed5de9b4073f1b69d9d96c3b2a4d260cb5d57b9df1e825e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:41Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.094571 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6730e2d4c2375d62687a3919410fd7440dc63ec931c008941ab1882e625c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b529d70339efbd46e30e992ceb5afe2db97bdde02f50798c9eb61dfa23d7f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9m5lv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:41Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.117520 4919 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-kgw8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dd4aa1459db1d095dd8a4d538ce3dc77e934eaaa815c7b700de8ee6ae8cc25a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3890ccf2af2b3f8c6648e7b36c716d7b9ef45d6821c5c7162c455641399184a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T13:31:32Z\\\",\\\"message\\\":\\\"2026-01-09T13:30:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6c242d58-4d9b-4293-8565-97eb1a2c9c17\\\\n2026-01-09T13:30:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6c242d58-4d9b-4293-8565-97eb1a2c9c17 to /host/opt/cni/bin/\\\\n2026-01-09T13:30:47Z [verbose] multus-daemon started\\\\n2026-01-09T13:30:47Z [verbose] Readiness Indicator file check\\\\n2026-01-09T13:31:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:31:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srz24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgw8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:41Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.137447 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xkhdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a2e9878-6b0e-4328-a3ca-9f828fb105c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:31:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xkhdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:41Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.152392 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.152451 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.152465 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.152489 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.152504 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:41Z","lastTransitionTime":"2026-01-09T13:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.255667 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.255731 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.255749 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.255776 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.255796 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:41Z","lastTransitionTime":"2026-01-09T13:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.359975 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.360511 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.360566 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.360675 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.360703 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:41Z","lastTransitionTime":"2026-01-09T13:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.441348 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w74hl_4a11a9b6-2419-4f04-b35e-ba296d70b705/ovnkube-controller/2.log" Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.446591 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" event={"ID":"4a11a9b6-2419-4f04-b35e-ba296d70b705","Type":"ContainerStarted","Data":"af9d1f7638ecbd19ef127f9dedcf9c618013f2e6cbd661173a0eead07c7023a9"} Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.447468 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.465063 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.465173 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.465262 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.465325 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.465354 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:41Z","lastTransitionTime":"2026-01-09T13:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.474820 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:41Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.500628 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d6c98ae6aa3041e47bac4782d73902dc39137b2fc8550efce479bfb9cdf37b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:41Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.524849 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:41Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.543938 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9z7cc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1115c0ba-16d5-4e81-a4b4-07ba7f360825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6522bc54695ade8637b22dbf9074c82430ce4c8deb8d5d9631933a1d49d5b4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkvnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9z7cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:41Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.568644 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.568726 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.568751 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.568788 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.568822 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:41Z","lastTransitionTime":"2026-01-09T13:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.568758 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-97zdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21befbc8-9e98-4557-89af-a116cc8c484c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e336d165af12149ba05542f8cebe8a16c6fd66d3317d27c880b8969e7666691d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://660f6644
ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://458aeaca27fca436a56c5bb1f4180fc11cc5d27c53d33cd5b00f17e0e6f99322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458aeaca27fca436a56c5bb1f4180fc11cc5d27c53d33cd5b00f17e0e6f99322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3ea2572c59e697886d3666518ed78dcbe1bde1dd6fc6255e3846d1b34f26cdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ea2572c59e697886d3666518ed78dcbe1bde1dd6fc6255e3846d1b34f26cdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-97zdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:41Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.590699 4919 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a055b487-5d63-4265-ac12-735612354e73\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903fbe8270e684937c51c54aeaaaaf7cfed5e77f6d76d5874210ffcf608a9afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3759051eb43a6806b9525498f376abc060e88ba0d7bc9274e84eaabfdaefd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8db1dada305e10f10cb4fe92cfaa3fc88e2e2ec3338fa31efc240af0888f60c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b3f68aa
173942f9d95a3832234e0bbd5ccacf63cd73b8d110708ebd0978a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d7857918228a0905bb39cd161f6ba21561cf49ce7980a1835e5b3cc68cef3c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T13:30:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0109 13:30:33.398307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 13:30:33.400005 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4019686595/tls.crt::/tmp/serving-cert-4019686595/tls.key\\\\\\\"\\\\nI0109 13:30:43.320927 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 13:30:43.328283 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 13:30:43.328361 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 13:30:43.328421 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 13:30:43.328435 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 13:30:43.338653 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 13:30:43.338748 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 13:30:43.338794 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 13:30:43.338801 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 13:30:43.338808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 13:30:43.339373 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 13:30:43.343716 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2d0c02e83be85aa5925b8f607809b6bbf3db3188f4e165ca42eb04fe694da6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:41Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.606570 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9797b243-6d0f-4f8b-8b3d-b92ac439e3bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d15e612b4abcc61c356602fa521bd156a5e2f5b1e89bbf48b2bceac8a06fbca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d24ffabc3436ac75e2611506f1d4d40faed59e4fa4c618523275331408bb219d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae0a71cfd94d80d04efad2c5671e1a6422ee373da4fc7ab38e36198e3fcad96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5ef78571904b7b98f20f7eb25e83def8b64fc7e86e3c376ee2c6a00334e8667\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5ef78571904b7b98f20f7eb25e83def8b64fc7e86e3c376ee2c6a00334e8667\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:41Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.624408 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c494f87fb3dc8455ed5de9b4073f1b69d9d96c3b2a4d260cb5d57b9df1e825e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:41Z is after 2025-08-24T17:21:41Z" Jan 09 
13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.642104 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6730e2d4c2375d62687a3919410fd7440dc63ec931c008941ab1882e625c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b529d70339efbd46e30e992ceb5afe2db97bdde02f50798c9eb61dfa23d7f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9m5lv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-09T13:31:41Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.664009 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgw8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dd4aa1459db1d095dd8a4d538ce3dc77e934eaaa815c7b700de8ee6ae8cc25a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3890ccf2af2b3f8c6648e7b36c716d7b9ef45d6821c5c7162c455641399184a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T13:31:32Z\\\",\\\"message\\\":\\\"2026-01-09T13:30:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6c242d58-4d9b-4293-8565-97eb1a2c9c17\\\\n2026-01-09T13:30:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6c242d58-4d9b-4293-8565-97eb1a2c9c17 to /host/opt/cni/bin/\\\\n2026-01-09T13:30:47Z [verbose] multus-daemon started\\\\n2026-01-09T13:30:47Z [verbose] Readiness Indicator file check\\\\n2026-01-09T13:31:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:31:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srz24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgw8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:41Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.671727 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.671803 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.671823 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.671857 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.671877 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:41Z","lastTransitionTime":"2026-01-09T13:31:41Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.679862 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xkhdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a2e9878-6b0e-4328-a3ca-9f828fb105c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:31:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xkhdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:41Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.701680 4919 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa212ada-bfb5-4aeb-94bb-acf1f0afe319\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a58a444435270a569f4e25649cbfb81bb34700db968dcba7d85b9fdea9006bc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://210c615c5bcdaca954329d575b66f271921f54cd58ff6d75a90d5fc64bd11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dbb1933e5d51eec9dc5963fca537d46986cb88929f55683de676da7f0636133\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://d6694df4fa2976e9e741fe48d2f16c0b63b29ba80500e4238fc0ff6d6dad53bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:41Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.715311 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:41Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.728504 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f66189f627ddedba6b2bcb45f145ef53485a9b0b7401ca5fa09363145fce81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c247b3e6a38c79833e097898da1fe92f3dabc7dcd7b71a618506f054f4fd9c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:41Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.750685 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.750723 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.750685 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xkhdz" Jan 09 13:31:41 crc kubenswrapper[4919]: E0109 13:31:41.751045 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 13:31:41 crc kubenswrapper[4919]: E0109 13:31:41.751187 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xkhdz" podUID="7a2e9878-6b0e-4328-a3ca-9f828fb105c9" Jan 09 13:31:41 crc kubenswrapper[4919]: E0109 13:31:41.751320 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.762233 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a11a9b6-2419-4f04-b35e-ba296d70b705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f97e00dbff0bc7856400366616b2dfea1022f2dc151e8f1d7cbc5d377639b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eecea89afe25c537de66df075217db267c79f45aecda1f91183fad854aa80c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec868806819a987b7e96dbe674677a084fd3c8bbb3bdae6dd584d62df43b78f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac368c1c0799b3fe1830e01a310247ed45949ab217cf83d98b487a12e482157e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d6b21d85be97a07f83c996deca344aba3e2bd0b249d1ce02f15db105649bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15885cccea72f90cf3317dca9a65041611aa4b8de069779e15d43ad39112f1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47
ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af9d1f7638ecbd19ef127f9dedcf9c618013f2e6cbd661173a0eead07c7023a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91ab55c28499edc33ffdd00414823b7319531fbdf5edc0bc8d28cb3ad30265ae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T13:31:12Z\\\",\\\"message\\\":\\\"\\\\nI0109 13:31:11.711533 6593 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0109 13:31:11.711539 6593 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0109 13:31:11.711614 6593 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0109 13:31:11.712633 6593 handler.go:208] Removed *v1.Node event handler 2\\\\nI0109 13:31:11.712650 6593 handler.go:208] Removed *v1.Node event handler 7\\\\nI0109 13:31:11.712661 6593 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0109 13:31:11.712738 6593 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0109 13:31:11.712799 6593 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0109 13:31:11.712866 6593 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0109 13:31:11.712926 6593 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0109 13:31:11.712950 6593 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0109 13:31:11.712951 6593 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0109 13:31:11.712930 6593 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0109 13:31:11.712998 6593 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0109 13:31:11.713041 6593 factory.go:656] Stopping watch factory\\\\nI0109 13:31:11.713073 6593 ovnkube.go:599] Stopped ovnkube\\\\nI0109 
13\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:31:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:31:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1618a6491a924e7cd28340bea333585a7c7a634ca063183f21574ed23df002cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w74hl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:41Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.773482 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.774436 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.774526 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.774574 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.774614 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.774647 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:41Z","lastTransitionTime":"2026-01-09T13:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.783492 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bzs4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd1e83cb-5a48-4331-b403-d7a07e8aa67f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://389ab6b02eef188a2713e27d93c70a64a1ab4ebcfe58634cf5266197b0375bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kd6n5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bzs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:41Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.799775 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9s49l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a361336-2125-49a9-8332-eb66286dcdb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://108cc929d3e1674b5cc9341c92e9d4f5142fc0d87212666efba8890341e8adc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-slm6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd519645b9635f304f7af4e5e832eff6ae2964b35ed15d918bae7b85b51c1de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-slm6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9s49l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:41Z is after 2025-08-24T17:21:41Z" Jan 09 
13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.877581 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.877619 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.877629 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.877643 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:31:41 crc kubenswrapper[4919]: I0109 13:31:41.877652 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:41Z","lastTransitionTime":"2026-01-09T13:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[the same five-entry status block repeats at roughly 100 ms intervals at 13:31:41.980550, 13:31:42.083584, 13:31:42.187083, 13:31:42.290021, 13:31:42.393871, 13:31:42.497187, 13:31:42.600744, and 13:31:42.703234; only the timestamps advance]
Jan 09 13:31:42 crc kubenswrapper[4919]: I0109 13:31:42.751666 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 09 13:31:42 crc kubenswrapper[4919]: E0109 13:31:42.751864 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
[the five-entry status block repeats again at 13:31:42.806944, 13:31:42.909693, 13:31:43.013391, 13:31:43.116764, 13:31:43.219928, 13:31:43.323427, and 13:31:43.426864]
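The entries above show the kubelet holding the node NotReady because /etc/kubernetes/cni/net.d/ contains no CNI configuration. A minimal stdlib-Go sketch of the equivalent on-disk check — the directory path is taken verbatim from the log, while the extension list and the function name are my assumptions about what a CNI config loader conventionally accepts:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether dir contains at least one CNI network
// configuration file. The .conf/.conflist/.json extensions mirror what
// libcni-style loaders are commonly said to pick up (an assumption,
// not something stated in this log).
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	// Path taken from the NetworkPluginNotReady message above.
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
	fmt.Println(ok, err)
}

Run on the node itself, an empty result here would be consistent with every NetworkPluginNotReady entry in this excerpt.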
Jan 09 13:31:43 crc kubenswrapper[4919]: I0109 13:31:43.457421 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w74hl_4a11a9b6-2419-4f04-b35e-ba296d70b705/ovnkube-controller/3.log"
Jan 09 13:31:43 crc kubenswrapper[4919]: I0109 13:31:43.458429 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w74hl_4a11a9b6-2419-4f04-b35e-ba296d70b705/ovnkube-controller/2.log"
Jan 09 13:31:43 crc kubenswrapper[4919]: I0109 13:31:43.461772 4919 generic.go:334] "Generic (PLEG): container finished" podID="4a11a9b6-2419-4f04-b35e-ba296d70b705" containerID="af9d1f7638ecbd19ef127f9dedcf9c618013f2e6cbd661173a0eead07c7023a9" exitCode=1
Jan 09 13:31:43 crc kubenswrapper[4919]: I0109 13:31:43.461827 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" event={"ID":"4a11a9b6-2419-4f04-b35e-ba296d70b705","Type":"ContainerDied","Data":"af9d1f7638ecbd19ef127f9dedcf9c618013f2e6cbd661173a0eead07c7023a9"}
Jan 09 13:31:43 crc kubenswrapper[4919]: I0109 13:31:43.461890 4919 scope.go:117] "RemoveContainer" containerID="91ab55c28499edc33ffdd00414823b7319531fbdf5edc0bc8d28cb3ad30265ae"
Jan 09 13:31:43 crc kubenswrapper[4919]: I0109 13:31:43.463114 4919 scope.go:117] "RemoveContainer" containerID="af9d1f7638ecbd19ef127f9dedcf9c618013f2e6cbd661173a0eead07c7023a9"
Jan 09 13:31:43 crc kubenswrapper[4919]: E0109 13:31:43.463477 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-w74hl_openshift-ovn-kubernetes(4a11a9b6-2419-4f04-b35e-ba296d70b705)\"" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" podUID="4a11a9b6-2419-4f04-b35e-ba296d70b705"
Jan 09 13:31:43 crc kubenswrapper[4919]: I0109 13:31:43.479956 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:43Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:43 crc kubenswrapper[4919]: I0109 13:31:43.502724 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d6c98ae6aa3041e47bac4782d73902dc39137b2fc8550efce479bfb9cdf37b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:43Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:43 crc kubenswrapper[4919]: I0109 13:31:43.523994 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:43Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:43 crc kubenswrapper[4919]: I0109 13:31:43.530179 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:43 crc kubenswrapper[4919]: I0109 13:31:43.530265 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:43 crc kubenswrapper[4919]: I0109 13:31:43.530290 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:43 crc kubenswrapper[4919]: I0109 13:31:43.530323 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:43 crc kubenswrapper[4919]: I0109 13:31:43.530344 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:43Z","lastTransitionTime":"2026-01-09T13:31:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:43 crc kubenswrapper[4919]: I0109 13:31:43.541084 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9z7cc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1115c0ba-16d5-4e81-a4b4-07ba7f360825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6522bc54695ade8637b22dbf9074c82430ce4c8deb8d5d9631933a1d49d5b4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkvnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9z7cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:43Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:43 crc kubenswrapper[4919]: I0109 13:31:43.566134 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-97zdz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21befbc8-9e98-4557-89af-a116cc8c484c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e336d165af12149ba05542f8cebe8a16c6fd66d3317d27c880b8969e7666691d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://458aeaca27fca436a56c5bb1f4180fc11cc5d27c53d33cd5b00f17e0e6f99322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458aeaca27fca436a56c5bb1f4180fc11cc5d27c53d33cd5b00f17e0e6f99322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3ea2572c59e697886d3666518ed78dcbe1bde1dd6fc6255e3846d1b34f26cdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ea2572c59e697886d3666518ed78dcbe1bde1dd6fc6255e3846d1b34f26cdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-97zdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:43Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:43 crc kubenswrapper[4919]: I0109 13:31:43.585284 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xkhdz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a2e9878-6b0e-4328-a3ca-9f828fb105c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:31:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xkhdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:43Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:43 crc kubenswrapper[4919]: I0109 13:31:43.607509 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2d79db8-b1e1-43cb-b39f-aea72914778d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d03488cb3bf92b2cf5ae2daac3b83d4925c14e6bbf4789a0ed00e4caf275a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a131a5c3b7ddf092cba3a77f0ed07915fd0d2145eae04906963ab88d015f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8a131a5c3b7ddf092cba3a77f0ed07915fd0d2145eae04906963ab88d015f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:43Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:43 crc kubenswrapper[4919]: I0109 13:31:43.633284 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:43 crc kubenswrapper[4919]: I0109 13:31:43.633355 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 09 13:31:43 crc kubenswrapper[4919]: I0109 13:31:43.633377 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:43 crc kubenswrapper[4919]: I0109 13:31:43.633408 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:43 crc kubenswrapper[4919]: I0109 13:31:43.633431 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:43Z","lastTransitionTime":"2026-01-09T13:31:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:43 crc kubenswrapper[4919]: I0109 13:31:43.636123 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a055b487-5d63-4265-ac12-735612354e73\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903fbe8270e684937c51c54aeaaaaf7cfed5e77f6d76d5874210ffcf608a9afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3759051eb43a6806b9525498f376abc060e88ba0d7bc9274e84eaabfdaefd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-po
d-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8db1dada305e10f10cb4fe92cfaa3fc88e2e2ec3338fa31efc240af0888f60c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b3f68aa173942f9d95a3832234e0bbd5ccacf63cd73b8d110708ebd0978a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d7857918228a0905bb39cd161f6ba21561cf49ce7980a1835e5b3cc68cef3c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T13:30:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0109 13:30:33.398307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 13:30:33.400005 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4019686595/tls.crt::/tmp/serving-cert-4019686595/tls.key\\\\\\\"\\\\nI0109 13:30:43.320927 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 13:30:43.328283 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 13:30:43.328361 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 13:30:43.328421 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 13:30:43.328435 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 13:30:43.338653 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 13:30:43.338748 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 13:30:43.338794 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 13:30:43.338801 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 13:30:43.338808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 13:30:43.339373 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery 
information is complete\\\\nF0109 13:30:43.343716 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2d0c02e83be85aa5925b8f607809b6bbf3db3188f4e165ca42eb04fe694da6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:43Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:43 crc kubenswrapper[4919]: I0109 13:31:43.656499 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9797b243-6d0f-4f8b-8b3d-b92ac439e3bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d15e612b4abcc61c356602fa521bd156a5e2f5b1e89bbf48b2bceac8a06fbca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d24ffabc3436ac75e2611506f1d4d40faed59e4fa4c618523275331408bb219d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae0a71cfd94d80d04efad2c5671e1a6422ee373da4fc7ab38e36198e3fcad96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5ef78571904b7b98f20f7eb25e83def8b64fc7e86e3c376ee2c6a00334e8667\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5ef78571904b7b98f20f7eb25e83def8b64fc7e86e3c376ee2c6a00334e8667\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:43Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:43 crc kubenswrapper[4919]: I0109 13:31:43.680973 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c494f87fb3dc8455ed5de9b4073f1b69d9d96c3b2a4d260cb5d57b9df1e825e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:43Z is after 2025-08-24T17:21:41Z" Jan 09 
13:31:43 crc kubenswrapper[4919]: I0109 13:31:43.703974 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6730e2d4c2375d62687a3919410fd7440dc63ec931c008941ab1882e625c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b529d70339efbd46e30e992ceb5afe2db97bdde02f50798c9eb61dfa23d7f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9m5lv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-09T13:31:43Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:43 crc kubenswrapper[4919]: I0109 13:31:43.725938 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgw8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dd4aa1459db1d095dd8a4d538ce3dc77e934eaaa815c7b700de8ee6ae8cc25a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3890ccf2af2b3f8c6648e7b36c716d7b9ef45d6821c5c7162c455641399184a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T13:31:32Z\\\",\\\"message\\\":\\\"2026-01-09T13:30:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6c242d58-4d9b-4293-8565-97eb1a2c9c17\\\\n2026-01-09T13:30:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6c242d58-4d9b-4293-8565-97eb1a2c9c17 to /host/opt/cni/bin/\\\\n2026-01-09T13:30:47Z [verbose] multus-daemon started\\\\n2026-01-09T13:30:47Z [verbose] Readiness Indicator file check\\\\n2026-01-09T13:31:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:31:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srz24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgw8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:43Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:43 crc kubenswrapper[4919]: I0109 13:31:43.736577 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:43 crc kubenswrapper[4919]: I0109 13:31:43.736633 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:43 crc kubenswrapper[4919]: I0109 13:31:43.736645 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:43 crc kubenswrapper[4919]: I0109 13:31:43.736671 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:43 crc kubenswrapper[4919]: I0109 13:31:43.736703 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:43Z","lastTransitionTime":"2026-01-09T13:31:43Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:43 crc kubenswrapper[4919]: I0109 13:31:43.746303 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9s49l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a361336-2125-49a9-8332-eb66286dcdb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://108cc929d3e1674b5cc9341c92e9d4f5142fc0d87212666efba8890341e8adc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-slm6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd519645b9635f304f7af4e5e832eff6ae2964b35ed15d918bae7b85b51c1de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-slm6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-
01-09T13:30:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9s49l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:43Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:43 crc kubenswrapper[4919]: I0109 13:31:43.751508 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xkhdz" Jan 09 13:31:43 crc kubenswrapper[4919]: I0109 13:31:43.751516 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 13:31:43 crc kubenswrapper[4919]: I0109 13:31:43.751616 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 13:31:43 crc kubenswrapper[4919]: E0109 13:31:43.751739 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xkhdz" podUID="7a2e9878-6b0e-4328-a3ca-9f828fb105c9" Jan 09 13:31:43 crc kubenswrapper[4919]: E0109 13:31:43.751830 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 13:31:43 crc kubenswrapper[4919]: E0109 13:31:43.751924 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 13:31:43 crc kubenswrapper[4919]: I0109 13:31:43.765789 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa212ada-bfb5-4aeb-94bb-acf1f0afe319\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a58a444435270a569f4e25649cbfb81bb34700db968dcba7d85b9fdea9006bc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://210c615c5bcdaca954329d575b66f271921f54cd58ff6d75a90d5fc64bd11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dbb1933e5d51eec9dc5963fca537d46986cb88929f55683de676da7f0636133\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6694df4fa2976e9e741fe48d2f16c0b63b29ba80500e4238fc0ff6d6dad53bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:43Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:43 crc kubenswrapper[4919]: I0109 13:31:43.784047 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:43Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:43 crc kubenswrapper[4919]: I0109 13:31:43.805693 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f66189f627ddedba6b2bcb45f145ef53485a9b0b7401ca5fa09363145fce81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c247b3e6a38c79833e097898da1fe92f3dabc7dcd7b71a618506f054f4fd9c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:43Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:43 crc kubenswrapper[4919]: I0109 13:31:43.834985 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a11a9b6-2419-4f04-b35e-ba296d70b705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f97e00dbff0bc7856400366616b2dfea1022f2dc151e8f1d7cbc5d377639b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eecea89afe25c537de66df075217db267c79f45aecda1f91183fad854aa80c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec868806819a987b7e96dbe674677a084fd3c8bbb3bdae6dd584d62df43b78f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac368c1c0799b3fe1830e01a310247ed45949ab217cf83d98b487a12e482157e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d6b21d85be97a07f83c996deca344aba3e2bd0b249d1ce02f15db105649bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15885cccea72f90cf3317dca9a65041611aa4b8de069779e15d43ad39112f1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af9d1f7638ecbd19ef127f9dedcf9c618013f2e6
cbd661173a0eead07c7023a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91ab55c28499edc33ffdd00414823b7319531fbdf5edc0bc8d28cb3ad30265ae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T13:31:12Z\\\",\\\"message\\\":\\\"\\\\nI0109 13:31:11.711533 6593 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0109 13:31:11.711539 6593 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0109 13:31:11.711614 6593 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0109 13:31:11.712633 6593 handler.go:208] Removed *v1.Node event handler 2\\\\nI0109 13:31:11.712650 6593 handler.go:208] Removed *v1.Node event handler 7\\\\nI0109 13:31:11.712661 6593 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0109 13:31:11.712738 6593 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0109 13:31:11.712799 6593 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0109 13:31:11.712866 6593 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0109 13:31:11.712926 6593 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0109 13:31:11.712950 6593 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0109 13:31:11.712951 6593 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0109 13:31:11.712930 6593 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0109 13:31:11.712998 6593 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0109 13:31:11.713041 6593 factory.go:656] Stopping watch factory\\\\nI0109 13:31:11.713073 6593 ovnkube.go:599] Stopped ovnkube\\\\nI0109 13\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:31:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af9d1f7638ecbd19ef127f9dedcf9c618013f2e6cbd661173a0eead07c7023a9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T13:31:42Z\\\",\\\"message\\\":\\\"ort:2379, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.253\\\\\\\", Port:9979, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0109 13:31:42.322963 7017 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:42Z is after 
2025-08-24T17:21:41Z]\\\\nI0109 13:31:42.323058 7017 services_controller.go:452] Built service openshift-etcd/etcd per-node LB for network=default: []services.LB{}\\\\nI0109 13:31:42.323036 7017 model_client.go:382] Up\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:31:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1618a6491a924e7cd28340bea333585a7c7a634ca063183f21574ed23df002cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"init
ContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w74hl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:43Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:43 crc kubenswrapper[4919]: I0109 13:31:43.839616 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:43 crc kubenswrapper[4919]: I0109 13:31:43.839676 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:43 crc kubenswrapper[4919]: I0109 13:31:43.839686 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:43 crc kubenswrapper[4919]: I0109 13:31:43.839744 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:43 crc kubenswrapper[4919]: I0109 13:31:43.839757 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:43Z","lastTransitionTime":"2026-01-09T13:31:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:43 crc kubenswrapper[4919]: I0109 13:31:43.855036 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bzs4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd1e83cb-5a48-4331-b403-d7a07e8aa67f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://389ab6b02eef188a2713e27d93c70a64a1ab4ebcfe58634cf5266197b0375bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kd6n5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bzs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:43Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:43 crc kubenswrapper[4919]: I0109 13:31:43.943472 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:43 crc kubenswrapper[4919]: I0109 13:31:43.943548 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:43 crc kubenswrapper[4919]: I0109 13:31:43.943602 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:43 crc kubenswrapper[4919]: I0109 13:31:43.943636 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:43 crc kubenswrapper[4919]: I0109 13:31:43.943660 4919 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:43Z","lastTransitionTime":"2026-01-09T13:31:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:44 crc kubenswrapper[4919]: I0109 13:31:44.047680 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:44 crc kubenswrapper[4919]: I0109 13:31:44.047749 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:44 crc kubenswrapper[4919]: I0109 13:31:44.047768 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:44 crc kubenswrapper[4919]: I0109 13:31:44.047796 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:44 crc kubenswrapper[4919]: I0109 13:31:44.047816 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:44Z","lastTransitionTime":"2026-01-09T13:31:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:44 crc kubenswrapper[4919]: I0109 13:31:44.151577 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:44 crc kubenswrapper[4919]: I0109 13:31:44.151641 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:44 crc kubenswrapper[4919]: I0109 13:31:44.151692 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:44 crc kubenswrapper[4919]: I0109 13:31:44.151721 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:44 crc kubenswrapper[4919]: I0109 13:31:44.151740 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:44Z","lastTransitionTime":"2026-01-09T13:31:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:44 crc kubenswrapper[4919]: I0109 13:31:44.254816 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:44 crc kubenswrapper[4919]: I0109 13:31:44.254877 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:44 crc kubenswrapper[4919]: I0109 13:31:44.254898 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:44 crc kubenswrapper[4919]: I0109 13:31:44.254928 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:44 crc kubenswrapper[4919]: I0109 13:31:44.254951 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:44Z","lastTransitionTime":"2026-01-09T13:31:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:44 crc kubenswrapper[4919]: I0109 13:31:44.360443 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:44 crc kubenswrapper[4919]: I0109 13:31:44.361129 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:44 crc kubenswrapper[4919]: I0109 13:31:44.361153 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:44 crc kubenswrapper[4919]: I0109 13:31:44.361178 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:44 crc kubenswrapper[4919]: I0109 13:31:44.361192 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:44Z","lastTransitionTime":"2026-01-09T13:31:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:44 crc kubenswrapper[4919]: I0109 13:31:44.464075 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:44 crc kubenswrapper[4919]: I0109 13:31:44.464115 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:44 crc kubenswrapper[4919]: I0109 13:31:44.464126 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:44 crc kubenswrapper[4919]: I0109 13:31:44.464143 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:44 crc kubenswrapper[4919]: I0109 13:31:44.464156 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:44Z","lastTransitionTime":"2026-01-09T13:31:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:44 crc kubenswrapper[4919]: I0109 13:31:44.467090 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w74hl_4a11a9b6-2419-4f04-b35e-ba296d70b705/ovnkube-controller/3.log" Jan 09 13:31:44 crc kubenswrapper[4919]: I0109 13:31:44.567807 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:44 crc kubenswrapper[4919]: I0109 13:31:44.567909 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:44 crc kubenswrapper[4919]: I0109 13:31:44.567930 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:44 crc kubenswrapper[4919]: I0109 13:31:44.567960 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:44 crc kubenswrapper[4919]: I0109 13:31:44.567982 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:44Z","lastTransitionTime":"2026-01-09T13:31:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:44 crc kubenswrapper[4919]: I0109 13:31:44.671870 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:44 crc kubenswrapper[4919]: I0109 13:31:44.671954 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:44 crc kubenswrapper[4919]: I0109 13:31:44.671973 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:44 crc kubenswrapper[4919]: I0109 13:31:44.672000 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:44 crc kubenswrapper[4919]: I0109 13:31:44.672023 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:44Z","lastTransitionTime":"2026-01-09T13:31:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:44 crc kubenswrapper[4919]: I0109 13:31:44.751409 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 13:31:44 crc kubenswrapper[4919]: E0109 13:31:44.751619 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 13:31:44 crc kubenswrapper[4919]: I0109 13:31:44.774169 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:44 crc kubenswrapper[4919]: I0109 13:31:44.774235 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:44 crc kubenswrapper[4919]: I0109 13:31:44.774251 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:44 crc kubenswrapper[4919]: I0109 13:31:44.774271 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:44 crc kubenswrapper[4919]: I0109 13:31:44.774284 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:44Z","lastTransitionTime":"2026-01-09T13:31:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:44 crc kubenswrapper[4919]: I0109 13:31:44.877089 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:44 crc kubenswrapper[4919]: I0109 13:31:44.877131 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:44 crc kubenswrapper[4919]: I0109 13:31:44.877141 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:44 crc kubenswrapper[4919]: I0109 13:31:44.877156 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:44 crc kubenswrapper[4919]: I0109 13:31:44.877166 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:44Z","lastTransitionTime":"2026-01-09T13:31:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:44 crc kubenswrapper[4919]: I0109 13:31:44.979485 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:44 crc kubenswrapper[4919]: I0109 13:31:44.979517 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:44 crc kubenswrapper[4919]: I0109 13:31:44.979528 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:44 crc kubenswrapper[4919]: I0109 13:31:44.979542 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:44 crc kubenswrapper[4919]: I0109 13:31:44.979551 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:44Z","lastTransitionTime":"2026-01-09T13:31:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:45 crc kubenswrapper[4919]: I0109 13:31:45.082985 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:45 crc kubenswrapper[4919]: I0109 13:31:45.083064 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:45 crc kubenswrapper[4919]: I0109 13:31:45.083092 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:45 crc kubenswrapper[4919]: I0109 13:31:45.083122 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:45 crc kubenswrapper[4919]: I0109 13:31:45.083147 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:45Z","lastTransitionTime":"2026-01-09T13:31:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:45 crc kubenswrapper[4919]: I0109 13:31:45.186462 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:45 crc kubenswrapper[4919]: I0109 13:31:45.186512 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:45 crc kubenswrapper[4919]: I0109 13:31:45.186522 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:45 crc kubenswrapper[4919]: I0109 13:31:45.186540 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:45 crc kubenswrapper[4919]: I0109 13:31:45.186550 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:45Z","lastTransitionTime":"2026-01-09T13:31:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:45 crc kubenswrapper[4919]: I0109 13:31:45.290063 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:45 crc kubenswrapper[4919]: I0109 13:31:45.290109 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:45 crc kubenswrapper[4919]: I0109 13:31:45.290118 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:45 crc kubenswrapper[4919]: I0109 13:31:45.290135 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:45 crc kubenswrapper[4919]: I0109 13:31:45.290146 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:45Z","lastTransitionTime":"2026-01-09T13:31:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:45 crc kubenswrapper[4919]: I0109 13:31:45.393569 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:45 crc kubenswrapper[4919]: I0109 13:31:45.393636 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:45 crc kubenswrapper[4919]: I0109 13:31:45.393654 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:45 crc kubenswrapper[4919]: I0109 13:31:45.393684 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:45 crc kubenswrapper[4919]: I0109 13:31:45.393704 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:45Z","lastTransitionTime":"2026-01-09T13:31:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:45 crc kubenswrapper[4919]: I0109 13:31:45.496555 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:45 crc kubenswrapper[4919]: I0109 13:31:45.496636 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:45 crc kubenswrapper[4919]: I0109 13:31:45.496656 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:45 crc kubenswrapper[4919]: I0109 13:31:45.496689 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:45 crc kubenswrapper[4919]: I0109 13:31:45.496709 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:45Z","lastTransitionTime":"2026-01-09T13:31:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:45 crc kubenswrapper[4919]: I0109 13:31:45.600816 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:45 crc kubenswrapper[4919]: I0109 13:31:45.600878 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:45 crc kubenswrapper[4919]: I0109 13:31:45.600902 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:45 crc kubenswrapper[4919]: I0109 13:31:45.600933 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:45 crc kubenswrapper[4919]: I0109 13:31:45.600953 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:45Z","lastTransitionTime":"2026-01-09T13:31:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:45 crc kubenswrapper[4919]: I0109 13:31:45.703876 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:45 crc kubenswrapper[4919]: I0109 13:31:45.703939 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:45 crc kubenswrapper[4919]: I0109 13:31:45.703960 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:45 crc kubenswrapper[4919]: I0109 13:31:45.703991 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:45 crc kubenswrapper[4919]: I0109 13:31:45.704010 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:45Z","lastTransitionTime":"2026-01-09T13:31:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:45 crc kubenswrapper[4919]: I0109 13:31:45.751403 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xkhdz" Jan 09 13:31:45 crc kubenswrapper[4919]: I0109 13:31:45.751643 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 13:31:45 crc kubenswrapper[4919]: I0109 13:31:45.751410 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 13:31:45 crc kubenswrapper[4919]: E0109 13:31:45.751966 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xkhdz" podUID="7a2e9878-6b0e-4328-a3ca-9f828fb105c9" Jan 09 13:31:45 crc kubenswrapper[4919]: E0109 13:31:45.752138 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 13:31:45 crc kubenswrapper[4919]: E0109 13:31:45.752350 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 13:31:45 crc kubenswrapper[4919]: I0109 13:31:45.807104 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:45 crc kubenswrapper[4919]: I0109 13:31:45.807165 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:45 crc kubenswrapper[4919]: I0109 13:31:45.807192 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:45 crc kubenswrapper[4919]: I0109 13:31:45.807253 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:45 crc kubenswrapper[4919]: I0109 13:31:45.807268 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:45Z","lastTransitionTime":"2026-01-09T13:31:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:45 crc kubenswrapper[4919]: I0109 13:31:45.910775 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:45 crc kubenswrapper[4919]: I0109 13:31:45.910858 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:45 crc kubenswrapper[4919]: I0109 13:31:45.910871 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:45 crc kubenswrapper[4919]: I0109 13:31:45.910889 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:45 crc kubenswrapper[4919]: I0109 13:31:45.910902 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:45Z","lastTransitionTime":"2026-01-09T13:31:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:46 crc kubenswrapper[4919]: I0109 13:31:46.013640 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:46 crc kubenswrapper[4919]: I0109 13:31:46.013703 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:46 crc kubenswrapper[4919]: I0109 13:31:46.013722 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:46 crc kubenswrapper[4919]: I0109 13:31:46.013749 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:46 crc kubenswrapper[4919]: I0109 13:31:46.013769 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:46Z","lastTransitionTime":"2026-01-09T13:31:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:46 crc kubenswrapper[4919]: I0109 13:31:46.117421 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:46 crc kubenswrapper[4919]: I0109 13:31:46.117482 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:46 crc kubenswrapper[4919]: I0109 13:31:46.117500 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:46 crc kubenswrapper[4919]: I0109 13:31:46.117528 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:46 crc kubenswrapper[4919]: I0109 13:31:46.117552 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:46Z","lastTransitionTime":"2026-01-09T13:31:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:46 crc kubenswrapper[4919]: I0109 13:31:46.221326 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:46 crc kubenswrapper[4919]: I0109 13:31:46.221421 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:46 crc kubenswrapper[4919]: I0109 13:31:46.221442 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:46 crc kubenswrapper[4919]: I0109 13:31:46.221474 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:46 crc kubenswrapper[4919]: I0109 13:31:46.221492 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:46Z","lastTransitionTime":"2026-01-09T13:31:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:46 crc kubenswrapper[4919]: I0109 13:31:46.324900 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:46 crc kubenswrapper[4919]: I0109 13:31:46.324972 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:46 crc kubenswrapper[4919]: I0109 13:31:46.324995 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:46 crc kubenswrapper[4919]: I0109 13:31:46.325028 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:46 crc kubenswrapper[4919]: I0109 13:31:46.325050 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:46Z","lastTransitionTime":"2026-01-09T13:31:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:46 crc kubenswrapper[4919]: I0109 13:31:46.428353 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:46 crc kubenswrapper[4919]: I0109 13:31:46.428417 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:46 crc kubenswrapper[4919]: I0109 13:31:46.428434 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:46 crc kubenswrapper[4919]: I0109 13:31:46.428459 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:46 crc kubenswrapper[4919]: I0109 13:31:46.428477 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:46Z","lastTransitionTime":"2026-01-09T13:31:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:46 crc kubenswrapper[4919]: I0109 13:31:46.531511 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:46 crc kubenswrapper[4919]: I0109 13:31:46.531596 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:46 crc kubenswrapper[4919]: I0109 13:31:46.531617 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:46 crc kubenswrapper[4919]: I0109 13:31:46.531650 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:46 crc kubenswrapper[4919]: I0109 13:31:46.531671 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:46Z","lastTransitionTime":"2026-01-09T13:31:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:46 crc kubenswrapper[4919]: I0109 13:31:46.635587 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:46 crc kubenswrapper[4919]: I0109 13:31:46.635639 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:46 crc kubenswrapper[4919]: I0109 13:31:46.635656 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:46 crc kubenswrapper[4919]: I0109 13:31:46.635683 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:46 crc kubenswrapper[4919]: I0109 13:31:46.635704 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:46Z","lastTransitionTime":"2026-01-09T13:31:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:46 crc kubenswrapper[4919]: I0109 13:31:46.739407 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:46 crc kubenswrapper[4919]: I0109 13:31:46.739492 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:46 crc kubenswrapper[4919]: I0109 13:31:46.739506 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:46 crc kubenswrapper[4919]: I0109 13:31:46.739540 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:46 crc kubenswrapper[4919]: I0109 13:31:46.739565 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:46Z","lastTransitionTime":"2026-01-09T13:31:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:46 crc kubenswrapper[4919]: I0109 13:31:46.750827 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 13:31:46 crc kubenswrapper[4919]: E0109 13:31:46.750996 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 13:31:46 crc kubenswrapper[4919]: I0109 13:31:46.842491 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:46 crc kubenswrapper[4919]: I0109 13:31:46.842522 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:46 crc kubenswrapper[4919]: I0109 13:31:46.842532 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:46 crc kubenswrapper[4919]: I0109 13:31:46.842550 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:46 crc kubenswrapper[4919]: I0109 13:31:46.842561 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:46Z","lastTransitionTime":"2026-01-09T13:31:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:46 crc kubenswrapper[4919]: I0109 13:31:46.946347 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:46 crc kubenswrapper[4919]: I0109 13:31:46.946445 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:46 crc kubenswrapper[4919]: I0109 13:31:46.946465 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:46 crc kubenswrapper[4919]: I0109 13:31:46.946494 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:46 crc kubenswrapper[4919]: I0109 13:31:46.946515 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:46Z","lastTransitionTime":"2026-01-09T13:31:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:47 crc kubenswrapper[4919]: I0109 13:31:47.049302 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:47 crc kubenswrapper[4919]: I0109 13:31:47.049380 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:47 crc kubenswrapper[4919]: I0109 13:31:47.049400 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:47 crc kubenswrapper[4919]: I0109 13:31:47.049429 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:47 crc kubenswrapper[4919]: I0109 13:31:47.049450 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:47Z","lastTransitionTime":"2026-01-09T13:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:47 crc kubenswrapper[4919]: I0109 13:31:47.152596 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:47 crc kubenswrapper[4919]: I0109 13:31:47.152667 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:47 crc kubenswrapper[4919]: I0109 13:31:47.152687 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:47 crc kubenswrapper[4919]: I0109 13:31:47.152719 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:47 crc kubenswrapper[4919]: I0109 13:31:47.152738 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:47Z","lastTransitionTime":"2026-01-09T13:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:47 crc kubenswrapper[4919]: I0109 13:31:47.256467 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:47 crc kubenswrapper[4919]: I0109 13:31:47.256561 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:47 crc kubenswrapper[4919]: I0109 13:31:47.256585 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:47 crc kubenswrapper[4919]: I0109 13:31:47.256621 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:47 crc kubenswrapper[4919]: I0109 13:31:47.256645 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:47Z","lastTransitionTime":"2026-01-09T13:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:47 crc kubenswrapper[4919]: I0109 13:31:47.359705 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:47 crc kubenswrapper[4919]: I0109 13:31:47.359753 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:47 crc kubenswrapper[4919]: I0109 13:31:47.359765 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:47 crc kubenswrapper[4919]: I0109 13:31:47.359784 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:47 crc kubenswrapper[4919]: I0109 13:31:47.359797 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:47Z","lastTransitionTime":"2026-01-09T13:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:47 crc kubenswrapper[4919]: I0109 13:31:47.462439 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:47 crc kubenswrapper[4919]: I0109 13:31:47.462519 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:47 crc kubenswrapper[4919]: I0109 13:31:47.462538 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:47 crc kubenswrapper[4919]: I0109 13:31:47.462565 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:47 crc kubenswrapper[4919]: I0109 13:31:47.462585 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:47Z","lastTransitionTime":"2026-01-09T13:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:47 crc kubenswrapper[4919]: I0109 13:31:47.566939 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:47 crc kubenswrapper[4919]: I0109 13:31:47.567012 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:47 crc kubenswrapper[4919]: I0109 13:31:47.567032 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:47 crc kubenswrapper[4919]: I0109 13:31:47.567062 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:47 crc kubenswrapper[4919]: I0109 13:31:47.567086 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:47Z","lastTransitionTime":"2026-01-09T13:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 09 13:31:47 crc kubenswrapper[4919]: I0109 13:31:47.670852 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:31:47 crc kubenswrapper[4919]: I0109 13:31:47.670944 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:31:47 crc kubenswrapper[4919]: I0109 13:31:47.670971 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:31:47 crc kubenswrapper[4919]: I0109 13:31:47.671007 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:31:47 crc kubenswrapper[4919]: I0109 13:31:47.671033 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:47Z","lastTransitionTime":"2026-01-09T13:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:31:47 crc kubenswrapper[4919]: I0109 13:31:47.750989 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xkhdz"
Jan 09 13:31:47 crc kubenswrapper[4919]: I0109 13:31:47.751046 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 09 13:31:47 crc kubenswrapper[4919]: I0109 13:31:47.751055 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 09 13:31:47 crc kubenswrapper[4919]: E0109 13:31:47.751188 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xkhdz" podUID="7a2e9878-6b0e-4328-a3ca-9f828fb105c9"
Jan 09 13:31:47 crc kubenswrapper[4919]: E0109 13:31:47.751400 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 09 13:31:47 crc kubenswrapper[4919]: E0109 13:31:47.751619 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 13:31:47 crc kubenswrapper[4919]: I0109 13:31:47.774473 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:47 crc kubenswrapper[4919]: I0109 13:31:47.774526 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:47 crc kubenswrapper[4919]: I0109 13:31:47.774535 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:47 crc kubenswrapper[4919]: I0109 13:31:47.774553 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:47 crc kubenswrapper[4919]: I0109 13:31:47.774567 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:47Z","lastTransitionTime":"2026-01-09T13:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:47 crc kubenswrapper[4919]: I0109 13:31:47.877714 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:47 crc kubenswrapper[4919]: I0109 13:31:47.877772 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:47 crc kubenswrapper[4919]: I0109 13:31:47.877794 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:47 crc kubenswrapper[4919]: I0109 13:31:47.877819 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:47 crc kubenswrapper[4919]: I0109 13:31:47.877838 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:47Z","lastTransitionTime":"2026-01-09T13:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:47 crc kubenswrapper[4919]: I0109 13:31:47.980922 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:47 crc kubenswrapper[4919]: I0109 13:31:47.980967 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:47 crc kubenswrapper[4919]: I0109 13:31:47.980987 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:47 crc kubenswrapper[4919]: I0109 13:31:47.981010 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:47 crc kubenswrapper[4919]: I0109 13:31:47.981027 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:47Z","lastTransitionTime":"2026-01-09T13:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.084120 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.084197 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.084258 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.084296 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.084320 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:48Z","lastTransitionTime":"2026-01-09T13:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.187340 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.187416 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.187442 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.187476 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.187500 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:48Z","lastTransitionTime":"2026-01-09T13:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.290421 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.290527 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.290552 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.290636 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.290666 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:48Z","lastTransitionTime":"2026-01-09T13:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.394633 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.394706 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.394731 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.394763 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.394793 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:48Z","lastTransitionTime":"2026-01-09T13:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.497779 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.497825 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.497842 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.497860 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.497873 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:48Z","lastTransitionTime":"2026-01-09T13:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.567308 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.567447 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 09 13:31:48 crc kubenswrapper[4919]: E0109 13:31:48.567603 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:52.567551011 +0000 UTC m=+152.115390471 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 09 13:31:48 crc kubenswrapper[4919]: E0109 13:31:48.567648 4919 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.567738 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 09 13:31:48 crc kubenswrapper[4919]: E0109 13:31:48.567750 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-09 13:32:52.567724765 +0000 UTC m=+152.115564245 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 09 13:31:48 crc kubenswrapper[4919]: E0109 13:31:48.568018 4919 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 09 13:31:48 crc kubenswrapper[4919]: E0109 13:31:48.568134 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-09 13:32:52.568111935 +0000 UTC m=+152.115951425 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.600625 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.600691 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.600712 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.600740 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.600762 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:48Z","lastTransitionTime":"2026-01-09T13:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.616419 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.616500 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.616523 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.616551 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.616571 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:48Z","lastTransitionTime":"2026-01-09T13:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:31:48 crc kubenswrapper[4919]: E0109 13:31:48.636014 4919 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a043b745-924a-464c-80aa-f4df877f55bf\\\",\\\"systemUUID\\\":\\\"4cea77be-9aeb-4181-a0b4-b60e5a362fd9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:48Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.644431 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.644516 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.644546 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.644577 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.644607 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:48Z","lastTransitionTime":"2026-01-09T13:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:48 crc kubenswrapper[4919]: E0109 13:31:48.668196 4919 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
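Every one of these status-patch retries fails for the same reason given at the tail of the payload: the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 serves a certificate that expired 2025-08-24T17:21:41Z, while the node clock reads 2026-01-09T13:31:48Z. A minimal Go sketch that reproduces the check from the node itself (diagnostic only, not part of kubelet; the address is taken from the Post URL in the error):

package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// Dial the webhook endpoint named in the log and read its serving
	// certificate without verifying it (an expired certificate would
	// otherwise abort the handshake, as it does for kubelet above).
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Println("subject:  ", cert.Subject)
	fmt.Println("notBefore:", cert.NotBefore.Format(time.RFC3339))
	fmt.Println("notAfter: ", cert.NotAfter.Format(time.RFC3339)) // log says 2025-08-24T17:21:41Z
	fmt.Println("expired:  ", time.Now().After(cert.NotAfter))
}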
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a043b745-924a-464c-80aa-f4df877f55bf\\\",\\\"systemUUID\\\":\\\"4cea77be-9aeb-4181-a0b4-b60e5a362fd9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:48Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.668477 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: 
\"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.668615 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 13:31:48 crc kubenswrapper[4919]: E0109 13:31:48.668864 4919 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 09 13:31:48 crc kubenswrapper[4919]: E0109 13:31:48.668909 4919 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 09 13:31:48 crc kubenswrapper[4919]: E0109 13:31:48.668936 4919 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 13:31:48 crc kubenswrapper[4919]: E0109 13:31:48.668941 4919 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 09 13:31:48 crc kubenswrapper[4919]: E0109 13:31:48.669004 4919 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 09 13:31:48 crc kubenswrapper[4919]: E0109 13:31:48.669024 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-09 13:32:52.66899085 +0000 UTC m=+152.216830340 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 13:31:48 crc kubenswrapper[4919]: E0109 13:31:48.669037 4919 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 13:31:48 crc kubenswrapper[4919]: E0109 13:31:48.669175 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-09 13:32:52.669130934 +0000 UTC m=+152.216970424 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] [entries 13:31:48.673740-13:31:48.673898 repeat the NodeHasSufficientMemory/NodeHasNoDiskPressure/NodeHasSufficientPID/NodeNotReady event group and the "Node became not ready" condition verbatim; omitted] Jan 09 13:31:48 crc kubenswrapper[4919]: E0109 13:31:48.695891 4919 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [... payload and webhook certificate error identical to the 13:31:48.636014 entry above; omitted ...]"
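The nestedpendingoperations entries above defer each failed mount by durationBeforeRetry 1m4s, which is why every retry stamp reads 13:32:52 (13:31:48 + 64s). A 1m4s wait is consistent with a per-operation backoff that doubles on each failure; in the sketch below, the 500ms starting delay and the 2m2s cap are assumptions borrowed from upstream kubelet defaults, not values present in this log:

package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 500 * time.Millisecond           // assumed initial backoff
	maxDelay := 2*time.Minute + 2*time.Second // assumed cap
	for n := 1; delay <= maxDelay; n++ {
		fmt.Printf("failure %2d -> durationBeforeRetry %v\n", n, delay)
		if delay == 64*time.Second {
			fmt.Println("           ^ the 1m4s seen in this log")
		}
		delay *= 2 // double after every failed attempt
	}
}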
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a043b745-924a-464c-80aa-f4df877f55bf\\\",\\\"systemUUID\\\":\\\"4cea77be-9aeb-4181-a0b4-b60e5a362fd9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:48Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.702452 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.702533 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
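Each "Node became not ready" entry embeds the Ready condition as a JSON object. A small Go sketch that unmarshals one payload copied verbatim from the log, just to spell out the fields kubelet is reporting (Ready stays False until a CNI config appears under /etc/kubernetes/cni/net.d/):

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// NodeCondition mirrors only the fields present in the logged payload.
type NodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Condition JSON copied from the setters.go:603 entries above.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:48Z","lastTransitionTime":"2026-01-09T13:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`

	var c NodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s=%s (%s): %s\n", c.Type, c.Status, c.Reason, c.Message)
}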
event="NodeHasNoDiskPressure" Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.702559 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.702593 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.702620 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:48Z","lastTransitionTime":"2026-01-09T13:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:48 crc kubenswrapper[4919]: E0109 13:31:48.726651 4919 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a043b745-924a-464c-80aa-f4df877f55bf\\\",\\\"systemUUID\\\":\\\"4cea77be-9aeb-4181-a0b4-b60e5a362fd9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:48Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.733431 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.733505 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.733525 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.733567 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.733590 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:48Z","lastTransitionTime":"2026-01-09T13:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.751432 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 13:31:48 crc kubenswrapper[4919]: E0109 13:31:48.751646 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 13:31:48 crc kubenswrapper[4919]: E0109 13:31:48.754052 4919 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a043b745-924a-464c-80aa-f4df877f55bf\\\",\\\"systemUUID\\\":\\\"4cea77be-9aeb-4181-a0b4-b60e5a362fd9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:48Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:48 crc kubenswrapper[4919]: E0109 13:31:48.754304 4919 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.756825 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.756918 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.756941 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.756967 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.756985 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:48Z","lastTransitionTime":"2026-01-09T13:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.861536 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.862109 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.862131 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.862163 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.862186 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:48Z","lastTransitionTime":"2026-01-09T13:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.964807 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.964871 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.964893 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.964922 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:48 crc kubenswrapper[4919]: I0109 13:31:48.964941 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:48Z","lastTransitionTime":"2026-01-09T13:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:49 crc kubenswrapper[4919]: I0109 13:31:49.068674 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:49 crc kubenswrapper[4919]: I0109 13:31:49.068734 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:49 crc kubenswrapper[4919]: I0109 13:31:49.068745 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:49 crc kubenswrapper[4919]: I0109 13:31:49.068768 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:49 crc kubenswrapper[4919]: I0109 13:31:49.068791 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:49Z","lastTransitionTime":"2026-01-09T13:31:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:49 crc kubenswrapper[4919]: I0109 13:31:49.173159 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:49 crc kubenswrapper[4919]: I0109 13:31:49.173284 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:49 crc kubenswrapper[4919]: I0109 13:31:49.173313 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:49 crc kubenswrapper[4919]: I0109 13:31:49.173350 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:49 crc kubenswrapper[4919]: I0109 13:31:49.173377 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:49Z","lastTransitionTime":"2026-01-09T13:31:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:49 crc kubenswrapper[4919]: I0109 13:31:49.276768 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:49 crc kubenswrapper[4919]: I0109 13:31:49.276860 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:49 crc kubenswrapper[4919]: I0109 13:31:49.276879 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:49 crc kubenswrapper[4919]: I0109 13:31:49.276912 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:49 crc kubenswrapper[4919]: I0109 13:31:49.276933 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:49Z","lastTransitionTime":"2026-01-09T13:31:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:49 crc kubenswrapper[4919]: I0109 13:31:49.381697 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:49 crc kubenswrapper[4919]: I0109 13:31:49.381774 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:49 crc kubenswrapper[4919]: I0109 13:31:49.381791 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:49 crc kubenswrapper[4919]: I0109 13:31:49.381819 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:49 crc kubenswrapper[4919]: I0109 13:31:49.381840 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:49Z","lastTransitionTime":"2026-01-09T13:31:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:49 crc kubenswrapper[4919]: I0109 13:31:49.485861 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:49 crc kubenswrapper[4919]: I0109 13:31:49.485934 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:49 crc kubenswrapper[4919]: I0109 13:31:49.485953 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:49 crc kubenswrapper[4919]: I0109 13:31:49.485984 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:49 crc kubenswrapper[4919]: I0109 13:31:49.486009 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:49Z","lastTransitionTime":"2026-01-09T13:31:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:49 crc kubenswrapper[4919]: I0109 13:31:49.589663 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:49 crc kubenswrapper[4919]: I0109 13:31:49.589727 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:49 crc kubenswrapper[4919]: I0109 13:31:49.589746 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:49 crc kubenswrapper[4919]: I0109 13:31:49.589770 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:49 crc kubenswrapper[4919]: I0109 13:31:49.589788 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:49Z","lastTransitionTime":"2026-01-09T13:31:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:49 crc kubenswrapper[4919]: I0109 13:31:49.694579 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:49 crc kubenswrapper[4919]: I0109 13:31:49.694679 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:49 crc kubenswrapper[4919]: I0109 13:31:49.694704 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:49 crc kubenswrapper[4919]: I0109 13:31:49.694733 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:49 crc kubenswrapper[4919]: I0109 13:31:49.694751 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:49Z","lastTransitionTime":"2026-01-09T13:31:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:49 crc kubenswrapper[4919]: I0109 13:31:49.751411 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xkhdz" Jan 09 13:31:49 crc kubenswrapper[4919]: I0109 13:31:49.751504 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 13:31:49 crc kubenswrapper[4919]: E0109 13:31:49.751597 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xkhdz" podUID="7a2e9878-6b0e-4328-a3ca-9f828fb105c9" Jan 09 13:31:49 crc kubenswrapper[4919]: E0109 13:31:49.751725 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 13:31:49 crc kubenswrapper[4919]: I0109 13:31:49.751837 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 13:31:49 crc kubenswrapper[4919]: E0109 13:31:49.752493 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 13:31:49 crc kubenswrapper[4919]: I0109 13:31:49.773274 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 09 13:31:49 crc kubenswrapper[4919]: I0109 13:31:49.798168 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:49 crc kubenswrapper[4919]: I0109 13:31:49.798288 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:49 crc kubenswrapper[4919]: I0109 13:31:49.798321 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:49 crc kubenswrapper[4919]: I0109 13:31:49.798505 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:49 crc kubenswrapper[4919]: I0109 13:31:49.798535 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:49Z","lastTransitionTime":"2026-01-09T13:31:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:49 crc kubenswrapper[4919]: I0109 13:31:49.901695 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:49 crc kubenswrapper[4919]: I0109 13:31:49.901800 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:49 crc kubenswrapper[4919]: I0109 13:31:49.901826 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:49 crc kubenswrapper[4919]: I0109 13:31:49.901856 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:49 crc kubenswrapper[4919]: I0109 13:31:49.901884 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:49Z","lastTransitionTime":"2026-01-09T13:31:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.004878 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.004926 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.004935 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.004952 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.004975 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:50Z","lastTransitionTime":"2026-01-09T13:31:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.108367 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.108448 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.108481 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.108511 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.108528 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:50Z","lastTransitionTime":"2026-01-09T13:31:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.214431 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.214500 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.214520 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.214579 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.214599 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:50Z","lastTransitionTime":"2026-01-09T13:31:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.317905 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.317967 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.317985 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.318017 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.318036 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:50Z","lastTransitionTime":"2026-01-09T13:31:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.420823 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.420868 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.420882 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.420898 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.420909 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:50Z","lastTransitionTime":"2026-01-09T13:31:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.524398 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.524431 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.524443 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.524457 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.524470 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:50Z","lastTransitionTime":"2026-01-09T13:31:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.627715 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.627810 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.627829 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.627890 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.627909 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:50Z","lastTransitionTime":"2026-01-09T13:31:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.731708 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.731818 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.731854 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.731888 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.731917 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:50Z","lastTransitionTime":"2026-01-09T13:31:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.750992 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 13:31:50 crc kubenswrapper[4919]: E0109 13:31:50.751302 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.770733 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgw8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dd4aa1459db1d095dd8a4d538ce3dc77e934eaaa815c7b700de8ee6ae8cc25a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3890ccf2af2b3f8c6648e7b36c716d7b9ef45d6821c5c7162c455641399184a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T13:31:32Z\\\",\\\"message\\\":\\\"2026-01-09T13:30:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6c242d58-4d9b-4293-8565-97eb1a2c9c17\\\\n2026-01-09T13:30:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6c242d58-4d9b-4293-8565-97eb1a2c9c17 to /host/opt/cni/bin/\\\\n2026-01-09T13:30:47Z [verbose] multus-daemon started\\\\n2026-01-09T13:30:47Z [verbose] Readiness Indicator file check\\\\n2026-01-09T13:31:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:31:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srz24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgw8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:50Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.786766 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xkhdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a2e9878-6b0e-4328-a3ca-9f828fb105c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:31:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xkhdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:50Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.804578 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2d79db8-b1e1-43cb-b39f-aea72914778d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d03488cb3bf92b2cf5ae2daac3b83d4925c14e6bbf4789a0ed00e4caf275a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a131a5c3b7ddf092cba3a77f0ed07915fd0d2145eae04906963ab88d015f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8a131a5c3b7ddf092cba3a77f0ed07915fd0d2145eae04906963ab88d015f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:50Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.825271 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a055b487-5d63-4265-ac12-735612354e73\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903fbe8270e684937c51c54aeaaaaf7cfed5e77f6d76d5874210ffcf608a9afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3759051eb43a6806b9525498f376abc060e88ba0d7bc9274e84eaabfdaefd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8db1dada305e10f10cb4fe92cfaa3fc88e2e2ec3338fa31efc240af0888f60c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b3f68aa173942f9d95a3832234e0bbd5ccacf63cd73b8d110708ebd0978a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d7857918228a0905bb39cd161f6ba21561cf49ce7980a1835e5b3cc68cef3c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T13:30:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0109 13:30:33.398307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 13:30:33.400005 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4019686595/tls.crt::/tmp/serving-cert-4019686595/tls.key\\\\\\\"\\\\nI0109 13:30:43.320927 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 13:30:43.328283 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 13:30:43.328361 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 13:30:43.328421 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 13:30:43.328435 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 13:30:43.338653 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 13:30:43.338748 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 13:30:43.338794 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 13:30:43.338801 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 13:30:43.338808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 13:30:43.339373 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 13:30:43.343716 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2d0c02e83be85aa5925b8f607809b6bbf3db3188f4e165ca42eb04fe694da6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:50Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.835272 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.835322 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.835336 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.835361 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.835378 4919 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:50Z","lastTransitionTime":"2026-01-09T13:31:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.845688 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9797b243-6d0f-4f8b-8b3d-b92ac439e3bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d15e612b4abcc61c356602fa521bd156a5e2f5b1e89bbf48b2bceac8a06fbca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d24ffabc3436ac75e2611506f1d4d40faed59e4fa4c618523275331408bb219d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae0a71cfd94d80d04efad2c5671e1a6422ee373da4fc7ab38e36198e3fcad96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5ef78571904b7b98f20f7eb25e83def8b64fc7e86e3c376ee2c6a00334e8667\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5ef78571904b7b98f20f7eb25e83def8b64fc7e86e3c376ee2c6a00334e8667\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:50Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.867404 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c494f87fb3dc8455ed5de9b4073f1b69d9d96c3b2a4d260cb5d57b9df1e825e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:50Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.887934 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6730e2d4c2375d62687a3919410fd7440dc63ec931c008941ab1882e625c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b529d70339efbd46e30e992ceb5afe2db97bdde02f50798c9eb61dfa23d7f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9m5lv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:50Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.907521 4919 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-9bzs4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd1e83cb-5a48-4331-b403-d7a07e8aa67f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://389ab6b02eef188a2713e27d93c70a64a1ab4ebcfe58634cf5266197b0375bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kd6n5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bzs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:50Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.925452 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9s49l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a361336-2125-49a9-8332-eb66286dcdb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://108cc929d3e1674b5cc9341c92e9d4f5142fc0d87212666efba8890341e8adc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-slm6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd519645b9635f304f7af4e5e832eff6ae2964b35ed15d918bae7b85b51c1de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-slm6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9s49l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:50Z is after 2025-08-24T17:21:41Z" Jan 09 
13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.939834 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.939945 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.940014 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.940093 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.940153 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:50Z","lastTransitionTime":"2026-01-09T13:31:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.953195 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70a3367-0b6c-464c-84c2-5ddc03627c0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b1c517e5ba5a7c13919a030e1df61e0a4cc5d89e2b80a2464484387a713d5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://744c9ccecaab78f62335d29db2d18fe4e64b26c28dcd365985f11db160641b70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9009
2272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cae117720dbdc97e6a913c5125978e3f4ec7f01dec42baab8b5fc74e2852db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e74fc6258740a4e5407f1d22189c536019faf85e5fc1c5b698938ceda3c5659f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a70c88bf2025bf78bf359717df98bdab692e5554a2a1a4146b228d7fbf5dee42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f58523d9d4832ebc703441bba8fda6beee24e80b7e364faea23c0c4275cd9c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f58523d9d4832ebc703441bba8fda6beee24e80b7e364faea23c0c4275cd9c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53b8c9deabab605617276a16ba1a63aedfe81246b0d97f575ceb0ecea929efa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53b8c9deabab605617276a16ba1a63aedfe81246b0d97f575ceb0ecea929efa7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://30cd0abf139e3111a44e517d28e6fd1b81a96a6481f8a9941361b10bc55da501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30cd0abf139e3111a44e517d28e6fd1b81a96a6481f8a9941361b10bc55da501\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:50Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.977192 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa212ada-bfb5-4aeb-94bb-acf1f0afe319\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a58a444435270a569f4e25649cbfb81bb34700db968dcba7d85b9fdea9006bc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://210c615c5bcdaca954329d575b66f271921f54cd58ff6d75a90d5fc64bd11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dbb1933e5d51eec9dc5963fca537d46986cb88929f55683de676da7f0636133\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6694df4fa2976e9e741fe48d2f16c0b63b29ba80500e4238fc0ff6d6dad53bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:50Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:50 crc kubenswrapper[4919]: I0109 13:31:50.995795 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:50Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:51 crc kubenswrapper[4919]: I0109 13:31:51.010792 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f66189f627ddedba6b2bcb45f145ef53485a9b0b7401ca5fa09363145fce81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c247b3e6a38c79833e097898da1fe92f3dabc7dcd7b71a618506f054f4fd9c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:51Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:51 crc kubenswrapper[4919]: I0109 13:31:51.043559 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:51 crc kubenswrapper[4919]: I0109 13:31:51.043636 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:51 crc kubenswrapper[4919]: I0109 13:31:51.043659 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:51 crc kubenswrapper[4919]: I0109 13:31:51.043692 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:51 crc kubenswrapper[4919]: I0109 13:31:51.043715 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:51Z","lastTransitionTime":"2026-01-09T13:31:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:51 crc kubenswrapper[4919]: I0109 13:31:51.045716 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a11a9b6-2419-4f04-b35e-ba296d70b705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f97e00dbff0bc7856400366616b2dfea1022f2dc151e8f1d7cbc5d377639b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eecea89afe25c537de66df075217db267c79f45aecda1f91183fad854aa80c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://ec868806819a987b7e96dbe674677a084fd3c8bbb3bdae6dd584d62df43b78f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac368c1c0799b3fe1830e01a310247ed45949ab217cf83d98b487a12e482157e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d6b21d85be97a07f83c996deca344aba3e2bd0b249d1ce02f15db105649bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15885cccea72f90cf3317dca9a65041611aa4b8de069779e15d43ad39112f1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af9d1f7638ecbd19ef127f9dedcf9c618013f2e6cbd661173a0eead07c7023a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91ab55c28499edc33ffdd00414823b7319531fbdf5edc0bc8d28cb3ad30265ae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T13:31:12Z\\\",\\\"message\\\":\\\"\\\\nI0109 13:31:11.711533 6593 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0109 13:31:11.711539 6593 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0109 13:31:11.711614 6593 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0109 13:31:11.712633 6593 handler.go:208] Removed *v1.Node event handler 2\\\\nI0109 13:31:11.712650 6593 handler.go:208] Removed *v1.Node event handler 7\\\\nI0109 13:31:11.712661 6593 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0109 13:31:11.712738 6593 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0109 13:31:11.712799 6593 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0109 13:31:11.712866 6593 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0109 13:31:11.712926 6593 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0109 13:31:11.712950 6593 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0109 13:31:11.712951 6593 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0109 13:31:11.712930 6593 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0109 13:31:11.712998 6593 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0109 13:31:11.713041 6593 factory.go:656] Stopping watch factory\\\\nI0109 13:31:11.713073 6593 ovnkube.go:599] Stopped ovnkube\\\\nI0109 
13\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:31:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af9d1f7638ecbd19ef127f9dedcf9c618013f2e6cbd661173a0eead07c7023a9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T13:31:42Z\\\",\\\"message\\\":\\\"ort:2379, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.253\\\\\\\", Port:9979, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0109 13:31:42.322963 7017 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:42Z is after 2025-08-24T17:21:41Z]\\\\nI0109 13:31:42.323058 7017 services_controller.go:452] Built service openshift-etcd/etcd per-node LB for network=default: []services.LB{}\\\\nI0109 13:31:42.323036 7017 model_client.go:382] 
Up\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:31:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1618a6491a924e7cd28340bea333585a7c7a634ca063183f21574ed23df002cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w74hl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:51Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:51 crc kubenswrapper[4919]: I0109 13:31:51.065845 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:51Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:51 crc kubenswrapper[4919]: I0109 13:31:51.088279 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d6c98ae6aa3041e47bac4782d73902dc39137b2fc8550efce479bfb9cdf37b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:51Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:51 crc kubenswrapper[4919]: I0109 13:31:51.107439 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:51Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:51 crc kubenswrapper[4919]: I0109 13:31:51.121531 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9z7cc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1115c0ba-16d5-4e81-a4b4-07ba7f360825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6522bc54695ade8637b22dbf9074c82430ce4c8deb8d5d9631933a1d49d5b4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkvnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9z7cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:51Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:51 crc kubenswrapper[4919]: I0109 13:31:51.139959 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-97zdz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21befbc8-9e98-4557-89af-a116cc8c484c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e336d165af12149ba05542f8cebe8a16c6fd66d3317d27c880b8969e7666691d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://458aeaca27fca436a56c5bb1f4180fc11cc5d27c53d33cd5b00f17e0e6f99322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458aeaca27fca436a56c5bb1f4180fc11cc5d27c53d33cd5b00f17e0e6f99322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3ea2572c59e697886d3666518ed78dcbe1bde1dd6fc6255e3846d1b34f26cdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ea2572c59e697886d3666518ed78dcbe1bde1dd6fc6255e3846d1b34f26cdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-97zdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:51Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:51 crc kubenswrapper[4919]: I0109 13:31:51.147184 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:51 crc kubenswrapper[4919]: I0109 13:31:51.147443 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:51 crc 
kubenswrapper[4919]: I0109 13:31:51.147628 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:51 crc kubenswrapper[4919]: I0109 13:31:51.147811 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:51 crc kubenswrapper[4919]: I0109 13:31:51.148003 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:51Z","lastTransitionTime":"2026-01-09T13:31:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:51 crc kubenswrapper[4919]: I0109 13:31:51.251123 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:51 crc kubenswrapper[4919]: I0109 13:31:51.251161 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:51 crc kubenswrapper[4919]: I0109 13:31:51.251171 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:51 crc kubenswrapper[4919]: I0109 13:31:51.251190 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:51 crc kubenswrapper[4919]: I0109 13:31:51.251201 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:51Z","lastTransitionTime":"2026-01-09T13:31:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:51 crc kubenswrapper[4919]: I0109 13:31:51.353548 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:51 crc kubenswrapper[4919]: I0109 13:31:51.353609 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:51 crc kubenswrapper[4919]: I0109 13:31:51.353627 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:51 crc kubenswrapper[4919]: I0109 13:31:51.353652 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:51 crc kubenswrapper[4919]: I0109 13:31:51.353671 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:51Z","lastTransitionTime":"2026-01-09T13:31:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:51 crc kubenswrapper[4919]: I0109 13:31:51.457137 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:51 crc kubenswrapper[4919]: I0109 13:31:51.457263 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:51 crc kubenswrapper[4919]: I0109 13:31:51.457287 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:51 crc kubenswrapper[4919]: I0109 13:31:51.457322 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:51 crc kubenswrapper[4919]: I0109 13:31:51.457344 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:51Z","lastTransitionTime":"2026-01-09T13:31:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:51 crc kubenswrapper[4919]: I0109 13:31:51.560667 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:51 crc kubenswrapper[4919]: I0109 13:31:51.560746 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:51 crc kubenswrapper[4919]: I0109 13:31:51.560765 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:51 crc kubenswrapper[4919]: I0109 13:31:51.560798 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:51 crc kubenswrapper[4919]: I0109 13:31:51.560820 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:51Z","lastTransitionTime":"2026-01-09T13:31:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:51 crc kubenswrapper[4919]: I0109 13:31:51.664918 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:51 crc kubenswrapper[4919]: I0109 13:31:51.665026 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:51 crc kubenswrapper[4919]: I0109 13:31:51.665051 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:51 crc kubenswrapper[4919]: I0109 13:31:51.665093 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:51 crc kubenswrapper[4919]: I0109 13:31:51.665121 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:51Z","lastTransitionTime":"2026-01-09T13:31:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:51 crc kubenswrapper[4919]: I0109 13:31:51.751125 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xkhdz" Jan 09 13:31:51 crc kubenswrapper[4919]: I0109 13:31:51.751333 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 13:31:51 crc kubenswrapper[4919]: E0109 13:31:51.751423 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xkhdz" podUID="7a2e9878-6b0e-4328-a3ca-9f828fb105c9" Jan 09 13:31:51 crc kubenswrapper[4919]: I0109 13:31:51.751537 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 13:31:51 crc kubenswrapper[4919]: E0109 13:31:51.751695 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 13:31:51 crc kubenswrapper[4919]: E0109 13:31:51.751780 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 13:31:51 crc kubenswrapper[4919]: I0109 13:31:51.768613 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:51 crc kubenswrapper[4919]: I0109 13:31:51.768663 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:51 crc kubenswrapper[4919]: I0109 13:31:51.768679 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:51 crc kubenswrapper[4919]: I0109 13:31:51.768703 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:51 crc kubenswrapper[4919]: I0109 13:31:51.768720 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:51Z","lastTransitionTime":"2026-01-09T13:31:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:51 crc kubenswrapper[4919]: I0109 13:31:51.872402 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:51 crc kubenswrapper[4919]: I0109 13:31:51.872483 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:51 crc kubenswrapper[4919]: I0109 13:31:51.872496 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:51 crc kubenswrapper[4919]: I0109 13:31:51.872519 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:51 crc kubenswrapper[4919]: I0109 13:31:51.872534 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:51Z","lastTransitionTime":"2026-01-09T13:31:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:51 crc kubenswrapper[4919]: I0109 13:31:51.975863 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:51 crc kubenswrapper[4919]: I0109 13:31:51.975932 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:51 crc kubenswrapper[4919]: I0109 13:31:51.975956 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:51 crc kubenswrapper[4919]: I0109 13:31:51.975984 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:51 crc kubenswrapper[4919]: I0109 13:31:51.976003 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:51Z","lastTransitionTime":"2026-01-09T13:31:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:52 crc kubenswrapper[4919]: I0109 13:31:52.079149 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:52 crc kubenswrapper[4919]: I0109 13:31:52.079193 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:52 crc kubenswrapper[4919]: I0109 13:31:52.079202 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:52 crc kubenswrapper[4919]: I0109 13:31:52.079240 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:52 crc kubenswrapper[4919]: I0109 13:31:52.079251 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:52Z","lastTransitionTime":"2026-01-09T13:31:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:52 crc kubenswrapper[4919]: I0109 13:31:52.182027 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:52 crc kubenswrapper[4919]: I0109 13:31:52.182092 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:52 crc kubenswrapper[4919]: I0109 13:31:52.182114 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:52 crc kubenswrapper[4919]: I0109 13:31:52.182139 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:52 crc kubenswrapper[4919]: I0109 13:31:52.182157 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:52Z","lastTransitionTime":"2026-01-09T13:31:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:52 crc kubenswrapper[4919]: I0109 13:31:52.285153 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:52 crc kubenswrapper[4919]: I0109 13:31:52.285318 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:52 crc kubenswrapper[4919]: I0109 13:31:52.285396 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:52 crc kubenswrapper[4919]: I0109 13:31:52.285434 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:52 crc kubenswrapper[4919]: I0109 13:31:52.285460 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:52Z","lastTransitionTime":"2026-01-09T13:31:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:52 crc kubenswrapper[4919]: I0109 13:31:52.387821 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:52 crc kubenswrapper[4919]: I0109 13:31:52.387882 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:52 crc kubenswrapper[4919]: I0109 13:31:52.387900 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:52 crc kubenswrapper[4919]: I0109 13:31:52.387926 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:52 crc kubenswrapper[4919]: I0109 13:31:52.387944 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:52Z","lastTransitionTime":"2026-01-09T13:31:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:52 crc kubenswrapper[4919]: I0109 13:31:52.491997 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:52 crc kubenswrapper[4919]: I0109 13:31:52.492053 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:52 crc kubenswrapper[4919]: I0109 13:31:52.492077 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:52 crc kubenswrapper[4919]: I0109 13:31:52.492107 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:52 crc kubenswrapper[4919]: I0109 13:31:52.492127 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:52Z","lastTransitionTime":"2026-01-09T13:31:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:52 crc kubenswrapper[4919]: I0109 13:31:52.602326 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:52 crc kubenswrapper[4919]: I0109 13:31:52.602378 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:52 crc kubenswrapper[4919]: I0109 13:31:52.602397 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:52 crc kubenswrapper[4919]: I0109 13:31:52.602426 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:52 crc kubenswrapper[4919]: I0109 13:31:52.602445 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:52Z","lastTransitionTime":"2026-01-09T13:31:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:52 crc kubenswrapper[4919]: I0109 13:31:52.705949 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:52 crc kubenswrapper[4919]: I0109 13:31:52.706018 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:52 crc kubenswrapper[4919]: I0109 13:31:52.706035 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:52 crc kubenswrapper[4919]: I0109 13:31:52.706063 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:52 crc kubenswrapper[4919]: I0109 13:31:52.706082 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:52Z","lastTransitionTime":"2026-01-09T13:31:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:52 crc kubenswrapper[4919]: I0109 13:31:52.750819 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 13:31:52 crc kubenswrapper[4919]: E0109 13:31:52.751063 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 13:31:52 crc kubenswrapper[4919]: I0109 13:31:52.809450 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:52 crc kubenswrapper[4919]: I0109 13:31:52.809520 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:52 crc kubenswrapper[4919]: I0109 13:31:52.809542 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:52 crc kubenswrapper[4919]: I0109 13:31:52.809571 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:52 crc kubenswrapper[4919]: I0109 13:31:52.809592 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:52Z","lastTransitionTime":"2026-01-09T13:31:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:52 crc kubenswrapper[4919]: I0109 13:31:52.912782 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:52 crc kubenswrapper[4919]: I0109 13:31:52.912880 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:52 crc kubenswrapper[4919]: I0109 13:31:52.912906 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:52 crc kubenswrapper[4919]: I0109 13:31:52.912945 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:52 crc kubenswrapper[4919]: I0109 13:31:52.912971 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:52Z","lastTransitionTime":"2026-01-09T13:31:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.017041 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.017112 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.017131 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.017160 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.017179 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:53Z","lastTransitionTime":"2026-01-09T13:31:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.120788 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.120868 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.120887 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.120916 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.120937 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:53Z","lastTransitionTime":"2026-01-09T13:31:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.224586 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.224658 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.224677 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.224706 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.224725 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:53Z","lastTransitionTime":"2026-01-09T13:31:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.327975 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.328071 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.328091 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.328119 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.328142 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:53Z","lastTransitionTime":"2026-01-09T13:31:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.432009 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.432122 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.432142 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.432166 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.432184 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:53Z","lastTransitionTime":"2026-01-09T13:31:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.535594 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.535676 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.535712 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.535733 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.535746 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:53Z","lastTransitionTime":"2026-01-09T13:31:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.638609 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.638680 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.638699 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.638726 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.638746 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:53Z","lastTransitionTime":"2026-01-09T13:31:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.742687 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.742747 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.742765 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.742790 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.742810 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:53Z","lastTransitionTime":"2026-01-09T13:31:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.751300 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.751341 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xkhdz" Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.751415 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 13:31:53 crc kubenswrapper[4919]: E0109 13:31:53.751605 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 13:31:53 crc kubenswrapper[4919]: E0109 13:31:53.751779 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xkhdz" podUID="7a2e9878-6b0e-4328-a3ca-9f828fb105c9" Jan 09 13:31:53 crc kubenswrapper[4919]: E0109 13:31:53.752062 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.753476 4919 scope.go:117] "RemoveContainer" containerID="af9d1f7638ecbd19ef127f9dedcf9c618013f2e6cbd661173a0eead07c7023a9" Jan 09 13:31:53 crc kubenswrapper[4919]: E0109 13:31:53.753874 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-w74hl_openshift-ovn-kubernetes(4a11a9b6-2419-4f04-b35e-ba296d70b705)\"" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" podUID="4a11a9b6-2419-4f04-b35e-ba296d70b705" Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.777540 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:53Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.793843 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9z7cc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1115c0ba-16d5-4e81-a4b4-07ba7f360825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6522bc54695ade8637b22dbf9074c82430ce4c8deb8d5d9631933a1d49d5b4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkvnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9z7cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:53Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.817797 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-97zdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21befbc8-9e98-4557-89af-a116cc8c484c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e336d165af12149ba05542f8cebe8a16c6fd66d3317d27c880b8969e7666691d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://458aeaca27fca436a56c5bb1f4180fc11cc5d27c53d33cd5b00f17e0e6f99322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458aeaca27fca436a56c5bb1f4180fc11cc5d27c53d33cd5b00f17e0e6f99322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3ea2572c59e697886d3666518ed78dcbe1bde1dd6fc6255e3846d1b34f26cdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ea2572c59e697886d3666518ed78dcbe1bde1dd6fc6255e3846d1b34f26cdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-97zdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:53Z is after 
2025-08-24T17:21:41Z" Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.842648 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d6c98ae6aa3041e47bac4782d73902dc39137b2fc8550efce479bfb9cdf37b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:53Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.846055 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.846110 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.846130 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.846159 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.846178 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:53Z","lastTransitionTime":"2026-01-09T13:31:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.866435 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a055b487-5d63-4265-ac12-735612354e73\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903fbe8270e684937c51c54aeaaaaf7cfed5e77f6d76d5874210ffcf608a9afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3759051eb43a6806b9525498f376abc060e88ba0d7bc9274e84eaabfdaefd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8db1dada305e10f10cb4fe92cfaa3fc88e2e2ec3338fa31efc240af0888f60c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b3f68aa173942f9d95a3832234e0bbd5ccacf63cd73b8d110708ebd0978a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d7857918228a0905bb39cd161f6ba21561cf49ce7980a1835e5b3cc68cef3c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T13:30:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0109 13:30:33.398307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 13:30:33.400005 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4019686595/tls.crt::/tmp/serving-cert-4019686595/tls.key\\\\\\\"\\\\nI0109 13:30:43.320927 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 13:30:43.328283 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 13:30:43.328361 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 13:30:43.328421 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 13:30:43.328435 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 13:30:43.338653 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 13:30:43.338748 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 13:30:43.338794 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 13:30:43.338801 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 13:30:43.338808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 13:30:43.339373 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 13:30:43.343716 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2d0c02e83be85aa5925b8f607809b6bbf3db3188f4e165ca42eb04fe694da6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:53Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.883648 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9797b243-6d0f-4f8b-8b3d-b92ac439e3bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d15e612b4abcc61c356602fa521bd156a5e2f5b1e89bbf48b2bceac8a06fbca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d24ffabc3436ac75e2611506f1d4d40faed59e4fa4c618523275331408bb219d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae0a71cfd94d80d04efad2c5671e1a6422ee373da4fc7ab38e36198e3fcad96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5ef78571904b7b98f20f7eb25e83def8b64fc7e86e3c376ee2c6a00334e8667\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5ef78571904b7b98f20f7eb25e83def8b64fc7e86e3c376ee2c6a00334e8667\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:53Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.902068 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c494f87fb3dc8455ed5de9b4073f1b69d9d96c3b2a4d260cb5d57b9df1e825e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:53Z is after 2025-08-24T17:21:41Z" Jan 09 
13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.920116 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6730e2d4c2375d62687a3919410fd7440dc63ec931c008941ab1882e625c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b529d70339efbd46e30e992ceb5afe2db97bdde02f50798c9eb61dfa23d7f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9m5lv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-09T13:31:53Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.939394 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgw8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dd4aa1459db1d095dd8a4d538ce3dc77e934eaaa815c7b700de8ee6ae8cc25a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3890ccf2af2b3f8c6648e7b36c716d7b9ef45d6821c5c7162c455641399184a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T13:31:32Z\\\",\\\"message\\\":\\\"2026-01-09T13:30:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6c242d58-4d9b-4293-8565-97eb1a2c9c17\\\\n2026-01-09T13:30:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6c242d58-4d9b-4293-8565-97eb1a2c9c17 to /host/opt/cni/bin/\\\\n2026-01-09T13:30:47Z [verbose] multus-daemon started\\\\n2026-01-09T13:30:47Z [verbose] Readiness Indicator file check\\\\n2026-01-09T13:31:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:31:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srz24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgw8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:53Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.949763 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.950041 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.950193 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.950435 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.950593 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:53Z","lastTransitionTime":"2026-01-09T13:31:53Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.958821 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xkhdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a2e9878-6b0e-4328-a3ca-9f828fb105c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:31:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xkhdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:53Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.974893 4919 
status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2d79db8-b1e1-43cb-b39f-aea72914778d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d03488cb3bf92b2cf5ae2daac3b83d4925c14e6bbf4789a0ed00e4caf275a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a131a5c3b7ddf092cba3a77f0ed07915fd0d2145eae04906963ab88d015f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8a131a5c3b7ddf092cba3a77f0ed07915fd0d2145eae04906963ab88d015f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:53Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:53 crc kubenswrapper[4919]: I0109 13:31:53.991990 4919 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa212ada-bfb5-4aeb-94bb-acf1f0afe319\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a58a444435270a569f4e25649cbfb81bb34700db968dcba7d85b9fdea9006bc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://210c615c5bcdaca954329d575b66f271921f54cd58ff6d75a90d5fc64bd11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dbb1933e5d51eec9dc5963fca537d46986cb88929f55683de676da7f0636133\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6694df4fa2976e9e741fe48d2f16c0b63b29ba80500e4238f
c0ff6d6dad53bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:53Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:54 crc kubenswrapper[4919]: I0109 13:31:54.013305 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:54Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:54 crc kubenswrapper[4919]: I0109 13:31:54.036436 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f66189f627ddedba6b2bcb45f145ef53485a9b0b7401ca5fa09363145fce81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c247b3e6a38c79833e097898da1fe92f3dabc7dcd7b71a618506f054f4fd9c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:54Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:54 crc kubenswrapper[4919]: I0109 13:31:54.053937 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:54 crc kubenswrapper[4919]: I0109 13:31:54.054009 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:54 crc kubenswrapper[4919]: I0109 13:31:54.054030 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:54 crc kubenswrapper[4919]: I0109 13:31:54.054061 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:54 crc kubenswrapper[4919]: I0109 13:31:54.054084 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:54Z","lastTransitionTime":"2026-01-09T13:31:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:54 crc kubenswrapper[4919]: I0109 13:31:54.071516 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a11a9b6-2419-4f04-b35e-ba296d70b705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f97e00dbff0bc7856400366616b2dfea1022f2dc151e8f1d7cbc5d377639b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eecea89afe25c537de66df075217db267c79f45aecda1f91183fad854aa80c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://ec868806819a987b7e96dbe674677a084fd3c8bbb3bdae6dd584d62df43b78f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac368c1c0799b3fe1830e01a310247ed45949ab217cf83d98b487a12e482157e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d6b21d85be97a07f83c996deca344aba3e2bd0b249d1ce02f15db105649bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15885cccea72f90cf3317dca9a65041611aa4b8de069779e15d43ad39112f1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af9d1f7638ecbd19ef127f9dedcf9c618013f2e6cbd661173a0eead07c7023a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af9d1f7638ecbd19ef127f9dedcf9c618013f2e6cbd661173a0eead07c7023a9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T13:31:42Z\\\",\\\"message\\\":\\\"ort:2379, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.253\\\\\\\", Port:9979, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0109 13:31:42.322963 7017 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:42Z is after 2025-08-24T17:21:41Z]\\\\nI0109 13:31:42.323058 7017 services_controller.go:452] Built service openshift-etcd/etcd per-node LB for network=default: []services.LB{}\\\\nI0109 13:31:42.323036 7017 model_client.go:382] Up\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:31:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-w74hl_openshift-ovn-kubernetes(4a11a9b6-2419-4f04-b35e-ba296d70b705)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1618a6491a924e7cd28340bea333585a7c7a634ca063183f21574ed23df002cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w74hl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:54Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:54 crc kubenswrapper[4919]: I0109 13:31:54.092201 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bzs4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd1e83cb-5a48-4331-b403-d7a07e8aa67f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://389ab6b02eef188a2713e27d93c70a64a1ab4ebcfe58634cf5266197b0375bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kd6n5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bzs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:54Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:54 crc kubenswrapper[4919]: I0109 13:31:54.111775 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9s49l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a361336-2125-49a9-8332-eb66286dcdb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://108cc929d3e1674b5cc9341c92e9d4f5142fc0d87212666efba8890341e8adc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-slm6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd519645b9635f304f7af4e5e832eff6ae2964b35ed15d918bae7b85b51c1de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\
\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-slm6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9s49l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:54Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:54 crc kubenswrapper[4919]: I0109 13:31:54.145678 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70a3367-0b6c-464c-84c2-5ddc03627c0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b1c517e5ba5a7c13919a030e1df61e0a4cc5d89e2b80a2464484387a713d5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://744c9ccecaab78f62335d29db2d18fe4e64b26c28dcd365985f11db160641b70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resource
s\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cae117720dbdc97e6a913c5125978e3f4ec7f01dec42baab8b5fc74e2852db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e74fc6258740a4e5407f1d22189c536019faf85e5fc1c5b698938ceda3c5659f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a70c88bf2025bf78bf359717df98bdab692e5554a2a1a4146b228d7fbf5dee42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f58523d9d4832ebc703441bba8fda6beee24e80b7e364faea23c0c4275cd9c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f58523d9d4832ebc703441bba8fda6beee24e80b7e364faea23c0c4275cd9c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Complete
d\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53b8c9deabab605617276a16ba1a63aedfe81246b0d97f575ceb0ecea929efa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53b8c9deabab605617276a16ba1a63aedfe81246b0d97f575ceb0ecea929efa7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://30cd0abf139e3111a44e517d28e6fd1b81a96a6481f8a9941361b10bc55da501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30cd0abf139e3111a44e517d28e6fd1b81a96a6481f8a9941361b10bc55da501\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:54Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:54 crc kubenswrapper[4919]: I0109 13:31:54.158553 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:54 crc kubenswrapper[4919]: I0109 13:31:54.158623 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:54 crc kubenswrapper[4919]: I0109 13:31:54.158640 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:54 crc kubenswrapper[4919]: I0109 13:31:54.158671 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:54 crc kubenswrapper[4919]: I0109 13:31:54.158690 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:54Z","lastTransitionTime":"2026-01-09T13:31:54Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:54 crc kubenswrapper[4919]: I0109 13:31:54.169621 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:54Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:54 crc kubenswrapper[4919]: I0109 13:31:54.261780 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:54 crc kubenswrapper[4919]: I0109 13:31:54.261837 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:54 crc kubenswrapper[4919]: I0109 13:31:54.261859 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:54 crc kubenswrapper[4919]: I0109 13:31:54.261889 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:54 crc kubenswrapper[4919]: I0109 13:31:54.261913 4919 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:54Z","lastTransitionTime":"2026-01-09T13:31:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:54 crc kubenswrapper[4919]: I0109 13:31:54.365568 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:54 crc kubenswrapper[4919]: I0109 13:31:54.365634 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:54 crc kubenswrapper[4919]: I0109 13:31:54.365657 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:54 crc kubenswrapper[4919]: I0109 13:31:54.365686 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:54 crc kubenswrapper[4919]: I0109 13:31:54.365707 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:54Z","lastTransitionTime":"2026-01-09T13:31:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:54 crc kubenswrapper[4919]: I0109 13:31:54.468678 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:54 crc kubenswrapper[4919]: I0109 13:31:54.468756 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:54 crc kubenswrapper[4919]: I0109 13:31:54.468778 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:54 crc kubenswrapper[4919]: I0109 13:31:54.468808 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:54 crc kubenswrapper[4919]: I0109 13:31:54.468828 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:54Z","lastTransitionTime":"2026-01-09T13:31:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:54 crc kubenswrapper[4919]: I0109 13:31:54.571795 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:54 crc kubenswrapper[4919]: I0109 13:31:54.571871 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:54 crc kubenswrapper[4919]: I0109 13:31:54.571896 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:54 crc kubenswrapper[4919]: I0109 13:31:54.571929 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:54 crc kubenswrapper[4919]: I0109 13:31:54.571956 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:54Z","lastTransitionTime":"2026-01-09T13:31:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:54 crc kubenswrapper[4919]: I0109 13:31:54.675179 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:54 crc kubenswrapper[4919]: I0109 13:31:54.675275 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:54 crc kubenswrapper[4919]: I0109 13:31:54.675289 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:54 crc kubenswrapper[4919]: I0109 13:31:54.675308 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:54 crc kubenswrapper[4919]: I0109 13:31:54.675319 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:54Z","lastTransitionTime":"2026-01-09T13:31:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:54 crc kubenswrapper[4919]: I0109 13:31:54.750949 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 13:31:54 crc kubenswrapper[4919]: E0109 13:31:54.751196 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 13:31:54 crc kubenswrapper[4919]: I0109 13:31:54.778574 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:54 crc kubenswrapper[4919]: I0109 13:31:54.778644 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:54 crc kubenswrapper[4919]: I0109 13:31:54.778662 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:54 crc kubenswrapper[4919]: I0109 13:31:54.778689 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:54 crc kubenswrapper[4919]: I0109 13:31:54.778708 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:54Z","lastTransitionTime":"2026-01-09T13:31:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:54 crc kubenswrapper[4919]: I0109 13:31:54.884124 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:54 crc kubenswrapper[4919]: I0109 13:31:54.884297 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:54 crc kubenswrapper[4919]: I0109 13:31:54.884383 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:54 crc kubenswrapper[4919]: I0109 13:31:54.884415 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:54 crc kubenswrapper[4919]: I0109 13:31:54.884434 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:54Z","lastTransitionTime":"2026-01-09T13:31:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:54 crc kubenswrapper[4919]: I0109 13:31:54.987903 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:54 crc kubenswrapper[4919]: I0109 13:31:54.987969 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:54 crc kubenswrapper[4919]: I0109 13:31:54.987985 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:54 crc kubenswrapper[4919]: I0109 13:31:54.988012 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:54 crc kubenswrapper[4919]: I0109 13:31:54.988031 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:54Z","lastTransitionTime":"2026-01-09T13:31:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:55 crc kubenswrapper[4919]: I0109 13:31:55.091416 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:55 crc kubenswrapper[4919]: I0109 13:31:55.091494 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:55 crc kubenswrapper[4919]: I0109 13:31:55.091523 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:55 crc kubenswrapper[4919]: I0109 13:31:55.091559 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:55 crc kubenswrapper[4919]: I0109 13:31:55.091584 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:55Z","lastTransitionTime":"2026-01-09T13:31:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:55 crc kubenswrapper[4919]: I0109 13:31:55.194383 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:55 crc kubenswrapper[4919]: I0109 13:31:55.194444 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:55 crc kubenswrapper[4919]: I0109 13:31:55.194463 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:55 crc kubenswrapper[4919]: I0109 13:31:55.194489 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:55 crc kubenswrapper[4919]: I0109 13:31:55.194518 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:55Z","lastTransitionTime":"2026-01-09T13:31:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:55 crc kubenswrapper[4919]: I0109 13:31:55.297427 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:55 crc kubenswrapper[4919]: I0109 13:31:55.297479 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:55 crc kubenswrapper[4919]: I0109 13:31:55.297490 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:55 crc kubenswrapper[4919]: I0109 13:31:55.297511 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:55 crc kubenswrapper[4919]: I0109 13:31:55.297523 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:55Z","lastTransitionTime":"2026-01-09T13:31:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:55 crc kubenswrapper[4919]: I0109 13:31:55.401368 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:55 crc kubenswrapper[4919]: I0109 13:31:55.401796 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:55 crc kubenswrapper[4919]: I0109 13:31:55.401953 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:55 crc kubenswrapper[4919]: I0109 13:31:55.402186 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:55 crc kubenswrapper[4919]: I0109 13:31:55.402427 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:55Z","lastTransitionTime":"2026-01-09T13:31:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:55 crc kubenswrapper[4919]: I0109 13:31:55.506321 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:55 crc kubenswrapper[4919]: I0109 13:31:55.506386 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:55 crc kubenswrapper[4919]: I0109 13:31:55.506405 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:55 crc kubenswrapper[4919]: I0109 13:31:55.506432 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:55 crc kubenswrapper[4919]: I0109 13:31:55.506454 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:55Z","lastTransitionTime":"2026-01-09T13:31:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:55 crc kubenswrapper[4919]: I0109 13:31:55.610322 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:55 crc kubenswrapper[4919]: I0109 13:31:55.610711 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:55 crc kubenswrapper[4919]: I0109 13:31:55.610800 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:55 crc kubenswrapper[4919]: I0109 13:31:55.610937 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:55 crc kubenswrapper[4919]: I0109 13:31:55.611088 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:55Z","lastTransitionTime":"2026-01-09T13:31:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:55 crc kubenswrapper[4919]: I0109 13:31:55.714540 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:55 crc kubenswrapper[4919]: I0109 13:31:55.714956 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:55 crc kubenswrapper[4919]: I0109 13:31:55.715097 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:55 crc kubenswrapper[4919]: I0109 13:31:55.715296 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:55 crc kubenswrapper[4919]: I0109 13:31:55.715488 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:55Z","lastTransitionTime":"2026-01-09T13:31:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:55 crc kubenswrapper[4919]: I0109 13:31:55.750923 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xkhdz" Jan 09 13:31:55 crc kubenswrapper[4919]: I0109 13:31:55.751015 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 13:31:55 crc kubenswrapper[4919]: E0109 13:31:55.751179 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xkhdz" podUID="7a2e9878-6b0e-4328-a3ca-9f828fb105c9" Jan 09 13:31:55 crc kubenswrapper[4919]: E0109 13:31:55.751372 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 13:31:55 crc kubenswrapper[4919]: I0109 13:31:55.752100 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 13:31:55 crc kubenswrapper[4919]: E0109 13:31:55.752448 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 13:31:55 crc kubenswrapper[4919]: I0109 13:31:55.820308 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:55 crc kubenswrapper[4919]: I0109 13:31:55.820383 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:55 crc kubenswrapper[4919]: I0109 13:31:55.820405 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:55 crc kubenswrapper[4919]: I0109 13:31:55.820437 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:55 crc kubenswrapper[4919]: I0109 13:31:55.820463 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:55Z","lastTransitionTime":"2026-01-09T13:31:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:55 crc kubenswrapper[4919]: I0109 13:31:55.924598 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:55 crc kubenswrapper[4919]: I0109 13:31:55.924673 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:55 crc kubenswrapper[4919]: I0109 13:31:55.924687 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:55 crc kubenswrapper[4919]: I0109 13:31:55.924710 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:55 crc kubenswrapper[4919]: I0109 13:31:55.924724 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:55Z","lastTransitionTime":"2026-01-09T13:31:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:56 crc kubenswrapper[4919]: I0109 13:31:56.028928 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:56 crc kubenswrapper[4919]: I0109 13:31:56.029005 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:56 crc kubenswrapper[4919]: I0109 13:31:56.029026 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:56 crc kubenswrapper[4919]: I0109 13:31:56.029054 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:56 crc kubenswrapper[4919]: I0109 13:31:56.029071 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:56Z","lastTransitionTime":"2026-01-09T13:31:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:56 crc kubenswrapper[4919]: I0109 13:31:56.131841 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:56 crc kubenswrapper[4919]: I0109 13:31:56.131890 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:56 crc kubenswrapper[4919]: I0109 13:31:56.131903 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:56 crc kubenswrapper[4919]: I0109 13:31:56.131926 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:56 crc kubenswrapper[4919]: I0109 13:31:56.131939 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:56Z","lastTransitionTime":"2026-01-09T13:31:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:56 crc kubenswrapper[4919]: I0109 13:31:56.235013 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:56 crc kubenswrapper[4919]: I0109 13:31:56.235066 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:56 crc kubenswrapper[4919]: I0109 13:31:56.235081 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:56 crc kubenswrapper[4919]: I0109 13:31:56.235102 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:56 crc kubenswrapper[4919]: I0109 13:31:56.235115 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:56Z","lastTransitionTime":"2026-01-09T13:31:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:56 crc kubenswrapper[4919]: I0109 13:31:56.338703 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:56 crc kubenswrapper[4919]: I0109 13:31:56.338768 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:56 crc kubenswrapper[4919]: I0109 13:31:56.338786 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:56 crc kubenswrapper[4919]: I0109 13:31:56.338816 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:56 crc kubenswrapper[4919]: I0109 13:31:56.338836 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:56Z","lastTransitionTime":"2026-01-09T13:31:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:56 crc kubenswrapper[4919]: I0109 13:31:56.442416 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:56 crc kubenswrapper[4919]: I0109 13:31:56.442495 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:56 crc kubenswrapper[4919]: I0109 13:31:56.442513 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:56 crc kubenswrapper[4919]: I0109 13:31:56.442539 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:56 crc kubenswrapper[4919]: I0109 13:31:56.442558 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:56Z","lastTransitionTime":"2026-01-09T13:31:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:56 crc kubenswrapper[4919]: I0109 13:31:56.545441 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:56 crc kubenswrapper[4919]: I0109 13:31:56.545500 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:56 crc kubenswrapper[4919]: I0109 13:31:56.545518 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:56 crc kubenswrapper[4919]: I0109 13:31:56.545582 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:56 crc kubenswrapper[4919]: I0109 13:31:56.545603 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:56Z","lastTransitionTime":"2026-01-09T13:31:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:56 crc kubenswrapper[4919]: I0109 13:31:56.649253 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:56 crc kubenswrapper[4919]: I0109 13:31:56.649338 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:56 crc kubenswrapper[4919]: I0109 13:31:56.649375 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:56 crc kubenswrapper[4919]: I0109 13:31:56.649408 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:56 crc kubenswrapper[4919]: I0109 13:31:56.649431 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:56Z","lastTransitionTime":"2026-01-09T13:31:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:56 crc kubenswrapper[4919]: I0109 13:31:56.750804 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 13:31:56 crc kubenswrapper[4919]: E0109 13:31:56.751380 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 13:31:56 crc kubenswrapper[4919]: I0109 13:31:56.754070 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:56 crc kubenswrapper[4919]: I0109 13:31:56.754111 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:56 crc kubenswrapper[4919]: I0109 13:31:56.754119 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:56 crc kubenswrapper[4919]: I0109 13:31:56.754135 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:56 crc kubenswrapper[4919]: I0109 13:31:56.754145 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:56Z","lastTransitionTime":"2026-01-09T13:31:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:56 crc kubenswrapper[4919]: I0109 13:31:56.857690 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:56 crc kubenswrapper[4919]: I0109 13:31:56.857815 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:56 crc kubenswrapper[4919]: I0109 13:31:56.857828 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:56 crc kubenswrapper[4919]: I0109 13:31:56.857849 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:56 crc kubenswrapper[4919]: I0109 13:31:56.857862 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:56Z","lastTransitionTime":"2026-01-09T13:31:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:56 crc kubenswrapper[4919]: I0109 13:31:56.961095 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:56 crc kubenswrapper[4919]: I0109 13:31:56.961179 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:56 crc kubenswrapper[4919]: I0109 13:31:56.961194 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:56 crc kubenswrapper[4919]: I0109 13:31:56.961241 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:56 crc kubenswrapper[4919]: I0109 13:31:56.961258 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:56Z","lastTransitionTime":"2026-01-09T13:31:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:57 crc kubenswrapper[4919]: I0109 13:31:57.064708 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:57 crc kubenswrapper[4919]: I0109 13:31:57.064796 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:57 crc kubenswrapper[4919]: I0109 13:31:57.064816 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:57 crc kubenswrapper[4919]: I0109 13:31:57.064849 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:57 crc kubenswrapper[4919]: I0109 13:31:57.064877 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:57Z","lastTransitionTime":"2026-01-09T13:31:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:57 crc kubenswrapper[4919]: I0109 13:31:57.168859 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:57 crc kubenswrapper[4919]: I0109 13:31:57.168947 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:57 crc kubenswrapper[4919]: I0109 13:31:57.168973 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:57 crc kubenswrapper[4919]: I0109 13:31:57.169007 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:57 crc kubenswrapper[4919]: I0109 13:31:57.169031 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:57Z","lastTransitionTime":"2026-01-09T13:31:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:57 crc kubenswrapper[4919]: I0109 13:31:57.272543 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:57 crc kubenswrapper[4919]: I0109 13:31:57.272674 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:57 crc kubenswrapper[4919]: I0109 13:31:57.272717 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:57 crc kubenswrapper[4919]: I0109 13:31:57.272755 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:57 crc kubenswrapper[4919]: I0109 13:31:57.272781 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:57Z","lastTransitionTime":"2026-01-09T13:31:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:57 crc kubenswrapper[4919]: I0109 13:31:57.376587 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:57 crc kubenswrapper[4919]: I0109 13:31:57.376650 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:57 crc kubenswrapper[4919]: I0109 13:31:57.376671 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:57 crc kubenswrapper[4919]: I0109 13:31:57.376700 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:57 crc kubenswrapper[4919]: I0109 13:31:57.376722 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:57Z","lastTransitionTime":"2026-01-09T13:31:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:57 crc kubenswrapper[4919]: I0109 13:31:57.479839 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:57 crc kubenswrapper[4919]: I0109 13:31:57.479953 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:57 crc kubenswrapper[4919]: I0109 13:31:57.479974 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:57 crc kubenswrapper[4919]: I0109 13:31:57.480043 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:57 crc kubenswrapper[4919]: I0109 13:31:57.480063 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:57Z","lastTransitionTime":"2026-01-09T13:31:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:57 crc kubenswrapper[4919]: I0109 13:31:57.584164 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:57 crc kubenswrapper[4919]: I0109 13:31:57.584301 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:57 crc kubenswrapper[4919]: I0109 13:31:57.584325 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:57 crc kubenswrapper[4919]: I0109 13:31:57.584356 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:57 crc kubenswrapper[4919]: I0109 13:31:57.584376 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:57Z","lastTransitionTime":"2026-01-09T13:31:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:57 crc kubenswrapper[4919]: I0109 13:31:57.688411 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:57 crc kubenswrapper[4919]: I0109 13:31:57.688515 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:57 crc kubenswrapper[4919]: I0109 13:31:57.688535 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:57 crc kubenswrapper[4919]: I0109 13:31:57.688566 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:57 crc kubenswrapper[4919]: I0109 13:31:57.688585 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:57Z","lastTransitionTime":"2026-01-09T13:31:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:57 crc kubenswrapper[4919]: I0109 13:31:57.751381 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xkhdz" Jan 09 13:31:57 crc kubenswrapper[4919]: I0109 13:31:57.751437 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 13:31:57 crc kubenswrapper[4919]: I0109 13:31:57.751444 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 13:31:57 crc kubenswrapper[4919]: E0109 13:31:57.751646 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xkhdz" podUID="7a2e9878-6b0e-4328-a3ca-9f828fb105c9" Jan 09 13:31:57 crc kubenswrapper[4919]: E0109 13:31:57.751921 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 13:31:57 crc kubenswrapper[4919]: E0109 13:31:57.751950 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 13:31:57 crc kubenswrapper[4919]: I0109 13:31:57.791954 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:57 crc kubenswrapper[4919]: I0109 13:31:57.792016 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:57 crc kubenswrapper[4919]: I0109 13:31:57.792039 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:57 crc kubenswrapper[4919]: I0109 13:31:57.792072 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:57 crc kubenswrapper[4919]: I0109 13:31:57.792097 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:57Z","lastTransitionTime":"2026-01-09T13:31:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:57 crc kubenswrapper[4919]: I0109 13:31:57.896034 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:57 crc kubenswrapper[4919]: I0109 13:31:57.896115 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:57 crc kubenswrapper[4919]: I0109 13:31:57.896138 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:57 crc kubenswrapper[4919]: I0109 13:31:57.896171 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:57 crc kubenswrapper[4919]: I0109 13:31:57.896192 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:57Z","lastTransitionTime":"2026-01-09T13:31:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:57.999945 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.000006 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.000027 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.000054 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.000073 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:58Z","lastTransitionTime":"2026-01-09T13:31:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.102707 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.102781 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.102810 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.102841 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.102861 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:58Z","lastTransitionTime":"2026-01-09T13:31:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.205971 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.206043 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.206064 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.206096 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.206119 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:58Z","lastTransitionTime":"2026-01-09T13:31:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.309390 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.309428 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.309437 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.309452 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.309461 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:58Z","lastTransitionTime":"2026-01-09T13:31:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.412184 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.412250 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.412260 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.412273 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.412282 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:58Z","lastTransitionTime":"2026-01-09T13:31:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.515633 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.515692 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.515704 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.515723 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.515734 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:58Z","lastTransitionTime":"2026-01-09T13:31:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.619789 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.619835 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.619846 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.619861 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.619872 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:58Z","lastTransitionTime":"2026-01-09T13:31:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.723023 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.723092 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.723112 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.723142 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.723160 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:58Z","lastTransitionTime":"2026-01-09T13:31:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.751378 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 13:31:58 crc kubenswrapper[4919]: E0109 13:31:58.751502 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.777994 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.778033 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.778050 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.778067 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.778079 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:58Z","lastTransitionTime":"2026-01-09T13:31:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:58 crc kubenswrapper[4919]: E0109 13:31:58.797084 4919 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a043b745-924a-464c-80aa-f4df877f55bf\\\",\\\"systemUUID\\\":\\\"4cea77be-9aeb-4181-a0b4-b60e5a362fd9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:58Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.802821 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.802892 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.802916 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.802949 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.802975 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:58Z","lastTransitionTime":"2026-01-09T13:31:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:58 crc kubenswrapper[4919]: E0109 13:31:58.824143 4919 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a043b745-924a-464c-80aa-f4df877f55bf\\\",\\\"systemUUID\\\":\\\"4cea77be-9aeb-4181-a0b4-b60e5a362fd9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:58Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.830411 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.830545 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.830570 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.830603 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.830625 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:58Z","lastTransitionTime":"2026-01-09T13:31:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:58 crc kubenswrapper[4919]: E0109 13:31:58.851883 4919 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a043b745-924a-464c-80aa-f4df877f55bf\\\",\\\"systemUUID\\\":\\\"4cea77be-9aeb-4181-a0b4-b60e5a362fd9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:58Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.857782 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.857860 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.857888 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.857922 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.857945 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:58Z","lastTransitionTime":"2026-01-09T13:31:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:58 crc kubenswrapper[4919]: E0109 13:31:58.878573 4919 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a043b745-924a-464c-80aa-f4df877f55bf\\\",\\\"systemUUID\\\":\\\"4cea77be-9aeb-4181-a0b4-b60e5a362fd9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:58Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.883886 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.883931 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
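Every "Error updating node status, will retry" entry above fails the same way: the kubelet's status PATCH for node "crc" is intercepted by the admission webhook "node.network-node-identity.openshift.io", and the TLS handshake to its endpoint at https://127.0.0.1:9743 is rejected because the serving certificate's notAfter (2025-08-24T17:21:41Z) lies more than four months before the node's clock (2026-01-09T13:31:58Z). The patch payload itself is irrelevant; nothing is admitted until the certificate is rotated or the clock discrepancy is resolved. A minimal Go sketch for confirming the expiry from the node, assuming only the endpoint address taken from the log (the skip-verify dial is deliberate, because verification is exactly the step that fails):

```go
// check_webhook_cert.go - illustrative diagnostic, not part of any cluster tooling.
// Dials the webhook endpoint from the log and prints the serving certificate's
// validity window so the "certificate has expired" failure can be confirmed.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// InsecureSkipVerify is intentional: we want to inspect the expired
	// certificate, not validate it.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("dial webhook endpoint: %v", err)
	}
	defer conn.Close()

	certs := conn.ConnectionState().PeerCertificates
	if len(certs) == 0 {
		log.Fatal("no peer certificates presented")
	}
	leaf, now := certs[0], time.Now().UTC()
	fmt.Printf("subject:   %s\n", leaf.Subject)
	fmt.Printf("notBefore: %s\n", leaf.NotBefore.Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", leaf.NotAfter.Format(time.RFC3339))
	fmt.Printf("expired:   %v (now %s)\n", now.After(leaf.NotAfter), now.Format(time.RFC3339))
}
```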
event="NodeHasNoDiskPressure" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.883945 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.883969 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.883985 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:58Z","lastTransitionTime":"2026-01-09T13:31:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:58 crc kubenswrapper[4919]: E0109 13:31:58.904246 4919 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:31:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a043b745-924a-464c-80aa-f4df877f55bf\\\",\\\"systemUUID\\\":\\\"4cea77be-9aeb-4181-a0b4-b60e5a362fd9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:58Z is after 2025-08-24T17:21:41Z" Jan 09 13:31:58 crc kubenswrapper[4919]: E0109 13:31:58.904514 4919 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.906751 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
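The "will retry" entries culminating in "Unable to update node status ... exceeds retry count" reflect the kubelet's bounded-retry pattern: within one status sync it re-attempts the PATCH a small, fixed number of times, then gives up until the next sync interval instead of backing off forever. Because the failure is deterministic, an expired certificate rather than a transient blip, each retry fails identically and the loop can only end in the give-up line above. A sketch of that control flow; the budget of five mirrors the small constant the upstream kubelet has used, but treat both the constant and the error strings here as illustrative:

```go
// node_status_retry.go - illustrative sketch of bounded status-update retries.
package main

import (
	"errors"
	"fmt"
)

// Assumed retry budget; the upstream kubelet uses a similar small constant
// (nodeStatusUpdateRetry) rather than exponential backoff within a sync.
const nodeStatusUpdateRetry = 5

func updateNodeStatus(patch func() error) error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := patch(); err != nil {
			fmt.Printf("Error updating node status, will retry: %v\n", err)
			continue
		}
		return nil
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	// Stand-in for the failing PATCH: the webhook rejects every attempt
	// identically, so retrying within the sync cannot succeed.
	webhookDown := errors.New(`failed calling webhook "node.network-node-identity.openshift.io": certificate has expired`)
	if err := updateNodeStatus(func() error { return webhookDown }); err != nil {
		fmt.Println("Unable to update node status:", err)
	}
}
```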
event="NodeHasSufficientMemory" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.906825 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.906838 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.906860 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:58 crc kubenswrapper[4919]: I0109 13:31:58.906874 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:58Z","lastTransitionTime":"2026-01-09T13:31:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:59 crc kubenswrapper[4919]: I0109 13:31:59.011055 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:59 crc kubenswrapper[4919]: I0109 13:31:59.011144 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:59 crc kubenswrapper[4919]: I0109 13:31:59.011164 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:59 crc kubenswrapper[4919]: I0109 13:31:59.011380 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:59 crc kubenswrapper[4919]: I0109 13:31:59.011410 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:59Z","lastTransitionTime":"2026-01-09T13:31:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:59 crc kubenswrapper[4919]: I0109 13:31:59.115054 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:59 crc kubenswrapper[4919]: I0109 13:31:59.115129 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:59 crc kubenswrapper[4919]: I0109 13:31:59.115146 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:59 crc kubenswrapper[4919]: I0109 13:31:59.115176 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:59 crc kubenswrapper[4919]: I0109 13:31:59.115197 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:59Z","lastTransitionTime":"2026-01-09T13:31:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:59 crc kubenswrapper[4919]: I0109 13:31:59.218965 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:59 crc kubenswrapper[4919]: I0109 13:31:59.219041 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:59 crc kubenswrapper[4919]: I0109 13:31:59.219062 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:59 crc kubenswrapper[4919]: I0109 13:31:59.219096 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:59 crc kubenswrapper[4919]: I0109 13:31:59.219122 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:59Z","lastTransitionTime":"2026-01-09T13:31:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:59 crc kubenswrapper[4919]: I0109 13:31:59.322423 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:59 crc kubenswrapper[4919]: I0109 13:31:59.322505 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:59 crc kubenswrapper[4919]: I0109 13:31:59.322530 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:59 crc kubenswrapper[4919]: I0109 13:31:59.322566 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:59 crc kubenswrapper[4919]: I0109 13:31:59.322591 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:59Z","lastTransitionTime":"2026-01-09T13:31:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:59 crc kubenswrapper[4919]: I0109 13:31:59.425521 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:59 crc kubenswrapper[4919]: I0109 13:31:59.425600 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:59 crc kubenswrapper[4919]: I0109 13:31:59.425619 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:59 crc kubenswrapper[4919]: I0109 13:31:59.425648 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:59 crc kubenswrapper[4919]: I0109 13:31:59.425667 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:59Z","lastTransitionTime":"2026-01-09T13:31:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:59 crc kubenswrapper[4919]: I0109 13:31:59.528695 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:59 crc kubenswrapper[4919]: I0109 13:31:59.529016 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:59 crc kubenswrapper[4919]: I0109 13:31:59.529107 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:59 crc kubenswrapper[4919]: I0109 13:31:59.529137 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:59 crc kubenswrapper[4919]: I0109 13:31:59.529863 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:59Z","lastTransitionTime":"2026-01-09T13:31:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:59 crc kubenswrapper[4919]: I0109 13:31:59.634988 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:59 crc kubenswrapper[4919]: I0109 13:31:59.635108 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:59 crc kubenswrapper[4919]: I0109 13:31:59.635128 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:59 crc kubenswrapper[4919]: I0109 13:31:59.635160 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:59 crc kubenswrapper[4919]: I0109 13:31:59.635182 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:59Z","lastTransitionTime":"2026-01-09T13:31:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:31:59 crc kubenswrapper[4919]: I0109 13:31:59.739165 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:59 crc kubenswrapper[4919]: I0109 13:31:59.739314 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:59 crc kubenswrapper[4919]: I0109 13:31:59.739345 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:59 crc kubenswrapper[4919]: I0109 13:31:59.739380 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:59 crc kubenswrapper[4919]: I0109 13:31:59.739404 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:59Z","lastTransitionTime":"2026-01-09T13:31:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
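Independently of the webhook failure, the Ready condition itself stays False because the container runtime reports NetworkReady=false: no CNI network definition exists yet under /etc/kubernetes/cni/net.d/, and it will keep doing so until the cluster's network plugin writes one there. The entries that follow show the practical consequence: pods that need pod networking (networking-console-plugin, network-metrics-daemon, network-check-source) cannot get a sandbox and are skipped with "network is not ready", while host-network pods are unaffected. A quick Go check mirroring that test, assuming the conf dir from the log and the standard libcni extensions (.conf, .conflist, .json):

```go
// cni_config_check.go - illustrative: report whether any CNI network config
// exists in the directory the kubelet is complaining about.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // path taken from the log above
	var configs []string
	// Standard libcni config extensions; assumed here, not read from the log.
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, _ := filepath.Glob(filepath.Join(confDir, pat))
		configs = append(configs, matches...)
	}
	if len(configs) == 0 {
		fmt.Fprintf(os.Stderr, "NetworkReady=false: no CNI configuration file in %s\n", confDir)
		os.Exit(1)
	}
	fmt.Println("CNI configurations found:", configs)
}
```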
Jan 09 13:31:59 crc kubenswrapper[4919]: I0109 13:31:59.751427 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 13:31:59 crc kubenswrapper[4919]: I0109 13:31:59.751477 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xkhdz" Jan 09 13:31:59 crc kubenswrapper[4919]: I0109 13:31:59.751527 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 13:31:59 crc kubenswrapper[4919]: E0109 13:31:59.751963 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 13:31:59 crc kubenswrapper[4919]: E0109 13:31:59.752175 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xkhdz" podUID="7a2e9878-6b0e-4328-a3ca-9f828fb105c9" Jan 09 13:31:59 crc kubenswrapper[4919]: E0109 13:31:59.752401 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 13:31:59 crc kubenswrapper[4919]: I0109 13:31:59.842825 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:59 crc kubenswrapper[4919]: I0109 13:31:59.842891 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:59 crc kubenswrapper[4919]: I0109 13:31:59.842910 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:59 crc kubenswrapper[4919]: I0109 13:31:59.842939 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:59 crc kubenswrapper[4919]: I0109 13:31:59.842960 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:59Z","lastTransitionTime":"2026-01-09T13:31:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:31:59 crc kubenswrapper[4919]: I0109 13:31:59.946779 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:31:59 crc kubenswrapper[4919]: I0109 13:31:59.946876 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:31:59 crc kubenswrapper[4919]: I0109 13:31:59.946900 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:31:59 crc kubenswrapper[4919]: I0109 13:31:59.946934 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:31:59 crc kubenswrapper[4919]: I0109 13:31:59.946960 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:31:59Z","lastTransitionTime":"2026-01-09T13:31:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.050499 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.050569 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.050583 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.050606 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.050622 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:00Z","lastTransitionTime":"2026-01-09T13:32:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.153738 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.153795 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.153804 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.153820 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.153855 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:00Z","lastTransitionTime":"2026-01-09T13:32:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.256434 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.256492 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.256510 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.256532 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.256550 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:00Z","lastTransitionTime":"2026-01-09T13:32:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.358718 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.358794 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.358819 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.358848 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.358868 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:00Z","lastTransitionTime":"2026-01-09T13:32:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.462144 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.462265 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.462298 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.462333 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.462360 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:00Z","lastTransitionTime":"2026-01-09T13:32:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.565012 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.565078 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.565095 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.565117 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.565131 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:00Z","lastTransitionTime":"2026-01-09T13:32:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.669095 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.669163 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.669180 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.669205 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.669254 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:00Z","lastTransitionTime":"2026-01-09T13:32:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.751795 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 13:32:00 crc kubenswrapper[4919]: E0109 13:32:00.752023 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.764990 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xkhdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a2e9878-6b0e-4328-a3ca-9f828fb105c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:31:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xkhdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:32:00Z is after 2025-08-24T17:21:41Z" Jan 09 13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.774071 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 
13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.774110 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.774128 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.774149 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.774166 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:00Z","lastTransitionTime":"2026-01-09T13:32:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.778798 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2d79db8-b1e1-43cb-b39f-aea72914778d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d03488cb3bf92b2cf5ae2daac3b83d4925c14e6bbf4789a0ed00e4caf275a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a131a5c3b7ddf092cba3a77f0ed07915fd0d2145eae04906963ab88d015f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\
":\\\"cri-o://e8a131a5c3b7ddf092cba3a77f0ed07915fd0d2145eae04906963ab88d015f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:32:00Z is after 2025-08-24T17:21:41Z" Jan 09 13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.797554 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a055b487-5d63-4265-ac12-735612354e73\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903fbe8270e684937c51c54aeaaaaf7cfed5e77f6d76d5874210ffcf608a9afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3759051eb43a6806b9525498f376abc060e88ba0d7bc9274e84eaabfdaefd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/et
c/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8db1dada305e10f10cb4fe92cfaa3fc88e2e2ec3338fa31efc240af0888f60c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b3f68aa173942f9d95a3832234e0bbd5ccacf63cd73b8d110708ebd0978a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d7857918228a0905bb39cd161f6ba21561cf49ce7980a1835e5b3cc68cef3c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T13:30:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0109 13:30:33.398307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 13:30:33.400005 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4019686595/tls.crt::/tmp/serving-cert-4019686595/tls.key\\\\\\\"\\\\nI0109 13:30:43.320927 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 13:30:43.328283 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 13:30:43.328361 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 13:30:43.328421 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 13:30:43.328435 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 13:30:43.338653 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 13:30:43.338748 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 13:30:43.338794 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 13:30:43.338801 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 13:30:43.338808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 13:30:43.339373 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints 
registered and discovery information is complete\\\\nF0109 13:30:43.343716 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2d0c02e83be85aa5925b8f607809b6bbf3db3188f4e165ca42eb04fe694da6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:32:00Z is after 2025-08-24T17:21:41Z" Jan 09 13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.810447 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9797b243-6d0f-4f8b-8b3d-b92ac439e3bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d15e612b4abcc61c356602fa521bd156a5e2f5b1e89bbf48b2bceac8a06fbca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d24ffabc3436ac75e2611506f1d4d40faed59e4fa4c618523275331408bb219d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae0a71cfd94d80d04efad2c5671e1a6422ee373da4fc7ab38e36198e3fcad96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5ef78571904b7b98f20f7eb25e83def8b64fc7e86e3c376ee2c6a00334e8667\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5ef78571904b7b98f20f7eb25e83def8b64fc7e86e3c376ee2c6a00334e8667\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:32:00Z is after 2025-08-24T17:21:41Z" Jan 09 13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.822109 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c494f87fb3dc8455ed5de9b4073f1b69d9d96c3b2a4d260cb5d57b9df1e825e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:32:00Z is after 2025-08-24T17:21:41Z" Jan 09 
13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.835470 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6730e2d4c2375d62687a3919410fd7440dc63ec931c008941ab1882e625c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b529d70339efbd46e30e992ceb5afe2db97bdde02f50798c9eb61dfa23d7f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9m5lv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-09T13:32:00Z is after 2025-08-24T17:21:41Z" Jan 09 13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.854408 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgw8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dd4aa1459db1d095dd8a4d538ce3dc77e934eaaa815c7b700de8ee6ae8cc25a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3890ccf2af2b3f8c6648e7b36c716d7b9ef45d6821c5c7162c455641399184a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T13:31:32Z\\\",\\\"message\\\":\\\"2026-01-09T13:30:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6c242d58-4d9b-4293-8565-97eb1a2c9c17\\\\n2026-01-09T13:30:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6c242d58-4d9b-4293-8565-97eb1a2c9c17 to /host/opt/cni/bin/\\\\n2026-01-09T13:30:47Z [verbose] multus-daemon started\\\\n2026-01-09T13:30:47Z [verbose] Readiness Indicator file check\\\\n2026-01-09T13:31:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:31:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srz24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgw8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:32:00Z is after 2025-08-24T17:21:41Z" Jan 09 13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.870923 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9s49l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a361336-2125-49a9-8332-eb66286dcdb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://108cc929d3e1674b5cc9341c92e9d4f5142fc0d87212666efba8890341e8adc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-slm6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd519645b9635f304f7af4e5e832eff6ae2964b35ed15d918bae7b85b51c1de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-slm6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9s49l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:32:00Z is after 2025-08-24T17:21:41Z" Jan 09 
13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.876603 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.876665 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.876680 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.876698 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.876712 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:00Z","lastTransitionTime":"2026-01-09T13:32:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.905862 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70a3367-0b6c-464c-84c2-5ddc03627c0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b1c517e5ba5a7c13919a030e1df61e0a4cc5d89e2b80a2464484387a713d5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://744c9ccecaab78f62335d29db2d18fe4e64b26c28dcd365985f11db160641b70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9009
2272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cae117720dbdc97e6a913c5125978e3f4ec7f01dec42baab8b5fc74e2852db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e74fc6258740a4e5407f1d22189c536019faf85e5fc1c5b698938ceda3c5659f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a70c88bf2025bf78bf359717df98bdab692e5554a2a1a4146b228d7fbf5dee42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f58523d9d4832ebc703441bba8fda6beee24e80b7e364faea23c0c4275cd9c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f58523d9d4832ebc703441bba8fda6beee24e80b7e364faea23c0c4275cd9c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53b8c9deabab605617276a16ba1a63aedfe81246b0d97f575ceb0ecea929efa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53b8c9deabab605617276a16ba1a63aedfe81246b0d97f575ceb0ecea929efa7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://30cd0abf139e3111a44e517d28e6fd1b81a96a6481f8a9941361b10bc55da501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30cd0abf139e3111a44e517d28e6fd1b81a96a6481f8a9941361b10bc55da501\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:32:00Z is after 2025-08-24T17:21:41Z" Jan 09 13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.927053 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa212ada-bfb5-4aeb-94bb-acf1f0afe319\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a58a444435270a569f4e25649cbfb81bb34700db968dcba7d85b9fdea9006bc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://210c615c5bcdaca954329d575b66f271921f54cd58ff6d75a90d5fc64bd11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dbb1933e5d51eec9dc5963fca537d46986cb88929f55683de676da7f0636133\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6694df4fa2976e9e741fe48d2f16c0b63b29ba80500e4238fc0ff6d6dad53bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:32:00Z is after 2025-08-24T17:21:41Z" Jan 09 13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.946587 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:32:00Z is after 2025-08-24T17:21:41Z" Jan 09 13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.965155 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f66189f627ddedba6b2bcb45f145ef53485a9b0b7401ca5fa09363145fce81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c247b3e6a38c79833e097898da1fe92f3dabc7dcd7b71a618506f054f4fd9c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:32:00Z is after 2025-08-24T17:21:41Z" Jan 09 13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.978390 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.978420 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.978431 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.978449 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.978460 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:00Z","lastTransitionTime":"2026-01-09T13:32:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:32:00 crc kubenswrapper[4919]: I0109 13:32:00.993964 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a11a9b6-2419-4f04-b35e-ba296d70b705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f97e00dbff0bc7856400366616b2dfea1022f2dc151e8f1d7cbc5d377639b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eecea89afe25c537de66df075217db267c79f45aecda1f91183fad854aa80c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://ec868806819a987b7e96dbe674677a084fd3c8bbb3bdae6dd584d62df43b78f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac368c1c0799b3fe1830e01a310247ed45949ab217cf83d98b487a12e482157e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d6b21d85be97a07f83c996deca344aba3e2bd0b249d1ce02f15db105649bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15885cccea72f90cf3317dca9a65041611aa4b8de069779e15d43ad39112f1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af9d1f7638ecbd19ef127f9dedcf9c618013f2e6cbd661173a0eead07c7023a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af9d1f7638ecbd19ef127f9dedcf9c618013f2e6cbd661173a0eead07c7023a9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T13:31:42Z\\\",\\\"message\\\":\\\"ort:2379, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.253\\\\\\\", Port:9979, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0109 13:31:42.322963 7017 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:42Z is after 2025-08-24T17:21:41Z]\\\\nI0109 13:31:42.323058 7017 services_controller.go:452] Built service openshift-etcd/etcd per-node LB for network=default: []services.LB{}\\\\nI0109 13:31:42.323036 7017 model_client.go:382] Up\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:31:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-w74hl_openshift-ovn-kubernetes(4a11a9b6-2419-4f04-b35e-ba296d70b705)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1618a6491a924e7cd28340bea333585a7c7a634ca063183f21574ed23df002cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w74hl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:32:00Z is after 2025-08-24T17:21:41Z" Jan 09 13:32:01 crc kubenswrapper[4919]: I0109 13:32:01.006632 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bzs4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd1e83cb-5a48-4331-b403-d7a07e8aa67f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://389ab6b02eef188a2713e27d93c70a64a1ab4ebcfe58634cf5266197b0375bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kd6n5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bzs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:32:01Z is after 2025-08-24T17:21:41Z" Jan 09 13:32:01 crc kubenswrapper[4919]: I0109 13:32:01.021761 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:32:01Z is after 2025-08-24T17:21:41Z" Jan 09 13:32:01 crc kubenswrapper[4919]: I0109 13:32:01.037384 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d6c98ae6aa3041e47bac4782d73902dc39137b2fc8550efce479bfb9cdf37b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:32:01Z is after 2025-08-24T17:21:41Z" Jan 09 13:32:01 crc kubenswrapper[4919]: I0109 13:32:01.061089 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:32:01Z is after 2025-08-24T17:21:41Z" Jan 09 13:32:01 crc kubenswrapper[4919]: I0109 13:32:01.079917 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9z7cc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1115c0ba-16d5-4e81-a4b4-07ba7f360825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6522bc54695ade8637b22dbf9074c82430ce4c8deb8d5d9631933a1d49d5b4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkvnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9z7cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:32:01Z is after 2025-08-24T17:21:41Z" Jan 09 13:32:01 crc kubenswrapper[4919]: I0109 13:32:01.084153 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:01 crc kubenswrapper[4919]: I0109 13:32:01.084308 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:01 crc kubenswrapper[4919]: I0109 13:32:01.084382 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:01 crc kubenswrapper[4919]: I0109 13:32:01.084445 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:01 crc kubenswrapper[4919]: I0109 13:32:01.084508 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:01Z","lastTransitionTime":"2026-01-09T13:32:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:01 crc kubenswrapper[4919]: I0109 13:32:01.099625 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-97zdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21befbc8-9e98-4557-89af-a116cc8c484c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e336d165af12149ba05542f8cebe8a16c6fd66d3317d27c880b8969e7666691d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://660f6644
ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://458aeaca27fca436a56c5bb1f4180fc11cc5d27c53d33cd5b00f17e0e6f99322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458aeaca27fca436a56c5bb1f4180fc11cc5d27c53d33cd5b00f17e0e6f99322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3ea2572c59e697886d3666518ed78dcbe1bde1dd6fc6255e3846d1b34f26cdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ea2572c59e697886d3666518ed78dcbe1bde1dd6fc6255e3846d1b34f26cdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-97zdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:32:01Z is after 2025-08-24T17:21:41Z" Jan 09 13:32:01 crc kubenswrapper[4919]: I0109 13:32:01.186909 4919 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:01 crc kubenswrapper[4919]: I0109 13:32:01.187007 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:01 crc kubenswrapper[4919]: I0109 13:32:01.187029 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:01 crc kubenswrapper[4919]: I0109 13:32:01.187063 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:01 crc kubenswrapper[4919]: I0109 13:32:01.187083 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:01Z","lastTransitionTime":"2026-01-09T13:32:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:01 crc kubenswrapper[4919]: I0109 13:32:01.290269 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:01 crc kubenswrapper[4919]: I0109 13:32:01.290311 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:01 crc kubenswrapper[4919]: I0109 13:32:01.290338 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:01 crc kubenswrapper[4919]: I0109 13:32:01.290358 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:01 crc kubenswrapper[4919]: I0109 13:32:01.290374 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:01Z","lastTransitionTime":"2026-01-09T13:32:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:01 crc kubenswrapper[4919]: I0109 13:32:01.393189 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:01 crc kubenswrapper[4919]: I0109 13:32:01.393574 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:01 crc kubenswrapper[4919]: I0109 13:32:01.393732 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:01 crc kubenswrapper[4919]: I0109 13:32:01.393873 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:01 crc kubenswrapper[4919]: I0109 13:32:01.394031 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:01Z","lastTransitionTime":"2026-01-09T13:32:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:32:01 crc kubenswrapper[4919]: I0109 13:32:01.497034 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:01 crc kubenswrapper[4919]: I0109 13:32:01.497090 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:01 crc kubenswrapper[4919]: I0109 13:32:01.497112 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:01 crc kubenswrapper[4919]: I0109 13:32:01.497143 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:01 crc kubenswrapper[4919]: I0109 13:32:01.497164 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:01Z","lastTransitionTime":"2026-01-09T13:32:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:01 crc kubenswrapper[4919]: I0109 13:32:01.600036 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:01 crc kubenswrapper[4919]: I0109 13:32:01.600099 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:01 crc kubenswrapper[4919]: I0109 13:32:01.600116 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:01 crc kubenswrapper[4919]: I0109 13:32:01.600141 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:01 crc kubenswrapper[4919]: I0109 13:32:01.600157 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:01Z","lastTransitionTime":"2026-01-09T13:32:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:01 crc kubenswrapper[4919]: I0109 13:32:01.703178 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:01 crc kubenswrapper[4919]: I0109 13:32:01.703250 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:01 crc kubenswrapper[4919]: I0109 13:32:01.703260 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:01 crc kubenswrapper[4919]: I0109 13:32:01.703278 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:01 crc kubenswrapper[4919]: I0109 13:32:01.703287 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:01Z","lastTransitionTime":"2026-01-09T13:32:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 09 13:32:01 crc kubenswrapper[4919]: I0109 13:32:01.751710 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xkhdz"
Jan 09 13:32:01 crc kubenswrapper[4919]: I0109 13:32:01.751793 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 09 13:32:01 crc kubenswrapper[4919]: E0109 13:32:01.751930 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xkhdz" podUID="7a2e9878-6b0e-4328-a3ca-9f828fb105c9"
Jan 09 13:32:01 crc kubenswrapper[4919]: E0109 13:32:01.752068 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 09 13:32:01 crc kubenswrapper[4919]: I0109 13:32:01.752531 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 09 13:32:01 crc kubenswrapper[4919]: E0109 13:32:01.752753 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
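While the network is not ready, the kubelet refuses to create new pod sandboxes, so these pods stay Pending and are re-queued on each sync (the same three entries recur every two seconds below). A minimal client-go sketch for listing the pods stuck this way; the kubeconfig path is a hypothetical placeholder:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// Sketch: list pods still Pending (e.g. waiting on a sandbox) across all
// namespaces. Assumes a reachable cluster and a kubeconfig at the given path.
func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	pods, err := clientset.CoreV1().Pods("").List(context.TODO(),
		metav1.ListOptions{FieldSelector: "status.phase=Pending"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s is Pending\n", p.Namespace, p.Name)
	}
}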
Jan 09 13:32:02 crc kubenswrapper[4919]: I0109 13:32:02.751883 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 09 13:32:02 crc kubenswrapper[4919]: E0109 13:32:02.752095 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 09 13:32:04 crc kubenswrapper[4919]: I0109 13:32:04.057980 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7a2e9878-6b0e-4328-a3ca-9f828fb105c9-metrics-certs\") pod \"network-metrics-daemon-xkhdz\" (UID: \"7a2e9878-6b0e-4328-a3ca-9f828fb105c9\") " pod="openshift-multus/network-metrics-daemon-xkhdz"
Jan 09 13:32:04 crc kubenswrapper[4919]: E0109 13:32:04.058185 4919 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 09 13:32:04 crc kubenswrapper[4919]: E0109 13:32:04.058310 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7a2e9878-6b0e-4328-a3ca-9f828fb105c9-metrics-certs podName:7a2e9878-6b0e-4328-a3ca-9f828fb105c9 nodeName:}" failed. No retries permitted until 2026-01-09 13:33:08.058286327 +0000 UTC m=+167.606125817 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7a2e9878-6b0e-4328-a3ca-9f828fb105c9-metrics-certs") pod "network-metrics-daemon-xkhdz" (UID: "7a2e9878-6b0e-4328-a3ca-9f828fb105c9") : object "openshift-multus"/"metrics-daemon-secret" not registered
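The 1m4s durationBeforeRetry is consistent with a per-volume backoff that doubles after each failure: assuming a 1s base and a cap of a couple of minutes (assumptions for illustration, not values read from the kubelet), the seventh consecutive failure waits 2^6 = 64s. A sketch of that arithmetic:

package main

import (
	"fmt"
	"time"
)

// Sketch of a doubling backoff with a ceiling. The 1s base and 2m limit are
// assumed for illustration; after 7 failures the delay is 64s, matching the
// "durationBeforeRetry 1m4s" in the log entry above.
func backoff(failures int, base, limit time.Duration) time.Duration {
	d := base
	for i := 1; i < failures; i++ {
		d *= 2
		if d > limit {
			return limit
		}
	}
	return d
}

func main() {
	fmt.Println(backoff(7, time.Second, 2*time.Minute)) // prints 1m4s
}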
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 13:32:04 crc kubenswrapper[4919]: I0109 13:32:04.822398 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:04 crc kubenswrapper[4919]: I0109 13:32:04.822450 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:04 crc kubenswrapper[4919]: I0109 13:32:04.822461 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:04 crc kubenswrapper[4919]: I0109 13:32:04.822483 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:04 crc kubenswrapper[4919]: I0109 13:32:04.822499 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:04Z","lastTransitionTime":"2026-01-09T13:32:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:04 crc kubenswrapper[4919]: I0109 13:32:04.926378 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:04 crc kubenswrapper[4919]: I0109 13:32:04.926472 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:04 crc kubenswrapper[4919]: I0109 13:32:04.926488 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:04 crc kubenswrapper[4919]: I0109 13:32:04.926512 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:04 crc kubenswrapper[4919]: I0109 13:32:04.926526 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:04Z","lastTransitionTime":"2026-01-09T13:32:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:32:05 crc kubenswrapper[4919]: I0109 13:32:05.030065 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:05 crc kubenswrapper[4919]: I0109 13:32:05.030138 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:05 crc kubenswrapper[4919]: I0109 13:32:05.030159 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:05 crc kubenswrapper[4919]: I0109 13:32:05.030188 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:05 crc kubenswrapper[4919]: I0109 13:32:05.030238 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:05Z","lastTransitionTime":"2026-01-09T13:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:05 crc kubenswrapper[4919]: I0109 13:32:05.133824 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:05 crc kubenswrapper[4919]: I0109 13:32:05.134345 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:05 crc kubenswrapper[4919]: I0109 13:32:05.134397 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:05 crc kubenswrapper[4919]: I0109 13:32:05.134426 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:05 crc kubenswrapper[4919]: I0109 13:32:05.134748 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:05Z","lastTransitionTime":"2026-01-09T13:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:05 crc kubenswrapper[4919]: I0109 13:32:05.238442 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:05 crc kubenswrapper[4919]: I0109 13:32:05.238507 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:05 crc kubenswrapper[4919]: I0109 13:32:05.238529 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:05 crc kubenswrapper[4919]: I0109 13:32:05.238559 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:05 crc kubenswrapper[4919]: I0109 13:32:05.238580 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:05Z","lastTransitionTime":"2026-01-09T13:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:32:05 crc kubenswrapper[4919]: I0109 13:32:05.341411 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:05 crc kubenswrapper[4919]: I0109 13:32:05.341527 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:05 crc kubenswrapper[4919]: I0109 13:32:05.341552 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:05 crc kubenswrapper[4919]: I0109 13:32:05.341581 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:05 crc kubenswrapper[4919]: I0109 13:32:05.341598 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:05Z","lastTransitionTime":"2026-01-09T13:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:05 crc kubenswrapper[4919]: I0109 13:32:05.444332 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:05 crc kubenswrapper[4919]: I0109 13:32:05.444393 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:05 crc kubenswrapper[4919]: I0109 13:32:05.444411 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:05 crc kubenswrapper[4919]: I0109 13:32:05.444435 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:05 crc kubenswrapper[4919]: I0109 13:32:05.444454 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:05Z","lastTransitionTime":"2026-01-09T13:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:05 crc kubenswrapper[4919]: I0109 13:32:05.547759 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:05 crc kubenswrapper[4919]: I0109 13:32:05.547810 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:05 crc kubenswrapper[4919]: I0109 13:32:05.547825 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:05 crc kubenswrapper[4919]: I0109 13:32:05.547848 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:05 crc kubenswrapper[4919]: I0109 13:32:05.547866 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:05Z","lastTransitionTime":"2026-01-09T13:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:32:05 crc kubenswrapper[4919]: I0109 13:32:05.963315 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:05 crc kubenswrapper[4919]: I0109 13:32:05.963384 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:05 crc kubenswrapper[4919]: I0109 13:32:05.963404 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:05 crc kubenswrapper[4919]: I0109 13:32:05.963434 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:05 crc kubenswrapper[4919]: I0109 13:32:05.963455 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:05Z","lastTransitionTime":"2026-01-09T13:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:06 crc kubenswrapper[4919]: I0109 13:32:06.066791 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:06 crc kubenswrapper[4919]: I0109 13:32:06.066869 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:06 crc kubenswrapper[4919]: I0109 13:32:06.066891 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:06 crc kubenswrapper[4919]: I0109 13:32:06.066924 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:06 crc kubenswrapper[4919]: I0109 13:32:06.066948 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:06Z","lastTransitionTime":"2026-01-09T13:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:06 crc kubenswrapper[4919]: I0109 13:32:06.169998 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:06 crc kubenswrapper[4919]: I0109 13:32:06.170074 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:06 crc kubenswrapper[4919]: I0109 13:32:06.170091 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:06 crc kubenswrapper[4919]: I0109 13:32:06.170119 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:06 crc kubenswrapper[4919]: I0109 13:32:06.170140 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:06Z","lastTransitionTime":"2026-01-09T13:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:32:06 crc kubenswrapper[4919]: I0109 13:32:06.275063 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:06 crc kubenswrapper[4919]: I0109 13:32:06.275134 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:06 crc kubenswrapper[4919]: I0109 13:32:06.275151 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:06 crc kubenswrapper[4919]: I0109 13:32:06.275181 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:06 crc kubenswrapper[4919]: I0109 13:32:06.275201 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:06Z","lastTransitionTime":"2026-01-09T13:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:06 crc kubenswrapper[4919]: I0109 13:32:06.378984 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:06 crc kubenswrapper[4919]: I0109 13:32:06.379065 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:06 crc kubenswrapper[4919]: I0109 13:32:06.379085 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:06 crc kubenswrapper[4919]: I0109 13:32:06.379112 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:06 crc kubenswrapper[4919]: I0109 13:32:06.379131 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:06Z","lastTransitionTime":"2026-01-09T13:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:06 crc kubenswrapper[4919]: I0109 13:32:06.482757 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:06 crc kubenswrapper[4919]: I0109 13:32:06.482840 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:06 crc kubenswrapper[4919]: I0109 13:32:06.482868 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:06 crc kubenswrapper[4919]: I0109 13:32:06.483095 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:06 crc kubenswrapper[4919]: I0109 13:32:06.483119 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:06Z","lastTransitionTime":"2026-01-09T13:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:32:06 crc kubenswrapper[4919]: I0109 13:32:06.587436 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:06 crc kubenswrapper[4919]: I0109 13:32:06.587521 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:06 crc kubenswrapper[4919]: I0109 13:32:06.587545 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:06 crc kubenswrapper[4919]: I0109 13:32:06.587572 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:06 crc kubenswrapper[4919]: I0109 13:32:06.587595 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:06Z","lastTransitionTime":"2026-01-09T13:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:06 crc kubenswrapper[4919]: I0109 13:32:06.691761 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:06 crc kubenswrapper[4919]: I0109 13:32:06.691832 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:06 crc kubenswrapper[4919]: I0109 13:32:06.691853 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:06 crc kubenswrapper[4919]: I0109 13:32:06.691908 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:06 crc kubenswrapper[4919]: I0109 13:32:06.691929 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:06Z","lastTransitionTime":"2026-01-09T13:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:06 crc kubenswrapper[4919]: I0109 13:32:06.751401 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 13:32:06 crc kubenswrapper[4919]: E0109 13:32:06.752418 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
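[Annotation: every record so far points at one root cause: the kubelet reports NetworkReady=false because /etc/kubernetes/cni/net.d/ contains no CNI configuration. A minimal check from a shell on the node (the directory path is taken from the log; the expected file name is an assumption for OVN-Kubernetes):]
  # Confirm the directory the kubelet is complaining about is actually empty
  # (requires node access, e.g. via `oc debug node/crc`).
  $ ls -l /etc/kubernetes/cni/net.d/
  # Once OVN-Kubernetes is healthy it writes its CNI config here
  # (typically a file such as 10-ovn-kubernetes.conf -- assumed name).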
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 13:32:06 crc kubenswrapper[4919]: I0109 13:32:06.795087 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:06 crc kubenswrapper[4919]: I0109 13:32:06.795168 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:06 crc kubenswrapper[4919]: I0109 13:32:06.795187 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:06 crc kubenswrapper[4919]: I0109 13:32:06.795254 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:06 crc kubenswrapper[4919]: I0109 13:32:06.795275 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:06Z","lastTransitionTime":"2026-01-09T13:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:06 crc kubenswrapper[4919]: I0109 13:32:06.898604 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:06 crc kubenswrapper[4919]: I0109 13:32:06.898695 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:06 crc kubenswrapper[4919]: I0109 13:32:06.898715 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:06 crc kubenswrapper[4919]: I0109 13:32:06.898747 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:06 crc kubenswrapper[4919]: I0109 13:32:06.898769 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:06Z","lastTransitionTime":"2026-01-09T13:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:32:07 crc kubenswrapper[4919]: I0109 13:32:07.001601 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:07 crc kubenswrapper[4919]: I0109 13:32:07.001661 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:07 crc kubenswrapper[4919]: I0109 13:32:07.001679 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:07 crc kubenswrapper[4919]: I0109 13:32:07.001706 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:07 crc kubenswrapper[4919]: I0109 13:32:07.001730 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:07Z","lastTransitionTime":"2026-01-09T13:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:07 crc kubenswrapper[4919]: I0109 13:32:07.105316 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:07 crc kubenswrapper[4919]: I0109 13:32:07.105387 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:07 crc kubenswrapper[4919]: I0109 13:32:07.105410 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:07 crc kubenswrapper[4919]: I0109 13:32:07.105441 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:07 crc kubenswrapper[4919]: I0109 13:32:07.105464 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:07Z","lastTransitionTime":"2026-01-09T13:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:07 crc kubenswrapper[4919]: I0109 13:32:07.209377 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:07 crc kubenswrapper[4919]: I0109 13:32:07.209440 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:07 crc kubenswrapper[4919]: I0109 13:32:07.209457 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:07 crc kubenswrapper[4919]: I0109 13:32:07.209487 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:07 crc kubenswrapper[4919]: I0109 13:32:07.209521 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:07Z","lastTransitionTime":"2026-01-09T13:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:32:07 crc kubenswrapper[4919]: I0109 13:32:07.312704 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:07 crc kubenswrapper[4919]: I0109 13:32:07.312774 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:07 crc kubenswrapper[4919]: I0109 13:32:07.312794 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:07 crc kubenswrapper[4919]: I0109 13:32:07.312821 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:07 crc kubenswrapper[4919]: I0109 13:32:07.312841 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:07Z","lastTransitionTime":"2026-01-09T13:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:07 crc kubenswrapper[4919]: I0109 13:32:07.415915 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:07 crc kubenswrapper[4919]: I0109 13:32:07.415974 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:07 crc kubenswrapper[4919]: I0109 13:32:07.416001 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:07 crc kubenswrapper[4919]: I0109 13:32:07.416030 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:07 crc kubenswrapper[4919]: I0109 13:32:07.416063 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:07Z","lastTransitionTime":"2026-01-09T13:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:07 crc kubenswrapper[4919]: I0109 13:32:07.519085 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:07 crc kubenswrapper[4919]: I0109 13:32:07.519157 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:07 crc kubenswrapper[4919]: I0109 13:32:07.519182 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:07 crc kubenswrapper[4919]: I0109 13:32:07.519238 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:07 crc kubenswrapper[4919]: I0109 13:32:07.519258 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:07Z","lastTransitionTime":"2026-01-09T13:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:32:07 crc kubenswrapper[4919]: I0109 13:32:07.622697 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:07 crc kubenswrapper[4919]: I0109 13:32:07.622765 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:07 crc kubenswrapper[4919]: I0109 13:32:07.622787 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:07 crc kubenswrapper[4919]: I0109 13:32:07.622819 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:07 crc kubenswrapper[4919]: I0109 13:32:07.622842 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:07Z","lastTransitionTime":"2026-01-09T13:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:07 crc kubenswrapper[4919]: I0109 13:32:07.726788 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:07 crc kubenswrapper[4919]: I0109 13:32:07.726847 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:07 crc kubenswrapper[4919]: I0109 13:32:07.726864 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:07 crc kubenswrapper[4919]: I0109 13:32:07.726891 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:07 crc kubenswrapper[4919]: I0109 13:32:07.726909 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:07Z","lastTransitionTime":"2026-01-09T13:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:07 crc kubenswrapper[4919]: I0109 13:32:07.751027 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 13:32:07 crc kubenswrapper[4919]: I0109 13:32:07.751447 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xkhdz" Jan 09 13:32:07 crc kubenswrapper[4919]: I0109 13:32:07.751628 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 13:32:07 crc kubenswrapper[4919]: E0109 13:32:07.751672 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
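[Annotation: "No sandbox for pod can be found" means the kubelet wants to (re)create these pods' sandboxes, but sandbox creation cannot complete while CNI is unavailable. A sketch for checking sandbox state from the node, assuming crictl is present (pod names taken from the records above):]
  # List CRI-O pod sandboxes for the affected pods; while the CNI config is
  # missing, no new sandbox for them will reach Ready state.
  $ sudo crictl pods --name network-check-source
  $ sudo crictl pods --name network-metrics-daemon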
pod="openshift-multus/network-metrics-daemon-xkhdz" podUID="7a2e9878-6b0e-4328-a3ca-9f828fb105c9" Jan 09 13:32:07 crc kubenswrapper[4919]: E0109 13:32:07.752045 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 13:32:07 crc kubenswrapper[4919]: I0109 13:32:07.752095 4919 scope.go:117] "RemoveContainer" containerID="af9d1f7638ecbd19ef127f9dedcf9c618013f2e6cbd661173a0eead07c7023a9" Jan 09 13:32:07 crc kubenswrapper[4919]: E0109 13:32:07.752163 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 13:32:07 crc kubenswrapper[4919]: E0109 13:32:07.752647 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-w74hl_openshift-ovn-kubernetes(4a11a9b6-2419-4f04-b35e-ba296d70b705)\"" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" podUID="4a11a9b6-2419-4f04-b35e-ba296d70b705" Jan 09 13:32:07 crc kubenswrapper[4919]: I0109 13:32:07.829776 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:07 crc kubenswrapper[4919]: I0109 13:32:07.829844 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:07 crc kubenswrapper[4919]: I0109 13:32:07.829862 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:07 crc kubenswrapper[4919]: I0109 13:32:07.829888 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:07 crc kubenswrapper[4919]: I0109 13:32:07.829907 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:07Z","lastTransitionTime":"2026-01-09T13:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:32:07 crc kubenswrapper[4919]: I0109 13:32:07.934722 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:07 crc kubenswrapper[4919]: I0109 13:32:07.935187 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:07 crc kubenswrapper[4919]: I0109 13:32:07.935242 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:07 crc kubenswrapper[4919]: I0109 13:32:07.935273 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:07 crc kubenswrapper[4919]: I0109 13:32:07.935293 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:07Z","lastTransitionTime":"2026-01-09T13:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:08 crc kubenswrapper[4919]: I0109 13:32:08.039945 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:08 crc kubenswrapper[4919]: I0109 13:32:08.040059 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:08 crc kubenswrapper[4919]: I0109 13:32:08.040094 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:08 crc kubenswrapper[4919]: I0109 13:32:08.040129 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:08 crc kubenswrapper[4919]: I0109 13:32:08.040163 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:08Z","lastTransitionTime":"2026-01-09T13:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:08 crc kubenswrapper[4919]: I0109 13:32:08.144400 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:08 crc kubenswrapper[4919]: I0109 13:32:08.144478 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:08 crc kubenswrapper[4919]: I0109 13:32:08.144500 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:08 crc kubenswrapper[4919]: I0109 13:32:08.144534 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:08 crc kubenswrapper[4919]: I0109 13:32:08.144559 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:08Z","lastTransitionTime":"2026-01-09T13:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:32:08 crc kubenswrapper[4919]: I0109 13:32:08.247781 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:08 crc kubenswrapper[4919]: I0109 13:32:08.247849 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:08 crc kubenswrapper[4919]: I0109 13:32:08.247868 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:08 crc kubenswrapper[4919]: I0109 13:32:08.247891 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:08 crc kubenswrapper[4919]: I0109 13:32:08.247910 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:08Z","lastTransitionTime":"2026-01-09T13:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:08 crc kubenswrapper[4919]: I0109 13:32:08.351306 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:08 crc kubenswrapper[4919]: I0109 13:32:08.351457 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:08 crc kubenswrapper[4919]: I0109 13:32:08.351479 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:08 crc kubenswrapper[4919]: I0109 13:32:08.351505 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:08 crc kubenswrapper[4919]: I0109 13:32:08.351525 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:08Z","lastTransitionTime":"2026-01-09T13:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:08 crc kubenswrapper[4919]: I0109 13:32:08.469730 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:08 crc kubenswrapper[4919]: I0109 13:32:08.469799 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:08 crc kubenswrapper[4919]: I0109 13:32:08.469818 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:08 crc kubenswrapper[4919]: I0109 13:32:08.469845 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:08 crc kubenswrapper[4919]: I0109 13:32:08.469866 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:08Z","lastTransitionTime":"2026-01-09T13:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:32:08 crc kubenswrapper[4919]: I0109 13:32:08.573424 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:08 crc kubenswrapper[4919]: I0109 13:32:08.573501 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:08 crc kubenswrapper[4919]: I0109 13:32:08.573520 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:08 crc kubenswrapper[4919]: I0109 13:32:08.573582 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:08 crc kubenswrapper[4919]: I0109 13:32:08.573606 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:08Z","lastTransitionTime":"2026-01-09T13:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:08 crc kubenswrapper[4919]: I0109 13:32:08.677507 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:08 crc kubenswrapper[4919]: I0109 13:32:08.677927 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:08 crc kubenswrapper[4919]: I0109 13:32:08.678075 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:08 crc kubenswrapper[4919]: I0109 13:32:08.678257 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:08 crc kubenswrapper[4919]: I0109 13:32:08.678400 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:08Z","lastTransitionTime":"2026-01-09T13:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:08 crc kubenswrapper[4919]: I0109 13:32:08.750917 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 13:32:08 crc kubenswrapper[4919]: E0109 13:32:08.751150 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
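[Annotation: the one record here that is not a symptom of the missing CNI config is the CrashLoopBackOff for ovnkube-controller: that container is part of OVN-Kubernetes itself, so its crash loop is the likely reason the CNI config never gets written. A sketch for pulling the previous (failed) run's log, with the pod and container names taken from the record above:]
  # Show the log of the last terminated ovnkube-controller instance behind
  # the CrashLoopBackOff; --previous selects the failed run.
  $ oc logs -n openshift-ovn-kubernetes ovnkube-node-w74hl -c ovnkube-controller --previous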
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 13:32:08 crc kubenswrapper[4919]: I0109 13:32:08.782003 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:08 crc kubenswrapper[4919]: I0109 13:32:08.782069 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:08 crc kubenswrapper[4919]: I0109 13:32:08.782088 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:08 crc kubenswrapper[4919]: I0109 13:32:08.782116 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:08 crc kubenswrapper[4919]: I0109 13:32:08.782141 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:08Z","lastTransitionTime":"2026-01-09T13:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:08 crc kubenswrapper[4919]: I0109 13:32:08.886415 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:08 crc kubenswrapper[4919]: I0109 13:32:08.886556 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:08 crc kubenswrapper[4919]: I0109 13:32:08.886581 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:08 crc kubenswrapper[4919]: I0109 13:32:08.886620 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:08 crc kubenswrapper[4919]: I0109 13:32:08.886655 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:08Z","lastTransitionTime":"2026-01-09T13:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:32:08 crc kubenswrapper[4919]: I0109 13:32:08.991337 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:08 crc kubenswrapper[4919]: I0109 13:32:08.991442 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:08 crc kubenswrapper[4919]: I0109 13:32:08.991460 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:08 crc kubenswrapper[4919]: I0109 13:32:08.991491 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:08 crc kubenswrapper[4919]: I0109 13:32:08.991511 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:08Z","lastTransitionTime":"2026-01-09T13:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.096364 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.096451 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.096470 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.096497 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.096519 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:09Z","lastTransitionTime":"2026-01-09T13:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.200103 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.200203 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.200262 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.200295 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.200321 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:09Z","lastTransitionTime":"2026-01-09T13:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
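[Annotation: the same Ready=False condition the kubelet keeps recording is also visible from the API side, which helps when correlating with the node-status patch attempts recorded next. A minimal sketch, assuming working oc credentials for the cluster:]
  # Print each node condition as type=status; Ready should read False while
  # the CNI configuration is missing.
  $ oc get node crc -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'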
Jan 09 13:32:09 crc kubenswrapper[4919]: E0109 13:32:09.260263 4919 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:32:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:32:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:32:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:32:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:32:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:32:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:32:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:32:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a043b745-924a-464c-80aa-f4df877f55bf\\\",\\\"systemUUID\\\":\\\"4cea77be-9aeb-4181-a0b4-b60e5a362fd9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:32:09Z is after 2025-08-24T17:21:41Z" Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.266474 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.266595 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.266618 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.266646 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.266667 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:09Z","lastTransitionTime":"2026-01-09T13:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:09 crc kubenswrapper[4919]: E0109 13:32:09.289080 4919 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:32:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:32:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:32:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:32:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:32:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:32:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:32:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:32:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a043b745-924a-464c-80aa-f4df877f55bf\\\",\\\"systemUUID\\\":\\\"4cea77be-9aeb-4181-a0b4-b60e5a362fd9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:32:09Z is after 2025-08-24T17:21:41Z" Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.295946 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.296029 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.296048 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.296080 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.296103 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:09Z","lastTransitionTime":"2026-01-09T13:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:09 crc kubenswrapper[4919]: E0109 13:32:09.317981 4919 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:32:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:32:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:32:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:32:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:32:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:32:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:32:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:32:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a043b745-924a-464c-80aa-f4df877f55bf\\\",\\\"systemUUID\\\":\\\"4cea77be-9aeb-4181-a0b4-b60e5a362fd9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:32:09Z is after 2025-08-24T17:21:41Z" Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.323925 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.323989 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.324011 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.324037 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.324055 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:09Z","lastTransitionTime":"2026-01-09T13:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:09 crc kubenswrapper[4919]: E0109 13:32:09.346138 4919 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:32:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:32:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:32:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:32:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:32:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:32:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:32:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:32:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a043b745-924a-464c-80aa-f4df877f55bf\\\",\\\"systemUUID\\\":\\\"4cea77be-9aeb-4181-a0b4-b60e5a362fd9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:32:09Z is after 2025-08-24T17:21:41Z" Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.351551 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.351611 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.351630 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.351655 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.351674 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:09Z","lastTransitionTime":"2026-01-09T13:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:09 crc kubenswrapper[4919]: E0109 13:32:09.370712 4919 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:32:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:32:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:32:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:32:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:32:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:32:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T13:32:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T13:32:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a043b745-924a-464c-80aa-f4df877f55bf\\\",\\\"systemUUID\\\":\\\"4cea77be-9aeb-4181-a0b4-b60e5a362fd9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:32:09Z is after 2025-08-24T17:21:41Z" Jan 09 13:32:09 crc kubenswrapper[4919]: E0109 13:32:09.370930 4919 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.373127 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.373231 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.373265 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.373297 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.373316 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:09Z","lastTransitionTime":"2026-01-09T13:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.476702 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.476759 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.476776 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.476803 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.476821 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:09Z","lastTransitionTime":"2026-01-09T13:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.579810 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.579895 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.579919 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.579950 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.579982 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:09Z","lastTransitionTime":"2026-01-09T13:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.685027 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.685101 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.685118 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.685148 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.685167 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:09Z","lastTransitionTime":"2026-01-09T13:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.750714 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xkhdz" Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.750740 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 13:32:09 crc kubenswrapper[4919]: E0109 13:32:09.750937 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xkhdz" podUID="7a2e9878-6b0e-4328-a3ca-9f828fb105c9" Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.750740 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 13:32:09 crc kubenswrapper[4919]: E0109 13:32:09.751083 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 13:32:09 crc kubenswrapper[4919]: E0109 13:32:09.751302 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.788350 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.788424 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.788441 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.788469 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.788488 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:09Z","lastTransitionTime":"2026-01-09T13:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.891895 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.891984 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.892004 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.892032 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.892052 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:09Z","lastTransitionTime":"2026-01-09T13:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.996643 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.996734 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.996760 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.996795 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:09 crc kubenswrapper[4919]: I0109 13:32:09.996818 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:09Z","lastTransitionTime":"2026-01-09T13:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:10 crc kubenswrapper[4919]: I0109 13:32:10.101021 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:10 crc kubenswrapper[4919]: I0109 13:32:10.101079 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:10 crc kubenswrapper[4919]: I0109 13:32:10.101097 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:10 crc kubenswrapper[4919]: I0109 13:32:10.101130 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:10 crc kubenswrapper[4919]: I0109 13:32:10.101149 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:10Z","lastTransitionTime":"2026-01-09T13:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:10 crc kubenswrapper[4919]: I0109 13:32:10.205022 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:10 crc kubenswrapper[4919]: I0109 13:32:10.205070 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:10 crc kubenswrapper[4919]: I0109 13:32:10.205087 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:10 crc kubenswrapper[4919]: I0109 13:32:10.205112 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:10 crc kubenswrapper[4919]: I0109 13:32:10.205132 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:10Z","lastTransitionTime":"2026-01-09T13:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:32:10 crc kubenswrapper[4919]: I0109 13:32:10.308846 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:10 crc kubenswrapper[4919]: I0109 13:32:10.308906 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:10 crc kubenswrapper[4919]: I0109 13:32:10.308924 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:10 crc kubenswrapper[4919]: I0109 13:32:10.308949 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:10 crc kubenswrapper[4919]: I0109 13:32:10.308973 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:10Z","lastTransitionTime":"2026-01-09T13:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:10 crc kubenswrapper[4919]: I0109 13:32:10.411984 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:10 crc kubenswrapper[4919]: I0109 13:32:10.412070 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:10 crc kubenswrapper[4919]: I0109 13:32:10.412088 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:10 crc kubenswrapper[4919]: I0109 13:32:10.412116 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:10 crc kubenswrapper[4919]: I0109 13:32:10.412140 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:10Z","lastTransitionTime":"2026-01-09T13:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:10 crc kubenswrapper[4919]: I0109 13:32:10.516097 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:10 crc kubenswrapper[4919]: I0109 13:32:10.516148 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:10 crc kubenswrapper[4919]: I0109 13:32:10.516160 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:10 crc kubenswrapper[4919]: I0109 13:32:10.516179 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:10 crc kubenswrapper[4919]: I0109 13:32:10.516192 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:10Z","lastTransitionTime":"2026-01-09T13:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:32:10 crc kubenswrapper[4919]: I0109 13:32:10.618834 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:10 crc kubenswrapper[4919]: I0109 13:32:10.618908 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:10 crc kubenswrapper[4919]: I0109 13:32:10.618932 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:10 crc kubenswrapper[4919]: I0109 13:32:10.618966 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:10 crc kubenswrapper[4919]: I0109 13:32:10.618995 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:10Z","lastTransitionTime":"2026-01-09T13:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:10 crc kubenswrapper[4919]: I0109 13:32:10.722112 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:10 crc kubenswrapper[4919]: I0109 13:32:10.722201 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:10 crc kubenswrapper[4919]: I0109 13:32:10.722263 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:10 crc kubenswrapper[4919]: I0109 13:32:10.722294 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:10 crc kubenswrapper[4919]: I0109 13:32:10.722316 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:10Z","lastTransitionTime":"2026-01-09T13:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:10 crc kubenswrapper[4919]: I0109 13:32:10.751615 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 13:32:10 crc kubenswrapper[4919]: E0109 13:32:10.751894 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 13:32:10 crc kubenswrapper[4919]: I0109 13:32:10.770407 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2d79db8-b1e1-43cb-b39f-aea72914778d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d03488cb3bf92b2cf5ae2daac3b83d4925c14e6bbf4789a0ed00e4caf275a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a131a5c3b7ddf092cba3a77f0ed07915fd0d2145eae04906963ab88d015f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8a131a5c3b7ddf092cba3a77f0ed07915fd0d2145eae04906963ab88d015f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:32:10Z is after 2025-08-24T17:21:41Z" Jan 
09 13:32:10 crc kubenswrapper[4919]: I0109 13:32:10.796672 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a055b487-5d63-4265-ac12-735612354e73\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903fbe8270e684937c51c54aeaaaaf7cfed5e77f6d76d5874210ffcf608a9afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3759051eb43a6806b9525498f376abc060e88ba0d7bc9274e84eaabfdaefd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8db1dada305e10f10cb4fe92cfaa3fc88e2e2ec3338fa31efc240af0888f60c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\
":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86b3f68aa173942f9d95a3832234e0bbd5ccacf63cd73b8d110708ebd0978a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d7857918228a0905bb39cd161f6ba21561cf49ce7980a1835e5b3cc68cef3c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T13:30:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0109 13:30:33.398307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 13:30:33.400005 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4019686595/tls.crt::/tmp/serving-cert-4019686595/tls.key\\\\\\\"\\\\nI0109 13:30:43.320927 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 13:30:43.328283 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 13:30:43.328361 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 13:30:43.328421 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 13:30:43.328435 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 13:30:43.338653 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 13:30:43.338748 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 13:30:43.338785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 13:30:43.338794 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 13:30:43.338801 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 13:30:43.338808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 13:30:43.339373 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 13:30:43.343716 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b2d0c02e83be85aa5925b8f607809b6bbf3db3188f4e165ca42eb04fe694da6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:32:10Z is after 2025-08-24T17:21:41Z" Jan 09 13:32:10 crc kubenswrapper[4919]: I0109 13:32:10.815608 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9797b243-6d0f-4f8b-8b3d-b92ac439e3bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d15e612b4abcc61c356602fa521bd156a5e2f5b1e89bbf48b2bceac8a06fbca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d24ffabc3436ac75e2611506f1d4d40faed59e4fa4c618523275331408bb219d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae0a71cfd94d80d04efad2c5671e1a6422ee373da4fc7ab38e36198e3fcad96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5ef78571904b7b98f20f7eb25e83def8b64fc7e86e3c376ee2c6a00334e8667\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5ef78571904b7b98f20f7eb25e83def8b64fc7e86e3c376ee2c6a00334e8667\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:32:10Z is after 2025-08-24T17:21:41Z" Jan 09 13:32:10 crc kubenswrapper[4919]: I0109 13:32:10.826118 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:10 crc kubenswrapper[4919]: I0109 13:32:10.826151 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:10 crc kubenswrapper[4919]: I0109 13:32:10.826163 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:10 crc kubenswrapper[4919]: I0109 13:32:10.826181 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:10 crc kubenswrapper[4919]: I0109 13:32:10.826193 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:10Z","lastTransitionTime":"2026-01-09T13:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:32:10 crc kubenswrapper[4919]: I0109 13:32:10.834591 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c494f87fb3dc8455ed5de9b4073f1b69d9d96c3b2a4d260cb5d57b9df1e825e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:32:10Z is after 2025-08-24T17:21:41Z" Jan 09 13:32:10 crc kubenswrapper[4919]: I0109 13:32:10.853394 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81e6730e2d4c2375d62687a3919410fd7440dc63ec931c008941ab1882e625c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b529d70339efbd46e30e992ceb5afe2db97bdde02f50798c9eb61dfa23d7f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n299m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9m5lv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:32:10Z is after 2025-08-24T17:21:41Z" Jan 09 13:32:10 crc kubenswrapper[4919]: I0109 13:32:10.878021 4919 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-kgw8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11e19b4a-0888-460f-bf97-5dd0ddda6e8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dd4aa1459db1d095dd8a4d538ce3dc77e934eaaa815c7b700de8ee6ae8cc25a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3890ccf2af2b3f8c6648e7b36c716d7b9ef45d6821c5c7162c455641399184a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T13:31:32Z\\\",\\\"message\\\":\\\"2026-01-09T13:30:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6c242d58-4d9b-4293-8565-97eb1a2c9c17\\\\n2026-01-09T13:30:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6c242d58-4d9b-4293-8565-97eb1a2c9c17 to /host/opt/cni/bin/\\\\n2026-01-09T13:30:47Z [verbose] multus-daemon started\\\\n2026-01-09T13:30:47Z [verbose] Readiness Indicator file check\\\\n2026-01-09T13:31:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:31:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srz24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgw8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:32:10Z is after 2025-08-24T17:21:41Z" Jan 09 13:32:10 crc kubenswrapper[4919]: I0109 13:32:10.895421 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xkhdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a2e9878-6b0e-4328-a3ca-9f828fb105c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrkwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:31:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xkhdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:32:10Z is after 2025-08-24T17:21:41Z" Jan 09 13:32:10 crc kubenswrapper[4919]: I0109 13:32:10.928462 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:10 crc kubenswrapper[4919]: I0109 13:32:10.928501 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:10 crc kubenswrapper[4919]: I0109 13:32:10.928513 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:10 crc kubenswrapper[4919]: I0109 13:32:10.928530 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:10 crc kubenswrapper[4919]: I0109 13:32:10.928543 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:10Z","lastTransitionTime":"2026-01-09T13:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:32:10 crc kubenswrapper[4919]: I0109 13:32:10.950479 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70a3367-0b6c-464c-84c2-5ddc03627c0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b1c517e5ba5a7c13919a030e1df61e0a4cc5d89e2b80a2464484387a713d5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://744c9ccecaab78f62335d29db2d18fe4e64b26c28dcd365985f11db160641b70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cae117720dbdc97e6a913c5125978e3f4ec7f01dec42baab8b5fc74e2852db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e74fc6258740a4e5407f1d22189c536019faf85e5fc1c5b698938ceda3c5659f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a70c88bf2025bf78bf359717df98bdab692e5554a2a1a4146b228d7fbf5dee42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f58523d9d4832ebc703441bba8fda6beee24e80b7e364faea23c0c4275cd9c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f58523d9d4832ebc703441bba8fda6beee24e80b7e364faea23c0c4275cd9c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53b8c9deabab605617276a16ba1a63aedfe81246b0d97f575ceb0ecea929efa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53b8c9deabab605617276a16ba1a63aedfe81246b0d97f575ceb0ecea929efa7\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-09T13:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://30cd0abf139e3111a44e517d28e6fd1b81a96a6481f8a9941361b10bc55da501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30cd0abf139e3111a44e517d28e6fd1b81a96a6481f8a9941361b10bc55da501\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:32:10Z is after 2025-08-24T17:21:41Z" Jan 09 13:32:10 crc kubenswrapper[4919]: I0109 13:32:10.975202 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa212ada-bfb5-4aeb-94bb-acf1f0afe319\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a58a444435270a569f4e25649cbfb81bb34700db968dcba7d85b9fdea9006bc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://210c615c5bcdaca954329d575b66f271921f54cd58ff6d75a90d5fc64bd11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dbb1933e5d51eec9dc5963fca537d46986cb88929f55683de676da7f0636133\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6694df4fa2976e9e741fe48d2f16c0b63b29ba80500e4238fc0ff6d6dad53bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:32:10Z is after 2025-08-24T17:21:41Z" Jan 09 13:32:10 crc kubenswrapper[4919]: I0109 13:32:10.993552 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:32:10Z is after 2025-08-24T17:21:41Z" Jan 09 13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.009252 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f66189f627ddedba6b2bcb45f145ef53485a9b0b7401ca5fa09363145fce81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c247b3e6a38c79833e097898da1fe92f3dabc7dcd7b71a618506f054f4fd9c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:32:11Z is after 2025-08-24T17:21:41Z" Jan 09 13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.029102 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a11a9b6-2419-4f04-b35e-ba296d70b705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f97e00dbff0bc7856400366616b2dfea1022f2dc151e8f1d7cbc5d377639b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eecea89afe25c537de66df075217db267c79f45aecda1f91183fad854aa80c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec868806819a987b7e96dbe674677a084fd3c8bbb3bdae6dd584d62df43b78f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac368c1c0799b3fe1830e01a310247ed45949ab217cf83d98b487a12e482157e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d6b21d85be97a07f83c996deca344aba3e2bd0b249d1ce02f15db105649bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15885cccea72f90cf3317dca9a65041611aa4b8de069779e15d43ad39112f1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af9d1f7638ecbd19ef127f9dedcf9c618013f2e6
cbd661173a0eead07c7023a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af9d1f7638ecbd19ef127f9dedcf9c618013f2e6cbd661173a0eead07c7023a9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T13:31:42Z\\\",\\\"message\\\":\\\"ort:2379, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.253\\\\\\\", Port:9979, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0109 13:31:42.322963 7017 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:31:42Z is after 2025-08-24T17:21:41Z]\\\\nI0109 13:31:42.323058 7017 services_controller.go:452] Built service openshift-etcd/etcd per-node LB for network=default: []services.LB{}\\\\nI0109 13:31:42.323036 7017 model_client.go:382] Up\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T13:31:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-w74hl_openshift-ovn-kubernetes(4a11a9b6-2419-4f04-b35e-ba296d70b705)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1618a6491a924e7cd28340bea333585a7c7a634ca063183f21574ed23df002cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6jvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w74hl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:32:11Z is after 2025-08-24T17:21:41Z" Jan 09 13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.031459 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.031527 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.031552 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.031582 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.031605 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:11Z","lastTransitionTime":"2026-01-09T13:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.041165 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bzs4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd1e83cb-5a48-4331-b403-d7a07e8aa67f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://389ab6b02eef188a2713e27d93c70a64a1ab4ebcfe58634cf5266197b0375bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kd6n5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bzs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:32:11Z is after 2025-08-24T17:21:41Z" Jan 09 13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.057765 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9s49l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a361336-2125-49a9-8332-eb66286dcdb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://108cc929d3e1674b5cc9341c92e9d4f5142fc0d87212666efba8890341e8adc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-slm6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd519645b9635f304f7af4e5e832eff6ae2964b35ed15d918bae7b85b51c1de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-slm6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9s49l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:32:11Z is after 2025-08-24T17:21:41Z" Jan 09 
13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.073371 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:32:11Z is after 2025-08-24T17:21:41Z" Jan 09 13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.090438 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d6c98ae6aa3041e47bac4782d73902dc39137b2fc8550efce479bfb9cdf37b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:32:11Z is after 2025-08-24T17:21:41Z" Jan 09 13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.106710 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:32:11Z is after 2025-08-24T17:21:41Z" Jan 09 13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.121659 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9z7cc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1115c0ba-16d5-4e81-a4b4-07ba7f360825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6522bc54695ade8637b22dbf9074c82430ce4c8deb8d5d9631933a1d49d5b4cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkvnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9z7cc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:32:11Z is after 2025-08-24T17:21:41Z" Jan 09 13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.134789 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.134864 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.134893 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.134932 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.134964 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:11Z","lastTransitionTime":"2026-01-09T13:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.143681 4919 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-97zdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21befbc8-9e98-4557-89af-a116cc8c484c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T13:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e336d165af12149ba05542f8cebe8a16c6fd66d3317d27c880b8969e7666691d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T13:30:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d6874
82c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06e894e856400a580889df50d687482c582ba3ec3bd3b5087620e318cf8482bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://660f6644ce7d538bfbd8fa48433be7a9617057dc088a444c974cc5ca2b937260\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cff8a593cc95fef58c97a8581da1c333912d746adabc43c75596f113bf535bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\"
,\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d540c585264febdb7467f985f4dfdbf931a0ec42c42f5660310245c22901bc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://458aeaca27fca436a56c5bb1f4180fc11cc5d27c53d33cd5b00f17e0e6f99322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458aeaca27fca436a56c5bb1f4180fc11cc5d27c53d33cd5b00f17e0e6f99322\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3ea2572c59e697886d3666518ed78dcbe1bde1dd6fc6255e3846d1b34f26cdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\
\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ea2572c59e697886d3666518ed78dcbe1bde1dd6fc6255e3846d1b34f26cdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T13:30:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T13:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhh4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T13:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-97zdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T13:32:11Z is after 2025-08-24T17:21:41Z" Jan 09 13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.238199 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.238289 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.238310 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.238339 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.238362 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:11Z","lastTransitionTime":"2026-01-09T13:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.341753 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.341832 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.341850 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.341874 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.341890 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:11Z","lastTransitionTime":"2026-01-09T13:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.445276 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.445340 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.445359 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.445385 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.445411 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:11Z","lastTransitionTime":"2026-01-09T13:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.548526 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.548581 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.548599 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.548624 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.548637 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:11Z","lastTransitionTime":"2026-01-09T13:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.652174 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.652288 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.652310 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.652342 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.652363 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:11Z","lastTransitionTime":"2026-01-09T13:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 09 13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.751441 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 09 13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.751506 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xkhdz"
Jan 09 13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.751458 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 09 13:32:11 crc kubenswrapper[4919]: E0109 13:32:11.751729 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 09 13:32:11 crc kubenswrapper[4919]: E0109 13:32:11.751881 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xkhdz" podUID="7a2e9878-6b0e-4328-a3ca-9f828fb105c9"
Jan 09 13:32:11 crc kubenswrapper[4919]: E0109 13:32:11.752185 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 09 13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.756052 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.756097 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.756114 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.756141 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.756159 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:11Z","lastTransitionTime":"2026-01-09T13:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.859878 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.859947 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.859970 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.860006 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.860029 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:11Z","lastTransitionTime":"2026-01-09T13:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.963449 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.963511 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.963531 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.963559 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:32:11 crc kubenswrapper[4919]: I0109 13:32:11.963584 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:11Z","lastTransitionTime":"2026-01-09T13:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:32:12 crc kubenswrapper[4919]: I0109 13:32:12.066940 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:32:12 crc kubenswrapper[4919]: I0109 13:32:12.067023 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:32:12 crc kubenswrapper[4919]: I0109 13:32:12.067046 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:32:12 crc kubenswrapper[4919]: I0109 13:32:12.067078 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:32:12 crc kubenswrapper[4919]: I0109 13:32:12.067101 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:12Z","lastTransitionTime":"2026-01-09T13:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:32:12 crc kubenswrapper[4919]: I0109 13:32:12.169987 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:32:12 crc kubenswrapper[4919]: I0109 13:32:12.170079 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:32:12 crc kubenswrapper[4919]: I0109 13:32:12.170103 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:32:12 crc kubenswrapper[4919]: I0109 13:32:12.170135 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:32:12 crc kubenswrapper[4919]: I0109 13:32:12.170159 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:12Z","lastTransitionTime":"2026-01-09T13:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:32:12 crc kubenswrapper[4919]: I0109 13:32:12.273196 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:32:12 crc kubenswrapper[4919]: I0109 13:32:12.273320 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:32:12 crc kubenswrapper[4919]: I0109 13:32:12.273348 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:32:12 crc kubenswrapper[4919]: I0109 13:32:12.273381 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:32:12 crc kubenswrapper[4919]: I0109 13:32:12.273403 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:12Z","lastTransitionTime":"2026-01-09T13:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:32:12 crc kubenswrapper[4919]: I0109 13:32:12.376174 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:32:12 crc kubenswrapper[4919]: I0109 13:32:12.376242 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:32:12 crc kubenswrapper[4919]: I0109 13:32:12.376254 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:32:12 crc kubenswrapper[4919]: I0109 13:32:12.376271 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:32:12 crc kubenswrapper[4919]: I0109 13:32:12.376283 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:12Z","lastTransitionTime":"2026-01-09T13:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:32:12 crc kubenswrapper[4919]: I0109 13:32:12.480156 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:32:12 crc kubenswrapper[4919]: I0109 13:32:12.480258 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:32:12 crc kubenswrapper[4919]: I0109 13:32:12.480281 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:32:12 crc kubenswrapper[4919]: I0109 13:32:12.480309 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:32:12 crc kubenswrapper[4919]: I0109 13:32:12.480331 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:12Z","lastTransitionTime":"2026-01-09T13:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:32:12 crc kubenswrapper[4919]: I0109 13:32:12.582534 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:32:12 crc kubenswrapper[4919]: I0109 13:32:12.582596 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:32:12 crc kubenswrapper[4919]: I0109 13:32:12.582615 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:32:12 crc kubenswrapper[4919]: I0109 13:32:12.582640 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:32:12 crc kubenswrapper[4919]: I0109 13:32:12.582659 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:12Z","lastTransitionTime":"2026-01-09T13:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:32:12 crc kubenswrapper[4919]: I0109 13:32:12.687314 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:32:12 crc kubenswrapper[4919]: I0109 13:32:12.687384 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:32:12 crc kubenswrapper[4919]: I0109 13:32:12.687403 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:32:12 crc kubenswrapper[4919]: I0109 13:32:12.687432 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:32:12 crc kubenswrapper[4919]: I0109 13:32:12.687451 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:12Z","lastTransitionTime":"2026-01-09T13:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:32:12 crc kubenswrapper[4919]: I0109 13:32:12.751340 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 09 13:32:12 crc kubenswrapper[4919]: E0109 13:32:12.751982 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 09 13:32:12 crc kubenswrapper[4919]: I0109 13:32:12.789797 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:32:12 crc kubenswrapper[4919]: I0109 13:32:12.789855 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:32:12 crc kubenswrapper[4919]: I0109 13:32:12.789874 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:32:12 crc kubenswrapper[4919]: I0109 13:32:12.789895 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:32:12 crc kubenswrapper[4919]: I0109 13:32:12.789913 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:12Z","lastTransitionTime":"2026-01-09T13:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:32:12 crc kubenswrapper[4919]: I0109 13:32:12.892959 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:32:12 crc kubenswrapper[4919]: I0109 13:32:12.893033 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:32:12 crc kubenswrapper[4919]: I0109 13:32:12.893054 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:32:12 crc kubenswrapper[4919]: I0109 13:32:12.893084 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:32:12 crc kubenswrapper[4919]: I0109 13:32:12.893106 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:12Z","lastTransitionTime":"2026-01-09T13:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:32:12 crc kubenswrapper[4919]: I0109 13:32:12.997185 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:32:12 crc kubenswrapper[4919]: I0109 13:32:12.997300 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:32:12 crc kubenswrapper[4919]: I0109 13:32:12.997331 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:32:12 crc kubenswrapper[4919]: I0109 13:32:12.997368 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:32:12 crc kubenswrapper[4919]: I0109 13:32:12.997394 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:12Z","lastTransitionTime":"2026-01-09T13:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:32:13 crc kubenswrapper[4919]: I0109 13:32:13.100205 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:32:13 crc kubenswrapper[4919]: I0109 13:32:13.100320 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:32:13 crc kubenswrapper[4919]: I0109 13:32:13.100346 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:32:13 crc kubenswrapper[4919]: I0109 13:32:13.100383 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:32:13 crc kubenswrapper[4919]: I0109 13:32:13.100408 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:13Z","lastTransitionTime":"2026-01-09T13:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:32:13 crc kubenswrapper[4919]: I0109 13:32:13.203884 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:32:13 crc kubenswrapper[4919]: I0109 13:32:13.203977 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:32:13 crc kubenswrapper[4919]: I0109 13:32:13.203999 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:32:13 crc kubenswrapper[4919]: I0109 13:32:13.204029 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:32:13 crc kubenswrapper[4919]: I0109 13:32:13.204048 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:13Z","lastTransitionTime":"2026-01-09T13:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:32:13 crc kubenswrapper[4919]: I0109 13:32:13.307391 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:32:13 crc kubenswrapper[4919]: I0109 13:32:13.307477 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:32:13 crc kubenswrapper[4919]: I0109 13:32:13.307498 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:32:13 crc kubenswrapper[4919]: I0109 13:32:13.307527 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:32:13 crc kubenswrapper[4919]: I0109 13:32:13.307548 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:13Z","lastTransitionTime":"2026-01-09T13:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:32:13 crc kubenswrapper[4919]: I0109 13:32:13.410934 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:32:13 crc kubenswrapper[4919]: I0109 13:32:13.411012 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:32:13 crc kubenswrapper[4919]: I0109 13:32:13.411036 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:32:13 crc kubenswrapper[4919]: I0109 13:32:13.411073 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:32:13 crc kubenswrapper[4919]: I0109 13:32:13.411096 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:13Z","lastTransitionTime":"2026-01-09T13:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:32:13 crc kubenswrapper[4919]: I0109 13:32:13.514298 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:32:13 crc kubenswrapper[4919]: I0109 13:32:13.514352 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:32:13 crc kubenswrapper[4919]: I0109 13:32:13.514369 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:32:13 crc kubenswrapper[4919]: I0109 13:32:13.514394 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:32:13 crc kubenswrapper[4919]: I0109 13:32:13.514412 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:13Z","lastTransitionTime":"2026-01-09T13:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:32:13 crc kubenswrapper[4919]: I0109 13:32:13.617000 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:32:13 crc kubenswrapper[4919]: I0109 13:32:13.617051 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:32:13 crc kubenswrapper[4919]: I0109 13:32:13.617063 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:32:13 crc kubenswrapper[4919]: I0109 13:32:13.617082 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:32:13 crc kubenswrapper[4919]: I0109 13:32:13.617097 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:13Z","lastTransitionTime":"2026-01-09T13:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:32:13 crc kubenswrapper[4919]: I0109 13:32:13.720268 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:32:13 crc kubenswrapper[4919]: I0109 13:32:13.720346 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:32:13 crc kubenswrapper[4919]: I0109 13:32:13.720365 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:32:13 crc kubenswrapper[4919]: I0109 13:32:13.720397 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:32:13 crc kubenswrapper[4919]: I0109 13:32:13.720486 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:13Z","lastTransitionTime":"2026-01-09T13:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:32:13 crc kubenswrapper[4919]: I0109 13:32:13.751688 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 09 13:32:13 crc kubenswrapper[4919]: I0109 13:32:13.751771 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 09 13:32:13 crc kubenswrapper[4919]: E0109 13:32:13.752051 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 09 13:32:13 crc kubenswrapper[4919]: I0109 13:32:13.752161 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xkhdz"
Jan 09 13:32:13 crc kubenswrapper[4919]: E0109 13:32:13.752440 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xkhdz" podUID="7a2e9878-6b0e-4328-a3ca-9f828fb105c9"
Jan 09 13:32:13 crc kubenswrapper[4919]: E0109 13:32:13.753082 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 09 13:32:13 crc kubenswrapper[4919]: I0109 13:32:13.824375 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:32:13 crc kubenswrapper[4919]: I0109 13:32:13.824443 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:32:13 crc kubenswrapper[4919]: I0109 13:32:13.824475 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:32:13 crc kubenswrapper[4919]: I0109 13:32:13.824511 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:32:13 crc kubenswrapper[4919]: I0109 13:32:13.824535 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:13Z","lastTransitionTime":"2026-01-09T13:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:32:13 crc kubenswrapper[4919]: I0109 13:32:13.928066 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:32:13 crc kubenswrapper[4919]: I0109 13:32:13.928142 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:32:13 crc kubenswrapper[4919]: I0109 13:32:13.928162 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:32:13 crc kubenswrapper[4919]: I0109 13:32:13.928192 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:32:13 crc kubenswrapper[4919]: I0109 13:32:13.928233 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:13Z","lastTransitionTime":"2026-01-09T13:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:32:14 crc kubenswrapper[4919]: I0109 13:32:14.031906 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:32:14 crc kubenswrapper[4919]: I0109 13:32:14.031969 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:32:14 crc kubenswrapper[4919]: I0109 13:32:14.031992 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:32:14 crc kubenswrapper[4919]: I0109 13:32:14.032019 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:32:14 crc kubenswrapper[4919]: I0109 13:32:14.032038 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:14Z","lastTransitionTime":"2026-01-09T13:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:32:14 crc kubenswrapper[4919]: I0109 13:32:14.134579 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:32:14 crc kubenswrapper[4919]: I0109 13:32:14.134627 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:32:14 crc kubenswrapper[4919]: I0109 13:32:14.134639 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:32:14 crc kubenswrapper[4919]: I0109 13:32:14.134657 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:32:14 crc kubenswrapper[4919]: I0109 13:32:14.134672 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:14Z","lastTransitionTime":"2026-01-09T13:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:32:14 crc kubenswrapper[4919]: I0109 13:32:14.237912 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:32:14 crc kubenswrapper[4919]: I0109 13:32:14.237977 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:32:14 crc kubenswrapper[4919]: I0109 13:32:14.237992 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:32:14 crc kubenswrapper[4919]: I0109 13:32:14.238014 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:32:14 crc kubenswrapper[4919]: I0109 13:32:14.238029 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:14Z","lastTransitionTime":"2026-01-09T13:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:32:14 crc kubenswrapper[4919]: I0109 13:32:14.340989 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:32:14 crc kubenswrapper[4919]: I0109 13:32:14.341036 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:32:14 crc kubenswrapper[4919]: I0109 13:32:14.341051 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:32:14 crc kubenswrapper[4919]: I0109 13:32:14.341071 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:32:14 crc kubenswrapper[4919]: I0109 13:32:14.341086 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:14Z","lastTransitionTime":"2026-01-09T13:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:32:14 crc kubenswrapper[4919]: I0109 13:32:14.444746 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:32:14 crc kubenswrapper[4919]: I0109 13:32:14.444802 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:32:14 crc kubenswrapper[4919]: I0109 13:32:14.444815 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:32:14 crc kubenswrapper[4919]: I0109 13:32:14.444838 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:32:14 crc kubenswrapper[4919]: I0109 13:32:14.444851 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:14Z","lastTransitionTime":"2026-01-09T13:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:32:14 crc kubenswrapper[4919]: I0109 13:32:14.547924 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:32:14 crc kubenswrapper[4919]: I0109 13:32:14.547989 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:32:14 crc kubenswrapper[4919]: I0109 13:32:14.548002 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:32:14 crc kubenswrapper[4919]: I0109 13:32:14.548024 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:32:14 crc kubenswrapper[4919]: I0109 13:32:14.548038 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:14Z","lastTransitionTime":"2026-01-09T13:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:32:14 crc kubenswrapper[4919]: I0109 13:32:14.651834 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:32:14 crc kubenswrapper[4919]: I0109 13:32:14.651904 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:32:14 crc kubenswrapper[4919]: I0109 13:32:14.651927 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:32:14 crc kubenswrapper[4919]: I0109 13:32:14.651959 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:32:14 crc kubenswrapper[4919]: I0109 13:32:14.651981 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:14Z","lastTransitionTime":"2026-01-09T13:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:32:14 crc kubenswrapper[4919]: I0109 13:32:14.751434 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 09 13:32:14 crc kubenswrapper[4919]: E0109 13:32:14.751693 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 09 13:32:14 crc kubenswrapper[4919]: I0109 13:32:14.754877 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:32:14 crc kubenswrapper[4919]: I0109 13:32:14.754931 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:32:14 crc kubenswrapper[4919]: I0109 13:32:14.754951 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:32:14 crc kubenswrapper[4919]: I0109 13:32:14.754975 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:32:14 crc kubenswrapper[4919]: I0109 13:32:14.754992 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:14Z","lastTransitionTime":"2026-01-09T13:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:32:14 crc kubenswrapper[4919]: I0109 13:32:14.858865 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:32:14 crc kubenswrapper[4919]: I0109 13:32:14.858936 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:32:14 crc kubenswrapper[4919]: I0109 13:32:14.858954 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:32:14 crc kubenswrapper[4919]: I0109 13:32:14.858982 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:32:14 crc kubenswrapper[4919]: I0109 13:32:14.859002 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:14Z","lastTransitionTime":"2026-01-09T13:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:32:14 crc kubenswrapper[4919]: I0109 13:32:14.962671 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:32:14 crc kubenswrapper[4919]: I0109 13:32:14.962725 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:32:14 crc kubenswrapper[4919]: I0109 13:32:14.962743 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:32:14 crc kubenswrapper[4919]: I0109 13:32:14.962770 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:32:14 crc kubenswrapper[4919]: I0109 13:32:14.962789 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:14Z","lastTransitionTime":"2026-01-09T13:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:32:15 crc kubenswrapper[4919]: I0109 13:32:15.066582 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:32:15 crc kubenswrapper[4919]: I0109 13:32:15.066661 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:32:15 crc kubenswrapper[4919]: I0109 13:32:15.066679 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:32:15 crc kubenswrapper[4919]: I0109 13:32:15.066708 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:32:15 crc kubenswrapper[4919]: I0109 13:32:15.066725 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:15Z","lastTransitionTime":"2026-01-09T13:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:32:15 crc kubenswrapper[4919]: I0109 13:32:15.170732 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:32:15 crc kubenswrapper[4919]: I0109 13:32:15.170844 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:32:15 crc kubenswrapper[4919]: I0109 13:32:15.170866 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:32:15 crc kubenswrapper[4919]: I0109 13:32:15.170895 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:32:15 crc kubenswrapper[4919]: I0109 13:32:15.170915 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:15Z","lastTransitionTime":"2026-01-09T13:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:32:15 crc kubenswrapper[4919]: I0109 13:32:15.274547 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:32:15 crc kubenswrapper[4919]: I0109 13:32:15.274660 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:32:15 crc kubenswrapper[4919]: I0109 13:32:15.275384 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:32:15 crc kubenswrapper[4919]: I0109 13:32:15.275432 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:32:15 crc kubenswrapper[4919]: I0109 13:32:15.275453 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:15Z","lastTransitionTime":"2026-01-09T13:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:32:15 crc kubenswrapper[4919]: I0109 13:32:15.378786 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:32:15 crc kubenswrapper[4919]: I0109 13:32:15.378835 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:32:15 crc kubenswrapper[4919]: I0109 13:32:15.378844 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:32:15 crc kubenswrapper[4919]: I0109 13:32:15.378864 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:32:15 crc kubenswrapper[4919]: I0109 13:32:15.378876 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:15Z","lastTransitionTime":"2026-01-09T13:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:32:15 crc kubenswrapper[4919]: I0109 13:32:15.481269 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:32:15 crc kubenswrapper[4919]: I0109 13:32:15.481339 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:32:15 crc kubenswrapper[4919]: I0109 13:32:15.481352 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:32:15 crc kubenswrapper[4919]: I0109 13:32:15.481375 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:32:15 crc kubenswrapper[4919]: I0109 13:32:15.481388 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:15Z","lastTransitionTime":"2026-01-09T13:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:32:15 crc kubenswrapper[4919]: I0109 13:32:15.584618 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:32:15 crc kubenswrapper[4919]: I0109 13:32:15.584674 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:32:15 crc kubenswrapper[4919]: I0109 13:32:15.584693 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:32:15 crc kubenswrapper[4919]: I0109 13:32:15.584716 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:32:15 crc kubenswrapper[4919]: I0109 13:32:15.584733 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:15Z","lastTransitionTime":"2026-01-09T13:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:32:15 crc kubenswrapper[4919]: I0109 13:32:15.688325 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:32:15 crc kubenswrapper[4919]: I0109 13:32:15.688387 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:32:15 crc kubenswrapper[4919]: I0109 13:32:15.688405 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:32:15 crc kubenswrapper[4919]: I0109 13:32:15.688436 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:32:15 crc kubenswrapper[4919]: I0109 13:32:15.688459 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:15Z","lastTransitionTime":"2026-01-09T13:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:32:15 crc kubenswrapper[4919]: I0109 13:32:15.751435 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 09 13:32:15 crc kubenswrapper[4919]: I0109 13:32:15.751583 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 09 13:32:15 crc kubenswrapper[4919]: E0109 13:32:15.751671 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 09 13:32:15 crc kubenswrapper[4919]: I0109 13:32:15.751435 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xkhdz"
Jan 09 13:32:15 crc kubenswrapper[4919]: E0109 13:32:15.751911 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 09 13:32:15 crc kubenswrapper[4919]: E0109 13:32:15.751974 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xkhdz" podUID="7a2e9878-6b0e-4328-a3ca-9f828fb105c9"
Jan 09 13:32:15 crc kubenswrapper[4919]: I0109 13:32:15.792013 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:32:15 crc kubenswrapper[4919]: I0109 13:32:15.792086 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:32:15 crc kubenswrapper[4919]: I0109 13:32:15.792104 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:32:15 crc kubenswrapper[4919]: I0109 13:32:15.792134 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:32:15 crc kubenswrapper[4919]: I0109 13:32:15.792153 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:15Z","lastTransitionTime":"2026-01-09T13:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:32:15 crc kubenswrapper[4919]: I0109 13:32:15.895440 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:32:15 crc kubenswrapper[4919]: I0109 13:32:15.895532 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:32:15 crc kubenswrapper[4919]: I0109 13:32:15.895562 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:32:15 crc kubenswrapper[4919]: I0109 13:32:15.895598 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:32:15 crc kubenswrapper[4919]: I0109 13:32:15.895623 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:15Z","lastTransitionTime":"2026-01-09T13:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:32:16 crc kubenswrapper[4919]: I0109 13:32:16.001513 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:32:16 crc kubenswrapper[4919]: I0109 13:32:16.001954 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:32:16 crc kubenswrapper[4919]: I0109 13:32:16.002163 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:32:16 crc kubenswrapper[4919]: I0109 13:32:16.002440 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:32:16 crc kubenswrapper[4919]: I0109 13:32:16.002660 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:16Z","lastTransitionTime":"2026-01-09T13:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:32:16 crc kubenswrapper[4919]: I0109 13:32:16.105503 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:32:16 crc kubenswrapper[4919]: I0109 13:32:16.105584 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:32:16 crc kubenswrapper[4919]: I0109 13:32:16.105601 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:32:16 crc kubenswrapper[4919]: I0109 13:32:16.105627 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:32:16 crc kubenswrapper[4919]: I0109 13:32:16.105645 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:16Z","lastTransitionTime":"2026-01-09T13:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:32:16 crc kubenswrapper[4919]: I0109 13:32:16.209829 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:32:16 crc kubenswrapper[4919]: I0109 13:32:16.209880 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:32:16 crc kubenswrapper[4919]: I0109 13:32:16.209891 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:32:16 crc kubenswrapper[4919]: I0109 13:32:16.209911 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:32:16 crc kubenswrapper[4919]: I0109 13:32:16.209922 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:16Z","lastTransitionTime":"2026-01-09T13:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:32:16 crc kubenswrapper[4919]: I0109 13:32:16.312666 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:32:16 crc kubenswrapper[4919]: I0109 13:32:16.312727 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:32:16 crc kubenswrapper[4919]: I0109 13:32:16.312745 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:32:16 crc kubenswrapper[4919]: I0109 13:32:16.312772 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:32:16 crc kubenswrapper[4919]: I0109 13:32:16.312791 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:16Z","lastTransitionTime":"2026-01-09T13:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:32:16 crc kubenswrapper[4919]: I0109 13:32:16.416003 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:32:16 crc kubenswrapper[4919]: I0109 13:32:16.416075 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:32:16 crc kubenswrapper[4919]: I0109 13:32:16.416094 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:32:16 crc kubenswrapper[4919]: I0109 13:32:16.416121 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:32:16 crc kubenswrapper[4919]: I0109 13:32:16.416142 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:16Z","lastTransitionTime":"2026-01-09T13:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:32:16 crc kubenswrapper[4919]: I0109 13:32:16.520016 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:32:16 crc kubenswrapper[4919]: I0109 13:32:16.520081 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:32:16 crc kubenswrapper[4919]: I0109 13:32:16.520099 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:32:16 crc kubenswrapper[4919]: I0109 13:32:16.520129 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:32:16 crc kubenswrapper[4919]: I0109 13:32:16.520150 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:16Z","lastTransitionTime":"2026-01-09T13:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:32:16 crc kubenswrapper[4919]: I0109 13:32:16.623909 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:32:16 crc kubenswrapper[4919]: I0109 13:32:16.623974 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:32:16 crc kubenswrapper[4919]: I0109 13:32:16.623996 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:32:16 crc kubenswrapper[4919]: I0109 13:32:16.624038 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:32:16 crc kubenswrapper[4919]: I0109 13:32:16.624057 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:16Z","lastTransitionTime":"2026-01-09T13:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:32:16 crc kubenswrapper[4919]: I0109 13:32:16.727753 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:32:16 crc kubenswrapper[4919]: I0109 13:32:16.727829 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:32:16 crc kubenswrapper[4919]: I0109 13:32:16.727850 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:32:16 crc kubenswrapper[4919]: I0109 13:32:16.727880 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:32:16 crc kubenswrapper[4919]: I0109 13:32:16.727900 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:16Z","lastTransitionTime":"2026-01-09T13:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:32:16 crc kubenswrapper[4919]: I0109 13:32:16.751176 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 09 13:32:16 crc kubenswrapper[4919]: E0109 13:32:16.751413 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 09 13:32:16 crc kubenswrapper[4919]: I0109 13:32:16.830704 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:32:16 crc kubenswrapper[4919]: I0109 13:32:16.830770 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:32:16 crc kubenswrapper[4919]: I0109 13:32:16.830789 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:32:16 crc kubenswrapper[4919]: I0109 13:32:16.830818 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:32:16 crc kubenswrapper[4919]: I0109 13:32:16.830836 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:16Z","lastTransitionTime":"2026-01-09T13:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:32:16 crc kubenswrapper[4919]: I0109 13:32:16.934710 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:32:16 crc kubenswrapper[4919]: I0109 13:32:16.934823 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:32:16 crc kubenswrapper[4919]: I0109 13:32:16.934848 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:32:16 crc kubenswrapper[4919]: I0109 13:32:16.934884 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:32:16 crc kubenswrapper[4919]: I0109 13:32:16.934911 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:16Z","lastTransitionTime":"2026-01-09T13:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:32:17 crc kubenswrapper[4919]: I0109 13:32:17.038673 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:32:17 crc kubenswrapper[4919]: I0109 13:32:17.038756 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:32:17 crc kubenswrapper[4919]: I0109 13:32:17.038776 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:32:17 crc kubenswrapper[4919]: I0109 13:32:17.038805 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:32:17 crc kubenswrapper[4919]: I0109 13:32:17.038831 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:17Z","lastTransitionTime":"2026-01-09T13:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:32:17 crc kubenswrapper[4919]: I0109 13:32:17.142133 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:32:17 crc kubenswrapper[4919]: I0109 13:32:17.142192 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:32:17 crc kubenswrapper[4919]: I0109 13:32:17.142223 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:32:17 crc kubenswrapper[4919]: I0109 13:32:17.142250 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:32:17 crc kubenswrapper[4919]: I0109 13:32:17.142266 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:17Z","lastTransitionTime":"2026-01-09T13:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 13:32:17 crc kubenswrapper[4919]: I0109 13:32:17.245795 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 13:32:17 crc kubenswrapper[4919]: I0109 13:32:17.245840 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 13:32:17 crc kubenswrapper[4919]: I0109 13:32:17.245879 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 13:32:17 crc kubenswrapper[4919]: I0109 13:32:17.245896 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 13:32:17 crc kubenswrapper[4919]: I0109 13:32:17.246368 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:17Z","lastTransitionTime":"2026-01-09T13:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:32:17 crc kubenswrapper[4919]: I0109 13:32:17.349441 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:17 crc kubenswrapper[4919]: I0109 13:32:17.349477 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:17 crc kubenswrapper[4919]: I0109 13:32:17.349489 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:17 crc kubenswrapper[4919]: I0109 13:32:17.349504 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:17 crc kubenswrapper[4919]: I0109 13:32:17.349517 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:17Z","lastTransitionTime":"2026-01-09T13:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:17 crc kubenswrapper[4919]: I0109 13:32:17.452497 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:17 crc kubenswrapper[4919]: I0109 13:32:17.452551 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:17 crc kubenswrapper[4919]: I0109 13:32:17.452567 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:17 crc kubenswrapper[4919]: I0109 13:32:17.452589 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:17 crc kubenswrapper[4919]: I0109 13:32:17.452608 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:17Z","lastTransitionTime":"2026-01-09T13:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:17 crc kubenswrapper[4919]: I0109 13:32:17.555579 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:17 crc kubenswrapper[4919]: I0109 13:32:17.555658 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:17 crc kubenswrapper[4919]: I0109 13:32:17.555682 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:17 crc kubenswrapper[4919]: I0109 13:32:17.555716 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:17 crc kubenswrapper[4919]: I0109 13:32:17.555737 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:17Z","lastTransitionTime":"2026-01-09T13:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:32:17 crc kubenswrapper[4919]: I0109 13:32:17.660099 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:17 crc kubenswrapper[4919]: I0109 13:32:17.660182 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:17 crc kubenswrapper[4919]: I0109 13:32:17.660200 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:17 crc kubenswrapper[4919]: I0109 13:32:17.660264 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:17 crc kubenswrapper[4919]: I0109 13:32:17.660289 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:17Z","lastTransitionTime":"2026-01-09T13:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:17 crc kubenswrapper[4919]: I0109 13:32:17.751117 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 13:32:17 crc kubenswrapper[4919]: I0109 13:32:17.751123 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xkhdz" Jan 09 13:32:17 crc kubenswrapper[4919]: E0109 13:32:17.751405 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 13:32:17 crc kubenswrapper[4919]: I0109 13:32:17.751136 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 13:32:17 crc kubenswrapper[4919]: E0109 13:32:17.751805 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 13:32:17 crc kubenswrapper[4919]: E0109 13:32:17.751699 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xkhdz" podUID="7a2e9878-6b0e-4328-a3ca-9f828fb105c9" Jan 09 13:32:17 crc kubenswrapper[4919]: I0109 13:32:17.764256 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:17 crc kubenswrapper[4919]: I0109 13:32:17.764320 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:17 crc kubenswrapper[4919]: I0109 13:32:17.764339 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:17 crc kubenswrapper[4919]: I0109 13:32:17.764368 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:17 crc kubenswrapper[4919]: I0109 13:32:17.764388 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:17Z","lastTransitionTime":"2026-01-09T13:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:17 crc kubenswrapper[4919]: I0109 13:32:17.867525 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:17 crc kubenswrapper[4919]: I0109 13:32:17.867616 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:17 crc kubenswrapper[4919]: I0109 13:32:17.867636 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:17 crc kubenswrapper[4919]: I0109 13:32:17.867668 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:17 crc kubenswrapper[4919]: I0109 13:32:17.867691 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:17Z","lastTransitionTime":"2026-01-09T13:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:32:17 crc kubenswrapper[4919]: I0109 13:32:17.971611 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:17 crc kubenswrapper[4919]: I0109 13:32:17.971724 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:17 crc kubenswrapper[4919]: I0109 13:32:17.971755 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:17 crc kubenswrapper[4919]: I0109 13:32:17.971783 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:17 crc kubenswrapper[4919]: I0109 13:32:17.971804 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:17Z","lastTransitionTime":"2026-01-09T13:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:18 crc kubenswrapper[4919]: I0109 13:32:18.075191 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:18 crc kubenswrapper[4919]: I0109 13:32:18.075298 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:18 crc kubenswrapper[4919]: I0109 13:32:18.075316 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:18 crc kubenswrapper[4919]: I0109 13:32:18.075429 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:18 crc kubenswrapper[4919]: I0109 13:32:18.075452 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:18Z","lastTransitionTime":"2026-01-09T13:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:18 crc kubenswrapper[4919]: I0109 13:32:18.178656 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:18 crc kubenswrapper[4919]: I0109 13:32:18.178743 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:18 crc kubenswrapper[4919]: I0109 13:32:18.178771 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:18 crc kubenswrapper[4919]: I0109 13:32:18.178806 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:18 crc kubenswrapper[4919]: I0109 13:32:18.178834 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:18Z","lastTransitionTime":"2026-01-09T13:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:32:18 crc kubenswrapper[4919]: I0109 13:32:18.282196 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:18 crc kubenswrapper[4919]: I0109 13:32:18.282311 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:18 crc kubenswrapper[4919]: I0109 13:32:18.282330 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:18 crc kubenswrapper[4919]: I0109 13:32:18.282357 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:18 crc kubenswrapper[4919]: I0109 13:32:18.282375 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:18Z","lastTransitionTime":"2026-01-09T13:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:18 crc kubenswrapper[4919]: I0109 13:32:18.385768 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:18 crc kubenswrapper[4919]: I0109 13:32:18.385836 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:18 crc kubenswrapper[4919]: I0109 13:32:18.385854 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:18 crc kubenswrapper[4919]: I0109 13:32:18.385881 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:18 crc kubenswrapper[4919]: I0109 13:32:18.385899 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:18Z","lastTransitionTime":"2026-01-09T13:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:18 crc kubenswrapper[4919]: I0109 13:32:18.489085 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:18 crc kubenswrapper[4919]: I0109 13:32:18.489164 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:18 crc kubenswrapper[4919]: I0109 13:32:18.489185 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:18 crc kubenswrapper[4919]: I0109 13:32:18.489253 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:18 crc kubenswrapper[4919]: I0109 13:32:18.489281 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:18Z","lastTransitionTime":"2026-01-09T13:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:32:18 crc kubenswrapper[4919]: I0109 13:32:18.593174 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:18 crc kubenswrapper[4919]: I0109 13:32:18.593262 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:18 crc kubenswrapper[4919]: I0109 13:32:18.593275 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:18 crc kubenswrapper[4919]: I0109 13:32:18.593296 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:18 crc kubenswrapper[4919]: I0109 13:32:18.593312 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:18Z","lastTransitionTime":"2026-01-09T13:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:18 crc kubenswrapper[4919]: I0109 13:32:18.609342 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kgw8v_11e19b4a-0888-460f-bf97-5dd0ddda6e8c/kube-multus/1.log" Jan 09 13:32:18 crc kubenswrapper[4919]: I0109 13:32:18.609972 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kgw8v_11e19b4a-0888-460f-bf97-5dd0ddda6e8c/kube-multus/0.log" Jan 09 13:32:18 crc kubenswrapper[4919]: I0109 13:32:18.610057 4919 generic.go:334] "Generic (PLEG): container finished" podID="11e19b4a-0888-460f-bf97-5dd0ddda6e8c" containerID="6dd4aa1459db1d095dd8a4d538ce3dc77e934eaaa815c7b700de8ee6ae8cc25a" exitCode=1 Jan 09 13:32:18 crc kubenswrapper[4919]: I0109 13:32:18.610118 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kgw8v" event={"ID":"11e19b4a-0888-460f-bf97-5dd0ddda6e8c","Type":"ContainerDied","Data":"6dd4aa1459db1d095dd8a4d538ce3dc77e934eaaa815c7b700de8ee6ae8cc25a"} Jan 09 13:32:18 crc kubenswrapper[4919]: I0109 13:32:18.610173 4919 scope.go:117] "RemoveContainer" containerID="3890ccf2af2b3f8c6648e7b36c716d7b9ef45d6821c5c7162c455641399184a6" Jan 09 13:32:18 crc kubenswrapper[4919]: I0109 13:32:18.610755 4919 scope.go:117] "RemoveContainer" containerID="6dd4aa1459db1d095dd8a4d538ce3dc77e934eaaa815c7b700de8ee6ae8cc25a" Jan 09 13:32:18 crc kubenswrapper[4919]: E0109 13:32:18.610923 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-kgw8v_openshift-multus(11e19b4a-0888-460f-bf97-5dd0ddda6e8c)\"" pod="openshift-multus/multus-kgw8v" podUID="11e19b4a-0888-460f-bf97-5dd0ddda6e8c" Jan 09 13:32:18 crc kubenswrapper[4919]: I0109 13:32:18.669901 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=37.669870494 podStartE2EDuration="37.669870494s" podCreationTimestamp="2026-01-09 13:31:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:32:18.638709919 +0000 UTC m=+118.186549409" watchObservedRunningTime="2026-01-09 13:32:18.669870494 +0000 UTC 
m=+118.217709944" Jan 09 13:32:18 crc kubenswrapper[4919]: I0109 13:32:18.688633 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=94.688605589 podStartE2EDuration="1m34.688605589s" podCreationTimestamp="2026-01-09 13:30:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:32:18.669797023 +0000 UTC m=+118.217636503" watchObservedRunningTime="2026-01-09 13:32:18.688605589 +0000 UTC m=+118.236445049" Jan 09 13:32:18 crc kubenswrapper[4919]: I0109 13:32:18.696468 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:18 crc kubenswrapper[4919]: I0109 13:32:18.696511 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:18 crc kubenswrapper[4919]: I0109 13:32:18.696523 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:18 crc kubenswrapper[4919]: I0109 13:32:18.696540 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:18 crc kubenswrapper[4919]: I0109 13:32:18.696551 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:18Z","lastTransitionTime":"2026-01-09T13:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:18 crc kubenswrapper[4919]: I0109 13:32:18.712487 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=65.712454217 podStartE2EDuration="1m5.712454217s" podCreationTimestamp="2026-01-09 13:31:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:32:18.689286905 +0000 UTC m=+118.237126355" watchObservedRunningTime="2026-01-09 13:32:18.712454217 +0000 UTC m=+118.260293697" Jan 09 13:32:18 crc kubenswrapper[4919]: I0109 13:32:18.736138 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podStartSLOduration=93.73610402 podStartE2EDuration="1m33.73610402s" podCreationTimestamp="2026-01-09 13:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:32:18.730354171 +0000 UTC m=+118.278193621" watchObservedRunningTime="2026-01-09 13:32:18.73610402 +0000 UTC m=+118.283943510" Jan 09 13:32:18 crc kubenswrapper[4919]: I0109 13:32:18.750958 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 13:32:18 crc kubenswrapper[4919]: E0109 13:32:18.751297 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 13:32:18 crc kubenswrapper[4919]: I0109 13:32:18.800645 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:18 crc kubenswrapper[4919]: I0109 13:32:18.800723 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:18 crc kubenswrapper[4919]: I0109 13:32:18.800737 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:18 crc kubenswrapper[4919]: I0109 13:32:18.800756 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:18 crc kubenswrapper[4919]: I0109 13:32:18.800768 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:18Z","lastTransitionTime":"2026-01-09T13:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:18 crc kubenswrapper[4919]: I0109 13:32:18.811042 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=29.811015196 podStartE2EDuration="29.811015196s" podCreationTimestamp="2026-01-09 13:31:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:32:18.808520136 +0000 UTC m=+118.356359586" watchObservedRunningTime="2026-01-09 13:32:18.811015196 +0000 UTC m=+118.358854686" Jan 09 13:32:18 crc kubenswrapper[4919]: I0109 13:32:18.833595 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=92.833551142 podStartE2EDuration="1m32.833551142s" podCreationTimestamp="2026-01-09 13:30:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:32:18.832296362 +0000 UTC m=+118.380135832" watchObservedRunningTime="2026-01-09 13:32:18.833551142 +0000 UTC m=+118.381390642" Jan 09 13:32:18 crc kubenswrapper[4919]: I0109 13:32:18.903479 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:18 crc kubenswrapper[4919]: I0109 13:32:18.903552 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:18 crc kubenswrapper[4919]: I0109 13:32:18.903569 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:18 crc kubenswrapper[4919]: I0109 13:32:18.903594 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:18 crc kubenswrapper[4919]: I0109 13:32:18.903607 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:18Z","lastTransitionTime":"2026-01-09T13:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:18 crc kubenswrapper[4919]: I0109 13:32:18.929723 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-9bzs4" podStartSLOduration=93.929687913 podStartE2EDuration="1m33.929687913s" podCreationTimestamp="2026-01-09 13:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:32:18.929475178 +0000 UTC m=+118.477314658" watchObservedRunningTime="2026-01-09 13:32:18.929687913 +0000 UTC m=+118.477527403" Jan 09 13:32:18 crc kubenswrapper[4919]: I0109 13:32:18.949055 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9s49l" podStartSLOduration=93.949021912 podStartE2EDuration="1m33.949021912s" podCreationTimestamp="2026-01-09 13:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:32:18.948637502 +0000 UTC m=+118.496477012" watchObservedRunningTime="2026-01-09 13:32:18.949021912 +0000 UTC m=+118.496861372" Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.007380 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.007458 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.007480 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.007518 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.007542 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:19Z","lastTransitionTime":"2026-01-09T13:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.025017 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-9z7cc" podStartSLOduration=94.024982253 podStartE2EDuration="1m34.024982253s" podCreationTimestamp="2026-01-09 13:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:32:19.023683882 +0000 UTC m=+118.571523372" watchObservedRunningTime="2026-01-09 13:32:19.024982253 +0000 UTC m=+118.572821743" Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.110291 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.110347 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.110360 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.110388 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.110404 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:19Z","lastTransitionTime":"2026-01-09T13:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.213554 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.213612 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.213632 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.213656 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.213676 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:19Z","lastTransitionTime":"2026-01-09T13:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.317429 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.317506 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.317528 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.317557 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.317582 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:19Z","lastTransitionTime":"2026-01-09T13:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.421188 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.421337 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.421358 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.421389 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.421410 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:19Z","lastTransitionTime":"2026-01-09T13:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.524966 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.525038 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.525060 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.525093 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.525113 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:19Z","lastTransitionTime":"2026-01-09T13:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.617011 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kgw8v_11e19b4a-0888-460f-bf97-5dd0ddda6e8c/kube-multus/1.log" Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.627844 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.627936 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.627956 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.627988 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.628008 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:19Z","lastTransitionTime":"2026-01-09T13:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.732081 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.732171 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.732197 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.732268 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.732296 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:19Z","lastTransitionTime":"2026-01-09T13:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.750806 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.750871 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.750937 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xkhdz" Jan 09 13:32:19 crc kubenswrapper[4919]: E0109 13:32:19.751056 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 13:32:19 crc kubenswrapper[4919]: E0109 13:32:19.751449 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 13:32:19 crc kubenswrapper[4919]: E0109 13:32:19.751605 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xkhdz" podUID="7a2e9878-6b0e-4328-a3ca-9f828fb105c9" Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.752311 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.752370 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.752391 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.752418 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.752437 4919 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T13:32:19Z","lastTransitionTime":"2026-01-09T13:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.825480 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-97zdz" podStartSLOduration=94.825448479 podStartE2EDuration="1m34.825448479s" podCreationTimestamp="2026-01-09 13:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:32:19.045775087 +0000 UTC m=+118.593614567" watchObservedRunningTime="2026-01-09 13:32:19.825448479 +0000 UTC m=+119.373287959" Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.825790 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-n29xt"] Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.826522 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n29xt" Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.830147 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.830344 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.830364 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.833928 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.860005 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a6133cc-3484-4475-a679-e486e0ed2621-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-n29xt\" (UID: \"0a6133cc-3484-4475-a679-e486e0ed2621\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n29xt" Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.860097 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0a6133cc-3484-4475-a679-e486e0ed2621-service-ca\") pod \"cluster-version-operator-5c965bbfc6-n29xt\" (UID: \"0a6133cc-3484-4475-a679-e486e0ed2621\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n29xt" Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.860175 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/0a6133cc-3484-4475-a679-e486e0ed2621-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-n29xt\" (UID: \"0a6133cc-3484-4475-a679-e486e0ed2621\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n29xt" Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.860515 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/0a6133cc-3484-4475-a679-e486e0ed2621-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-n29xt\" (UID: \"0a6133cc-3484-4475-a679-e486e0ed2621\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n29xt" Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.860576 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0a6133cc-3484-4475-a679-e486e0ed2621-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-n29xt\" (UID: \"0a6133cc-3484-4475-a679-e486e0ed2621\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n29xt" Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.961201 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a6133cc-3484-4475-a679-e486e0ed2621-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-n29xt\" (UID: \"0a6133cc-3484-4475-a679-e486e0ed2621\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n29xt" Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.961337 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0a6133cc-3484-4475-a679-e486e0ed2621-service-ca\") pod \"cluster-version-operator-5c965bbfc6-n29xt\" (UID: \"0a6133cc-3484-4475-a679-e486e0ed2621\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n29xt" Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.961402 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/0a6133cc-3484-4475-a679-e486e0ed2621-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-n29xt\" (UID: \"0a6133cc-3484-4475-a679-e486e0ed2621\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n29xt" Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.961569 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/0a6133cc-3484-4475-a679-e486e0ed2621-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-n29xt\" (UID: \"0a6133cc-3484-4475-a679-e486e0ed2621\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n29xt" Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.961617 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0a6133cc-3484-4475-a679-e486e0ed2621-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-n29xt\" (UID: \"0a6133cc-3484-4475-a679-e486e0ed2621\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n29xt" Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.961680 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/0a6133cc-3484-4475-a679-e486e0ed2621-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-n29xt\" (UID: \"0a6133cc-3484-4475-a679-e486e0ed2621\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n29xt" Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.961785 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/0a6133cc-3484-4475-a679-e486e0ed2621-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-n29xt\" (UID: \"0a6133cc-3484-4475-a679-e486e0ed2621\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n29xt" Jan 09 
13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.963133 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0a6133cc-3484-4475-a679-e486e0ed2621-service-ca\") pod \"cluster-version-operator-5c965bbfc6-n29xt\" (UID: \"0a6133cc-3484-4475-a679-e486e0ed2621\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n29xt" Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.971094 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a6133cc-3484-4475-a679-e486e0ed2621-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-n29xt\" (UID: \"0a6133cc-3484-4475-a679-e486e0ed2621\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n29xt" Jan 09 13:32:19 crc kubenswrapper[4919]: I0109 13:32:19.995012 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0a6133cc-3484-4475-a679-e486e0ed2621-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-n29xt\" (UID: \"0a6133cc-3484-4475-a679-e486e0ed2621\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n29xt" Jan 09 13:32:20 crc kubenswrapper[4919]: I0109 13:32:20.150521 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n29xt" Jan 09 13:32:20 crc kubenswrapper[4919]: W0109 13:32:20.176315 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a6133cc_3484_4475_a679_e486e0ed2621.slice/crio-d21583c6876240683197b02dcb7e2d4cb29d0bda450be7cd2bcdf36835ade059 WatchSource:0}: Error finding container d21583c6876240683197b02dcb7e2d4cb29d0bda450be7cd2bcdf36835ade059: Status 404 returned error can't find the container with id d21583c6876240683197b02dcb7e2d4cb29d0bda450be7cd2bcdf36835ade059 Jan 09 13:32:20 crc kubenswrapper[4919]: I0109 13:32:20.624433 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n29xt" event={"ID":"0a6133cc-3484-4475-a679-e486e0ed2621","Type":"ContainerStarted","Data":"f8bfee88a9f6d486f59c78559cbc08b3761b84c243b04356ee0c29302dc07fbb"} Jan 09 13:32:20 crc kubenswrapper[4919]: I0109 13:32:20.626540 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n29xt" event={"ID":"0a6133cc-3484-4475-a679-e486e0ed2621","Type":"ContainerStarted","Data":"d21583c6876240683197b02dcb7e2d4cb29d0bda450be7cd2bcdf36835ade059"} Jan 09 13:32:20 crc kubenswrapper[4919]: I0109 13:32:20.649869 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n29xt" podStartSLOduration=95.649827543 podStartE2EDuration="1m35.649827543s" podCreationTimestamp="2026-01-09 13:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:32:20.648445039 +0000 UTC m=+120.196284529" watchObservedRunningTime="2026-01-09 13:32:20.649827543 +0000 UTC m=+120.197667033" Jan 09 13:32:20 crc kubenswrapper[4919]: I0109 13:32:20.751558 4919 util.go:30] "No sandbox for pod can be found. 
Jan 09 13:32:20 crc kubenswrapper[4919]: E0109 13:32:20.753512 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 09 13:32:20 crc kubenswrapper[4919]: E0109 13:32:20.769360 4919 kubelet_node_status.go:497] "Node not becoming ready in time after startup"
Jan 09 13:32:20 crc kubenswrapper[4919]: E0109 13:32:20.862235 4919 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 09 13:32:21 crc kubenswrapper[4919]: I0109 13:32:21.751770 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 09 13:32:21 crc kubenswrapper[4919]: I0109 13:32:21.751843 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 09 13:32:21 crc kubenswrapper[4919]: I0109 13:32:21.752338 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xkhdz"
Jan 09 13:32:21 crc kubenswrapper[4919]: E0109 13:32:21.752528 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 09 13:32:21 crc kubenswrapper[4919]: E0109 13:32:21.752747 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xkhdz" podUID="7a2e9878-6b0e-4328-a3ca-9f828fb105c9"
Jan 09 13:32:21 crc kubenswrapper[4919]: I0109 13:32:21.752822 4919 scope.go:117] "RemoveContainer" containerID="af9d1f7638ecbd19ef127f9dedcf9c618013f2e6cbd661173a0eead07c7023a9"
Jan 09 13:32:21 crc kubenswrapper[4919]: E0109 13:32:21.752941 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 09 13:32:21 crc kubenswrapper[4919]: E0109 13:32:21.753103 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-w74hl_openshift-ovn-kubernetes(4a11a9b6-2419-4f04-b35e-ba296d70b705)\"" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" podUID="4a11a9b6-2419-4f04-b35e-ba296d70b705"
Jan 09 13:32:22 crc kubenswrapper[4919]: I0109 13:32:22.751423 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 09 13:32:22 crc kubenswrapper[4919]: E0109 13:32:22.751647 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 09 13:32:23 crc kubenswrapper[4919]: I0109 13:32:23.750965 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 09 13:32:23 crc kubenswrapper[4919]: I0109 13:32:23.751076 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 09 13:32:23 crc kubenswrapper[4919]: I0109 13:32:23.751135 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xkhdz"
Jan 09 13:32:23 crc kubenswrapper[4919]: E0109 13:32:23.751193 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 09 13:32:23 crc kubenswrapper[4919]: E0109 13:32:23.751317 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xkhdz" podUID="7a2e9878-6b0e-4328-a3ca-9f828fb105c9"
Jan 09 13:32:23 crc kubenswrapper[4919]: E0109 13:32:23.751419 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 09 13:32:24 crc kubenswrapper[4919]: I0109 13:32:24.751319 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 09 13:32:24 crc kubenswrapper[4919]: E0109 13:32:24.751525 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 09 13:32:25 crc kubenswrapper[4919]: I0109 13:32:25.751455 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 09 13:32:25 crc kubenswrapper[4919]: I0109 13:32:25.751511 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 09 13:32:25 crc kubenswrapper[4919]: I0109 13:32:25.751455 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xkhdz"
Jan 09 13:32:25 crc kubenswrapper[4919]: E0109 13:32:25.751725 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 09 13:32:25 crc kubenswrapper[4919]: E0109 13:32:25.751867 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xkhdz" podUID="7a2e9878-6b0e-4328-a3ca-9f828fb105c9"
Jan 09 13:32:25 crc kubenswrapper[4919]: E0109 13:32:25.752274 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 09 13:32:25 crc kubenswrapper[4919]: E0109 13:32:25.864287 4919 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 09 13:32:26 crc kubenswrapper[4919]: I0109 13:32:26.751332 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 09 13:32:26 crc kubenswrapper[4919]: E0109 13:32:26.751549 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
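Two distinct failures interleave through this stretch. The four pods that still have no sandbox (network-check-target, network-check-source, networking-console-plugin, network-metrics-daemon) are requeued roughly every two seconds and fail with NetworkPluginNotReady; they will keep failing until a network plugin writes a CNI config into /etc/kubernetes/cni/net.d/. Meanwhile ovnkube-controller, the component expected to provide that config, is itself crash-looping: the kubelet restarts failed containers with an exponential back-off (by default 10s, doubling per failed restart, capped at 5m), so "back-off 40s" marks roughly the third consecutive failure. A small Go sketch that tallies the NetworkPluginNotReady requeues per pod from a saved journal excerpt (the kubelet.log filename is a placeholder, not taken from the log):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        f, err := os.Open("kubelet.log")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        // Grab the pod="..." field of every NetworkPluginNotReady sync error.
        re := regexp.MustCompile(`NetworkPluginNotReady.*?pod="([^"]+)"`)
        counts := map[string]int{}
        sc := bufio.NewScanner(f)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
        for sc.Scan() {
            if m := re.FindStringSubmatch(sc.Text()); m != nil {
                counts[m[1]]++
            }
        }
        for pod, n := range counts {
            fmt.Printf("%6d  %s\n", n, pod)
        }
    }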
Jan 09 13:32:27 crc kubenswrapper[4919]: I0109 13:32:27.751618 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 09 13:32:27 crc kubenswrapper[4919]: I0109 13:32:27.751689 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 09 13:32:27 crc kubenswrapper[4919]: I0109 13:32:27.751691 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xkhdz"
Jan 09 13:32:27 crc kubenswrapper[4919]: E0109 13:32:27.751806 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 09 13:32:27 crc kubenswrapper[4919]: E0109 13:32:27.751972 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xkhdz" podUID="7a2e9878-6b0e-4328-a3ca-9f828fb105c9"
Jan 09 13:32:27 crc kubenswrapper[4919]: E0109 13:32:27.752115 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 09 13:32:28 crc kubenswrapper[4919]: I0109 13:32:28.750882 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 09 13:32:28 crc kubenswrapper[4919]: E0109 13:32:28.751104 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 09 13:32:29 crc kubenswrapper[4919]: I0109 13:32:29.750657 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xkhdz"
Jan 09 13:32:29 crc kubenswrapper[4919]: I0109 13:32:29.750712 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 09 13:32:29 crc kubenswrapper[4919]: I0109 13:32:29.750816 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 13:32:29 crc kubenswrapper[4919]: E0109 13:32:29.750860 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xkhdz" podUID="7a2e9878-6b0e-4328-a3ca-9f828fb105c9" Jan 09 13:32:29 crc kubenswrapper[4919]: E0109 13:32:29.751009 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 13:32:29 crc kubenswrapper[4919]: E0109 13:32:29.751304 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 13:32:29 crc kubenswrapper[4919]: I0109 13:32:29.751759 4919 scope.go:117] "RemoveContainer" containerID="6dd4aa1459db1d095dd8a4d538ce3dc77e934eaaa815c7b700de8ee6ae8cc25a" Jan 09 13:32:30 crc kubenswrapper[4919]: I0109 13:32:30.668313 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kgw8v_11e19b4a-0888-460f-bf97-5dd0ddda6e8c/kube-multus/1.log" Jan 09 13:32:30 crc kubenswrapper[4919]: I0109 13:32:30.668732 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kgw8v" event={"ID":"11e19b4a-0888-460f-bf97-5dd0ddda6e8c","Type":"ContainerStarted","Data":"d5dedf26e5ff4665f09eceaa03a030632058e239d6a30d55b68dc35f2529731a"} Jan 09 13:32:30 crc kubenswrapper[4919]: I0109 13:32:30.696689 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-kgw8v" podStartSLOduration=105.696656332 podStartE2EDuration="1m45.696656332s" podCreationTimestamp="2026-01-09 13:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:32:30.696472778 +0000 UTC m=+130.244312268" watchObservedRunningTime="2026-01-09 13:32:30.696656332 +0000 UTC m=+130.244495822" Jan 09 13:32:30 crc kubenswrapper[4919]: I0109 13:32:30.753628 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 13:32:30 crc kubenswrapper[4919]: E0109 13:32:30.753908 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 13:32:30 crc kubenswrapper[4919]: E0109 13:32:30.865443 4919 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 09 13:32:31 crc kubenswrapper[4919]: I0109 13:32:31.751504 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xkhdz" Jan 09 13:32:31 crc kubenswrapper[4919]: E0109 13:32:31.751617 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xkhdz" podUID="7a2e9878-6b0e-4328-a3ca-9f828fb105c9" Jan 09 13:32:31 crc kubenswrapper[4919]: I0109 13:32:31.751669 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 13:32:31 crc kubenswrapper[4919]: E0109 13:32:31.751710 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 13:32:31 crc kubenswrapper[4919]: I0109 13:32:31.751742 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 13:32:31 crc kubenswrapper[4919]: E0109 13:32:31.751778 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 13:32:32 crc kubenswrapper[4919]: I0109 13:32:32.751325 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 13:32:32 crc kubenswrapper[4919]: E0109 13:32:32.752039 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 13:32:32 crc kubenswrapper[4919]: I0109 13:32:32.752461 4919 scope.go:117] "RemoveContainer" containerID="af9d1f7638ecbd19ef127f9dedcf9c618013f2e6cbd661173a0eead07c7023a9" Jan 09 13:32:33 crc kubenswrapper[4919]: I0109 13:32:33.653638 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-xkhdz"] Jan 09 13:32:33 crc kubenswrapper[4919]: I0109 13:32:33.653757 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xkhdz" Jan 09 13:32:33 crc kubenswrapper[4919]: E0109 13:32:33.653870 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xkhdz" podUID="7a2e9878-6b0e-4328-a3ca-9f828fb105c9" Jan 09 13:32:33 crc kubenswrapper[4919]: I0109 13:32:33.681689 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w74hl_4a11a9b6-2419-4f04-b35e-ba296d70b705/ovnkube-controller/3.log" Jan 09 13:32:33 crc kubenswrapper[4919]: I0109 13:32:33.684273 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" event={"ID":"4a11a9b6-2419-4f04-b35e-ba296d70b705","Type":"ContainerStarted","Data":"112a16191d57c6ee6be6fca0acf118455b5cb7e5e70ae064c891221cc18e537b"} Jan 09 13:32:33 crc kubenswrapper[4919]: I0109 13:32:33.684750 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:32:33 crc kubenswrapper[4919]: I0109 13:32:33.707272 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" podStartSLOduration=108.707250287 podStartE2EDuration="1m48.707250287s" podCreationTimestamp="2026-01-09 13:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:32:33.707115694 +0000 UTC m=+133.254955154" watchObservedRunningTime="2026-01-09 13:32:33.707250287 +0000 UTC m=+133.255089737" Jan 09 13:32:33 crc kubenswrapper[4919]: I0109 13:32:33.751283 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 13:32:33 crc kubenswrapper[4919]: I0109 13:32:33.751319 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 13:32:33 crc kubenswrapper[4919]: E0109 13:32:33.751422 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 13:32:33 crc kubenswrapper[4919]: E0109 13:32:33.751519 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 13:32:34 crc kubenswrapper[4919]: I0109 13:32:34.781868 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 13:32:34 crc kubenswrapper[4919]: E0109 13:32:34.782001 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 13:32:35 crc kubenswrapper[4919]: I0109 13:32:35.751343 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 13:32:35 crc kubenswrapper[4919]: E0109 13:32:35.751840 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 13:32:35 crc kubenswrapper[4919]: I0109 13:32:35.751442 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xkhdz" Jan 09 13:32:35 crc kubenswrapper[4919]: I0109 13:32:35.751393 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 13:32:35 crc kubenswrapper[4919]: E0109 13:32:35.752014 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xkhdz" podUID="7a2e9878-6b0e-4328-a3ca-9f828fb105c9" Jan 09 13:32:35 crc kubenswrapper[4919]: E0109 13:32:35.752108 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 13:32:36 crc kubenswrapper[4919]: I0109 13:32:36.751316 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 09 13:32:36 crc kubenswrapper[4919]: I0109 13:32:36.755004 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 09 13:32:36 crc kubenswrapper[4919]: I0109 13:32:36.755255 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 09 13:32:37 crc kubenswrapper[4919]: I0109 13:32:37.751402 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xkhdz"
Jan 09 13:32:37 crc kubenswrapper[4919]: I0109 13:32:37.751440 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 09 13:32:37 crc kubenswrapper[4919]: I0109 13:32:37.751448 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 09 13:32:37 crc kubenswrapper[4919]: I0109 13:32:37.755655 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 09 13:32:37 crc kubenswrapper[4919]: I0109 13:32:37.755687 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 09 13:32:37 crc kubenswrapper[4919]: I0109 13:32:37.755732 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 09 13:32:37 crc kubenswrapper[4919]: I0109 13:32:37.755751 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.469534 4919 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.516056 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-7lrzs"]
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.516783 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ph5g6"]
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.517095 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-vb6hf"]
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.517791 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vb6hf"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.520757 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-7lrzs"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.521247 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-ph5g6"
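Once ovnkube-controller stays up (ContainerStarted and a passing readiness probe at 13:32:33 above), the node reports NodeReady at 13:32:40 and the backlog unblocks: each newly scheduled control-plane pod surfaces as a "SyncLoop ADD", and the burst of reflector.go:368 "Caches populated" lines that follows is the kubelet starting a list-watch for each Secret and ConfigMap those pods reference. The reflector is client-go's list-watch primitive; the sketch below stands up the same machinery outside the kubelet, watching ConfigMaps in one of the namespaces seen above (it assumes a reachable cluster via the default kubeconfig and is an illustration, not kubelet code):

    package main

    import (
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Shared informer factory scoped to one namespace, resyncing every 10m.
        factory := informers.NewSharedInformerFactoryWithOptions(cs, 10*time.Minute,
            informers.WithNamespace("openshift-network-diagnostics"))
        inf := factory.Core().V1().ConfigMaps().Informer()
        inf.AddEventHandler(cache.ResourceEventHandlerFuncs{
            AddFunc: func(obj interface{}) {
                cm := obj.(*corev1.ConfigMap)
                fmt.Printf("cache populated for %s/%s\n", cm.Namespace, cm.Name)
            },
        })

        stop := make(chan struct{})
        factory.Start(stop)
        factory.WaitForCacheSync(stop) // analogous to the "Caches populated" lines
        select {}
    }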
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.525675 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-4dsc8"]
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.526170 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-r8h48"]
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.526414 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dsc8"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.526865 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-r8h48"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.536887 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.537164 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.542463 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.545925 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.588532 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.589275 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.589442 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.589636 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.589759 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.590057 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-s2tz5"]
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.590289 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.590546 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.590570 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-s2tz5" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.590712 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.590849 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.591035 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.591282 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.597594 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l22qw"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.598131 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l22qw" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.598652 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.598841 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.598994 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.599172 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.599499 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.599641 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.599782 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.599947 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.600084 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.600237 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.600685 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.600872 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.601013 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 09 
13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.601155 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.601300 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.601435 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.615698 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.615994 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.616163 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.619420 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-sd2mk"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.620332 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sd2mk" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.620703 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-sjvr2"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.621591 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-sjvr2" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.624490 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bhv5s"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.625120 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w2b9q"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.625337 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bhv5s" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.625595 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w2b9q" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.632112 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.632517 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.635118 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.635204 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ttgps"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.635464 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.635619 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.635697 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.635961 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-g2956"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.636063 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.649310 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.649672 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.649689 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.650027 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.650933 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nmjx4"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.652456 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2tpnt"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.651471 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.653531 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e5975f85-ddfb-4c96-bdc8-da5b3541a769-audit\") pod \"apiserver-76f77b778f-r8h48\" (UID: \"e5975f85-ddfb-4c96-bdc8-da5b3541a769\") " pod="openshift-apiserver/apiserver-76f77b778f-r8h48" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.653568 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/856b0cf5-5731-4842-be1e-b25bb6426674-available-featuregates\") pod \"openshift-config-operator-7777fb866f-vb6hf\" (UID: \"856b0cf5-5731-4842-be1e-b25bb6426674\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vb6hf" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.653103 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nmjx4" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.651949 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g2956" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.651167 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.654058 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/48075d37-56ec-4015-a38a-94068ad47148-client-ca\") pod \"controller-manager-879f6c89f-ph5g6\" (UID: \"48075d37-56ec-4015-a38a-94068ad47148\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ph5g6" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.661057 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e5975f85-ddfb-4c96-bdc8-da5b3541a769-audit-dir\") pod \"apiserver-76f77b778f-r8h48\" (UID: \"e5975f85-ddfb-4c96-bdc8-da5b3541a769\") " pod="openshift-apiserver/apiserver-76f77b778f-r8h48" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.661184 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5975f85-ddfb-4c96-bdc8-da5b3541a769-trusted-ca-bundle\") pod \"apiserver-76f77b778f-r8h48\" (UID: \"e5975f85-ddfb-4c96-bdc8-da5b3541a769\") " pod="openshift-apiserver/apiserver-76f77b778f-r8h48" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.661391 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/73189faa-e786-4c46-b23e-c9e58d6b0490-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-7lrzs\" (UID: \"73189faa-e786-4c46-b23e-c9e58d6b0490\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7lrzs" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.661598 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/4ff1b886-d642-44c3-ba90-b1b4cb1379dd-etcd-client\") pod \"apiserver-7bbb656c7d-4dsc8\" (UID: \"4ff1b886-d642-44c3-ba90-b1b4cb1379dd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dsc8" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.661726 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e5975f85-ddfb-4c96-bdc8-da5b3541a769-image-import-ca\") pod \"apiserver-76f77b778f-r8h48\" (UID: \"e5975f85-ddfb-4c96-bdc8-da5b3541a769\") " pod="openshift-apiserver/apiserver-76f77b778f-r8h48" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.661955 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4ff1b886-d642-44c3-ba90-b1b4cb1379dd-audit-dir\") pod \"apiserver-7bbb656c7d-4dsc8\" (UID: \"4ff1b886-d642-44c3-ba90-b1b4cb1379dd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dsc8" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.661994 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5975f85-ddfb-4c96-bdc8-da5b3541a769-config\") pod \"apiserver-76f77b778f-r8h48\" (UID: \"e5975f85-ddfb-4c96-bdc8-da5b3541a769\") " pod="openshift-apiserver/apiserver-76f77b778f-r8h48" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.662016 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48075d37-56ec-4015-a38a-94068ad47148-config\") pod \"controller-manager-879f6c89f-ph5g6\" (UID: \"48075d37-56ec-4015-a38a-94068ad47148\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ph5g6" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.662034 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48075d37-56ec-4015-a38a-94068ad47148-serving-cert\") pod \"controller-manager-879f6c89f-ph5g6\" (UID: \"48075d37-56ec-4015-a38a-94068ad47148\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ph5g6" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.662052 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65sq5\" (UniqueName: \"kubernetes.io/projected/856b0cf5-5731-4842-be1e-b25bb6426674-kube-api-access-65sq5\") pod \"openshift-config-operator-7777fb866f-vb6hf\" (UID: \"856b0cf5-5731-4842-be1e-b25bb6426674\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vb6hf" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.662071 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ff1b886-d642-44c3-ba90-b1b4cb1379dd-serving-cert\") pod \"apiserver-7bbb656c7d-4dsc8\" (UID: \"4ff1b886-d642-44c3-ba90-b1b4cb1379dd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dsc8" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.651166 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.654152 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-h5zhd"] Jan 09 
13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.662456 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nskgv\" (UniqueName: \"kubernetes.io/projected/4ff1b886-d642-44c3-ba90-b1b4cb1379dd-kube-api-access-nskgv\") pod \"apiserver-7bbb656c7d-4dsc8\" (UID: \"4ff1b886-d642-44c3-ba90-b1b4cb1379dd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dsc8" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.662487 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e5975f85-ddfb-4c96-bdc8-da5b3541a769-etcd-client\") pod \"apiserver-76f77b778f-r8h48\" (UID: \"e5975f85-ddfb-4c96-bdc8-da5b3541a769\") " pod="openshift-apiserver/apiserver-76f77b778f-r8h48" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.662513 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4ff1b886-d642-44c3-ba90-b1b4cb1379dd-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-4dsc8\" (UID: \"4ff1b886-d642-44c3-ba90-b1b4cb1379dd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dsc8" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.662541 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e5975f85-ddfb-4c96-bdc8-da5b3541a769-node-pullsecrets\") pod \"apiserver-76f77b778f-r8h48\" (UID: \"e5975f85-ddfb-4c96-bdc8-da5b3541a769\") " pod="openshift-apiserver/apiserver-76f77b778f-r8h48" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.662557 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e5975f85-ddfb-4c96-bdc8-da5b3541a769-etcd-serving-ca\") pod \"apiserver-76f77b778f-r8h48\" (UID: \"e5975f85-ddfb-4c96-bdc8-da5b3541a769\") " pod="openshift-apiserver/apiserver-76f77b778f-r8h48" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.654227 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2tpnt" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.662751 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5975f85-ddfb-4c96-bdc8-da5b3541a769-serving-cert\") pod \"apiserver-76f77b778f-r8h48\" (UID: \"e5975f85-ddfb-4c96-bdc8-da5b3541a769\") " pod="openshift-apiserver/apiserver-76f77b778f-r8h48" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.651255 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.662946 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4ff1b886-d642-44c3-ba90-b1b4cb1379dd-encryption-config\") pod \"apiserver-7bbb656c7d-4dsc8\" (UID: \"4ff1b886-d642-44c3-ba90-b1b4cb1379dd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dsc8" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.662998 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73189faa-e786-4c46-b23e-c9e58d6b0490-config\") pod \"machine-api-operator-5694c8668f-7lrzs\" (UID: \"73189faa-e786-4c46-b23e-c9e58d6b0490\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7lrzs" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.663016 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjnc4\" (UniqueName: \"kubernetes.io/projected/e5975f85-ddfb-4c96-bdc8-da5b3541a769-kube-api-access-xjnc4\") pod \"apiserver-76f77b778f-r8h48\" (UID: \"e5975f85-ddfb-4c96-bdc8-da5b3541a769\") " pod="openshift-apiserver/apiserver-76f77b778f-r8h48" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.663032 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e5975f85-ddfb-4c96-bdc8-da5b3541a769-encryption-config\") pod \"apiserver-76f77b778f-r8h48\" (UID: \"e5975f85-ddfb-4c96-bdc8-da5b3541a769\") " pod="openshift-apiserver/apiserver-76f77b778f-r8h48" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.663046 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4ff1b886-d642-44c3-ba90-b1b4cb1379dd-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-4dsc8\" (UID: \"4ff1b886-d642-44c3-ba90-b1b4cb1379dd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dsc8" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.663064 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx8x6\" (UniqueName: \"kubernetes.io/projected/48075d37-56ec-4015-a38a-94068ad47148-kube-api-access-mx8x6\") pod \"controller-manager-879f6c89f-ph5g6\" (UID: \"48075d37-56ec-4015-a38a-94068ad47148\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ph5g6" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.663079 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/73189faa-e786-4c46-b23e-c9e58d6b0490-images\") pod \"machine-api-operator-5694c8668f-7lrzs\" (UID: \"73189faa-e786-4c46-b23e-c9e58d6b0490\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7lrzs" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.663094 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/856b0cf5-5731-4842-be1e-b25bb6426674-serving-cert\") pod \"openshift-config-operator-7777fb866f-vb6hf\" (UID: \"856b0cf5-5731-4842-be1e-b25bb6426674\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vb6hf" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.663110 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4ff1b886-d642-44c3-ba90-b1b4cb1379dd-audit-policies\") pod \"apiserver-7bbb656c7d-4dsc8\" (UID: \"4ff1b886-d642-44c3-ba90-b1b4cb1379dd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dsc8" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.651263 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.651305 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.651398 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.653336 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.653380 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.653654 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.663525 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/48075d37-56ec-4015-a38a-94068ad47148-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-ph5g6\" (UID: \"48075d37-56ec-4015-a38a-94068ad47148\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ph5g6" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.654024 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.663551 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dppzc\" (UniqueName: \"kubernetes.io/projected/73189faa-e786-4c46-b23e-c9e58d6b0490-kube-api-access-dppzc\") pod \"machine-api-operator-5694c8668f-7lrzs\" (UID: \"73189faa-e786-4c46-b23e-c9e58d6b0490\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7lrzs" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.654066 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 09 13:32:40 crc 
kubenswrapper[4919]: I0109 13:32:40.654089 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.654276 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.654354 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.654416 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.654514 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.654546 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.657105 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.658376 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.658558 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.659517 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.659580 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.660286 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.664836 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-hqds7"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.664993 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-h5zhd" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.668131 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-46262"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.668315 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hqds7" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.672362 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.672816 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-t7d9m"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.673004 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-46262" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.673640 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.673705 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.674049 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-ttlpm"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.674443 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-ttlpm" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.674685 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-t7d9m" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.675430 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.675633 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.675759 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.675948 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.677605 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.678692 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.678776 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.679248 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.681889 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-66425"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.697909 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2ztc2"] Jan 09 13:32:40 crc 
kubenswrapper[4919]: I0109 13:32:40.699602 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.699720 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.699906 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.700065 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.700332 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.700460 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.700597 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.700606 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.700774 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.700795 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.700931 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.701159 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.701302 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.701365 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.701464 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.701557 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.701643 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.701707 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-66425" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.702706 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-2ztc2" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.702955 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.703357 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.703387 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.703503 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.703570 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.706100 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.712938 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6s6h2"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.713945 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6s6h2" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.714297 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k4mjm"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.715428 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k4mjm" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.715456 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.729242 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.730159 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.732107 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-jx754"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.732798 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dpvdp"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.732995 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.733187 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dpvdp" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.735765 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-jx754" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.739457 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dwxcs"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.740521 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dwxcs" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.740867 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mv7fj"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.741445 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mv7fj" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.741746 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29466090-q6cvw"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.742134 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29466090-q6cvw" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.744770 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.746403 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-cb8tr"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.747760 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-cb8tr" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.748170 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9gc57"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.748835 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-5ntgp"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.749293 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9gc57" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.749337 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5ntgp" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.753422 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-twpss"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.754039 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-twpss" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.764723 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e5975f85-ddfb-4c96-bdc8-da5b3541a769-node-pullsecrets\") pod \"apiserver-76f77b778f-r8h48\" (UID: \"e5975f85-ddfb-4c96-bdc8-da5b3541a769\") " pod="openshift-apiserver/apiserver-76f77b778f-r8h48" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.764755 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4ff1b886-d642-44c3-ba90-b1b4cb1379dd-encryption-config\") pod \"apiserver-7bbb656c7d-4dsc8\" (UID: \"4ff1b886-d642-44c3-ba90-b1b4cb1379dd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dsc8" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.764775 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e5975f85-ddfb-4c96-bdc8-da5b3541a769-etcd-serving-ca\") pod \"apiserver-76f77b778f-r8h48\" (UID: \"e5975f85-ddfb-4c96-bdc8-da5b3541a769\") " pod="openshift-apiserver/apiserver-76f77b778f-r8h48" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.764790 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5975f85-ddfb-4c96-bdc8-da5b3541a769-serving-cert\") pod \"apiserver-76f77b778f-r8h48\" (UID: \"e5975f85-ddfb-4c96-bdc8-da5b3541a769\") " pod="openshift-apiserver/apiserver-76f77b778f-r8h48" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.764808 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73189faa-e786-4c46-b23e-c9e58d6b0490-config\") pod \"machine-api-operator-5694c8668f-7lrzs\" (UID: \"73189faa-e786-4c46-b23e-c9e58d6b0490\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7lrzs" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.764845 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e5975f85-ddfb-4c96-bdc8-da5b3541a769-encryption-config\") pod \"apiserver-76f77b778f-r8h48\" (UID: \"e5975f85-ddfb-4c96-bdc8-da5b3541a769\") " pod="openshift-apiserver/apiserver-76f77b778f-r8h48" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.764861 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjnc4\" (UniqueName: \"kubernetes.io/projected/e5975f85-ddfb-4c96-bdc8-da5b3541a769-kube-api-access-xjnc4\") pod \"apiserver-76f77b778f-r8h48\" (UID: \"e5975f85-ddfb-4c96-bdc8-da5b3541a769\") " pod="openshift-apiserver/apiserver-76f77b778f-r8h48" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.764879 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4ff1b886-d642-44c3-ba90-b1b4cb1379dd-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-4dsc8\" (UID: \"4ff1b886-d642-44c3-ba90-b1b4cb1379dd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dsc8" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.764895 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/856b0cf5-5731-4842-be1e-b25bb6426674-serving-cert\") pod \"openshift-config-operator-7777fb866f-vb6hf\" (UID: \"856b0cf5-5731-4842-be1e-b25bb6426674\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vb6hf" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.764912 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mx8x6\" (UniqueName: \"kubernetes.io/projected/48075d37-56ec-4015-a38a-94068ad47148-kube-api-access-mx8x6\") pod \"controller-manager-879f6c89f-ph5g6\" (UID: \"48075d37-56ec-4015-a38a-94068ad47148\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ph5g6" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.764927 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/73189faa-e786-4c46-b23e-c9e58d6b0490-images\") pod \"machine-api-operator-5694c8668f-7lrzs\" (UID: \"73189faa-e786-4c46-b23e-c9e58d6b0490\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7lrzs" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.764949 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/48075d37-56ec-4015-a38a-94068ad47148-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-ph5g6\" (UID: \"48075d37-56ec-4015-a38a-94068ad47148\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ph5g6" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.764963 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4ff1b886-d642-44c3-ba90-b1b4cb1379dd-audit-policies\") pod \"apiserver-7bbb656c7d-4dsc8\" (UID: \"4ff1b886-d642-44c3-ba90-b1b4cb1379dd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dsc8" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.764979 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dppzc\" (UniqueName: \"kubernetes.io/projected/73189faa-e786-4c46-b23e-c9e58d6b0490-kube-api-access-dppzc\") pod \"machine-api-operator-5694c8668f-7lrzs\" (UID: \"73189faa-e786-4c46-b23e-c9e58d6b0490\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7lrzs" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.764996 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e5975f85-ddfb-4c96-bdc8-da5b3541a769-audit\") pod \"apiserver-76f77b778f-r8h48\" (UID: \"e5975f85-ddfb-4c96-bdc8-da5b3541a769\") " pod="openshift-apiserver/apiserver-76f77b778f-r8h48" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.765019 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/856b0cf5-5731-4842-be1e-b25bb6426674-available-featuregates\") pod \"openshift-config-operator-7777fb866f-vb6hf\" (UID: \"856b0cf5-5731-4842-be1e-b25bb6426674\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vb6hf" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.765041 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/48075d37-56ec-4015-a38a-94068ad47148-client-ca\") pod \"controller-manager-879f6c89f-ph5g6\" (UID: \"48075d37-56ec-4015-a38a-94068ad47148\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-ph5g6" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.765057 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e5975f85-ddfb-4c96-bdc8-da5b3541a769-audit-dir\") pod \"apiserver-76f77b778f-r8h48\" (UID: \"e5975f85-ddfb-4c96-bdc8-da5b3541a769\") " pod="openshift-apiserver/apiserver-76f77b778f-r8h48" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.765071 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5975f85-ddfb-4c96-bdc8-da5b3541a769-trusted-ca-bundle\") pod \"apiserver-76f77b778f-r8h48\" (UID: \"e5975f85-ddfb-4c96-bdc8-da5b3541a769\") " pod="openshift-apiserver/apiserver-76f77b778f-r8h48" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.765088 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/73189faa-e786-4c46-b23e-c9e58d6b0490-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-7lrzs\" (UID: \"73189faa-e786-4c46-b23e-c9e58d6b0490\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7lrzs" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.765106 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4ff1b886-d642-44c3-ba90-b1b4cb1379dd-etcd-client\") pod \"apiserver-7bbb656c7d-4dsc8\" (UID: \"4ff1b886-d642-44c3-ba90-b1b4cb1379dd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dsc8" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.765131 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e5975f85-ddfb-4c96-bdc8-da5b3541a769-image-import-ca\") pod \"apiserver-76f77b778f-r8h48\" (UID: \"e5975f85-ddfb-4c96-bdc8-da5b3541a769\") " pod="openshift-apiserver/apiserver-76f77b778f-r8h48" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.765151 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4ff1b886-d642-44c3-ba90-b1b4cb1379dd-audit-dir\") pod \"apiserver-7bbb656c7d-4dsc8\" (UID: \"4ff1b886-d642-44c3-ba90-b1b4cb1379dd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dsc8" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.765166 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5975f85-ddfb-4c96-bdc8-da5b3541a769-config\") pod \"apiserver-76f77b778f-r8h48\" (UID: \"e5975f85-ddfb-4c96-bdc8-da5b3541a769\") " pod="openshift-apiserver/apiserver-76f77b778f-r8h48" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.765182 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48075d37-56ec-4015-a38a-94068ad47148-config\") pod \"controller-manager-879f6c89f-ph5g6\" (UID: \"48075d37-56ec-4015-a38a-94068ad47148\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ph5g6" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.765197 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48075d37-56ec-4015-a38a-94068ad47148-serving-cert\") pod \"controller-manager-879f6c89f-ph5g6\" (UID: 
\"48075d37-56ec-4015-a38a-94068ad47148\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ph5g6" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.765233 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65sq5\" (UniqueName: \"kubernetes.io/projected/856b0cf5-5731-4842-be1e-b25bb6426674-kube-api-access-65sq5\") pod \"openshift-config-operator-7777fb866f-vb6hf\" (UID: \"856b0cf5-5731-4842-be1e-b25bb6426674\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vb6hf" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.765250 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ff1b886-d642-44c3-ba90-b1b4cb1379dd-serving-cert\") pod \"apiserver-7bbb656c7d-4dsc8\" (UID: \"4ff1b886-d642-44c3-ba90-b1b4cb1379dd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dsc8" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.765267 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nskgv\" (UniqueName: \"kubernetes.io/projected/4ff1b886-d642-44c3-ba90-b1b4cb1379dd-kube-api-access-nskgv\") pod \"apiserver-7bbb656c7d-4dsc8\" (UID: \"4ff1b886-d642-44c3-ba90-b1b4cb1379dd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dsc8" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.765283 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e5975f85-ddfb-4c96-bdc8-da5b3541a769-etcd-client\") pod \"apiserver-76f77b778f-r8h48\" (UID: \"e5975f85-ddfb-4c96-bdc8-da5b3541a769\") " pod="openshift-apiserver/apiserver-76f77b778f-r8h48" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.765310 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4ff1b886-d642-44c3-ba90-b1b4cb1379dd-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-4dsc8\" (UID: \"4ff1b886-d642-44c3-ba90-b1b4cb1379dd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dsc8" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.765876 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4ff1b886-d642-44c3-ba90-b1b4cb1379dd-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-4dsc8\" (UID: \"4ff1b886-d642-44c3-ba90-b1b4cb1379dd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dsc8" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.765935 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e5975f85-ddfb-4c96-bdc8-da5b3541a769-node-pullsecrets\") pod \"apiserver-76f77b778f-r8h48\" (UID: \"e5975f85-ddfb-4c96-bdc8-da5b3541a769\") " pod="openshift-apiserver/apiserver-76f77b778f-r8h48" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.767393 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/856b0cf5-5731-4842-be1e-b25bb6426674-available-featuregates\") pod \"openshift-config-operator-7777fb866f-vb6hf\" (UID: \"856b0cf5-5731-4842-be1e-b25bb6426674\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vb6hf" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.768196 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/e5975f85-ddfb-4c96-bdc8-da5b3541a769-etcd-serving-ca\") pod \"apiserver-76f77b778f-r8h48\" (UID: \"e5975f85-ddfb-4c96-bdc8-da5b3541a769\") " pod="openshift-apiserver/apiserver-76f77b778f-r8h48" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.771005 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e5975f85-ddfb-4c96-bdc8-da5b3541a769-audit-dir\") pod \"apiserver-76f77b778f-r8h48\" (UID: \"e5975f85-ddfb-4c96-bdc8-da5b3541a769\") " pod="openshift-apiserver/apiserver-76f77b778f-r8h48" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.771726 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5975f85-ddfb-4c96-bdc8-da5b3541a769-trusted-ca-bundle\") pod \"apiserver-76f77b778f-r8h48\" (UID: \"e5975f85-ddfb-4c96-bdc8-da5b3541a769\") " pod="openshift-apiserver/apiserver-76f77b778f-r8h48" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.772314 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4ff1b886-d642-44c3-ba90-b1b4cb1379dd-audit-policies\") pod \"apiserver-7bbb656c7d-4dsc8\" (UID: \"4ff1b886-d642-44c3-ba90-b1b4cb1379dd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dsc8" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.772409 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/73189faa-e786-4c46-b23e-c9e58d6b0490-images\") pod \"machine-api-operator-5694c8668f-7lrzs\" (UID: \"73189faa-e786-4c46-b23e-c9e58d6b0490\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7lrzs" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.770950 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73189faa-e786-4c46-b23e-c9e58d6b0490-config\") pod \"machine-api-operator-5694c8668f-7lrzs\" (UID: \"73189faa-e786-4c46-b23e-c9e58d6b0490\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7lrzs" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.773360 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/48075d37-56ec-4015-a38a-94068ad47148-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-ph5g6\" (UID: \"48075d37-56ec-4015-a38a-94068ad47148\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ph5g6" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.774055 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e5975f85-ddfb-4c96-bdc8-da5b3541a769-audit\") pod \"apiserver-76f77b778f-r8h48\" (UID: \"e5975f85-ddfb-4c96-bdc8-da5b3541a769\") " pod="openshift-apiserver/apiserver-76f77b778f-r8h48" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.774328 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4ff1b886-d642-44c3-ba90-b1b4cb1379dd-audit-dir\") pod \"apiserver-7bbb656c7d-4dsc8\" (UID: \"4ff1b886-d642-44c3-ba90-b1b4cb1379dd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dsc8" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.774996 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e5975f85-ddfb-4c96-bdc8-da5b3541a769-config\") pod \"apiserver-76f77b778f-r8h48\" (UID: \"e5975f85-ddfb-4c96-bdc8-da5b3541a769\") " pod="openshift-apiserver/apiserver-76f77b778f-r8h48" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.775194 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e5975f85-ddfb-4c96-bdc8-da5b3541a769-image-import-ca\") pod \"apiserver-76f77b778f-r8h48\" (UID: \"e5975f85-ddfb-4c96-bdc8-da5b3541a769\") " pod="openshift-apiserver/apiserver-76f77b778f-r8h48" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.775297 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48075d37-56ec-4015-a38a-94068ad47148-config\") pod \"controller-manager-879f6c89f-ph5g6\" (UID: \"48075d37-56ec-4015-a38a-94068ad47148\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ph5g6" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.776254 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.783454 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.776786 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5q4vb"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.778502 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4ff1b886-d642-44c3-ba90-b1b4cb1379dd-etcd-client\") pod \"apiserver-7bbb656c7d-4dsc8\" (UID: \"4ff1b886-d642-44c3-ba90-b1b4cb1379dd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dsc8" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.778632 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5975f85-ddfb-4c96-bdc8-da5b3541a769-serving-cert\") pod \"apiserver-76f77b778f-r8h48\" (UID: \"e5975f85-ddfb-4c96-bdc8-da5b3541a769\") " pod="openshift-apiserver/apiserver-76f77b778f-r8h48" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.778966 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e5975f85-ddfb-4c96-bdc8-da5b3541a769-encryption-config\") pod \"apiserver-76f77b778f-r8h48\" (UID: \"e5975f85-ddfb-4c96-bdc8-da5b3541a769\") " pod="openshift-apiserver/apiserver-76f77b778f-r8h48" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.779104 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4ff1b886-d642-44c3-ba90-b1b4cb1379dd-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-4dsc8\" (UID: \"4ff1b886-d642-44c3-ba90-b1b4cb1379dd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dsc8" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.778542 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/48075d37-56ec-4015-a38a-94068ad47148-client-ca\") pod \"controller-manager-879f6c89f-ph5g6\" (UID: \"48075d37-56ec-4015-a38a-94068ad47148\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ph5g6" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 
13:32:40.779669 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e5975f85-ddfb-4c96-bdc8-da5b3541a769-etcd-client\") pod \"apiserver-76f77b778f-r8h48\" (UID: \"e5975f85-ddfb-4c96-bdc8-da5b3541a769\") " pod="openshift-apiserver/apiserver-76f77b778f-r8h48" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.779808 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48075d37-56ec-4015-a38a-94068ad47148-serving-cert\") pod \"controller-manager-879f6c89f-ph5g6\" (UID: \"48075d37-56ec-4015-a38a-94068ad47148\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ph5g6" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.780138 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4ff1b886-d642-44c3-ba90-b1b4cb1379dd-encryption-config\") pod \"apiserver-7bbb656c7d-4dsc8\" (UID: \"4ff1b886-d642-44c3-ba90-b1b4cb1379dd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dsc8" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.780433 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ff1b886-d642-44c3-ba90-b1b4cb1379dd-serving-cert\") pod \"apiserver-7bbb656c7d-4dsc8\" (UID: \"4ff1b886-d642-44c3-ba90-b1b4cb1379dd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dsc8" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.781722 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/856b0cf5-5731-4842-be1e-b25bb6426674-serving-cert\") pod \"openshift-config-operator-7777fb866f-vb6hf\" (UID: \"856b0cf5-5731-4842-be1e-b25bb6426674\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vb6hf" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.781863 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/73189faa-e786-4c46-b23e-c9e58d6b0490-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-7lrzs\" (UID: \"73189faa-e786-4c46-b23e-c9e58d6b0490\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7lrzs" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.802919 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-6h2sh"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.803283 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5q4vb" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.803523 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ph5g6"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.803546 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dhxrb"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.803907 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-bln95"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.804246 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-7lrzs"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.804265 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-bffts"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.804330 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-6h2sh" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.804352 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dhxrb" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.804841 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-bln95" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.804964 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-4dsc8"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.804987 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-r8h48"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.804997 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-sjvr2"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.805007 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w2b9q"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.805018 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2tpnt"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.805027 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-h5zhd"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.805038 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-vb6hf"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.805048 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-hqds7"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.805057 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l22qw"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.805066 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bhv5s"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.805076 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-s2tz5"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.805092 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-g2956"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.805101 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-46262"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.805109 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-ttlpm"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.805119 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-cb8tr"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.805131 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nmjx4"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.805142 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ttgps"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.805153 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dwxcs"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.805217 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-bffts" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.805250 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-5ntgp"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.805264 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-vplw6"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.805817 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2ztc2"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.805853 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29466090-q6cvw"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.805866 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6s6h2"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.805927 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-vplw6" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.807075 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-bln95"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.808283 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dpvdp"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.809332 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-66425"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.810880 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k4mjm"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.812137 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-t7d9m"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.813282 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-twpss"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.814356 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mv7fj"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.815416 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9gc57"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.816636 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dhxrb"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.820243 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5q4vb"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.821411 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-bffts"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.822561 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.825454 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-hnfjx"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.825881 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xwk7t"] Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.826685 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-xwk7t" Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.826882 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-hnfjx"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.830034 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-vplw6"]
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.832598 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-6h2sh"]
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.835479 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-hnfjx"]
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.835573 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xwk7t"]
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.836916 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-nlnnr"]
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.837583 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-nlnnr"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.842749 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.862331 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.902760 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.923075 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.948816 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.963636 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.967532 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d283d70b-0dbe-4059-aa3a-f05d029cb3ab-registry-certificates\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.967568 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1969d897-8c50-47f9-90eb-4e9995d3b8d0-machine-approver-tls\") pod \"machine-approver-56656f9798-sd2mk\" (UID: \"1969d897-8c50-47f9-90eb-4e9995d3b8d0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sd2mk"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.967588 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4rj4\" (UniqueName: \"kubernetes.io/projected/d283d70b-0dbe-4059-aa3a-f05d029cb3ab-kube-api-access-h4rj4\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.967620 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-audit-policies\") pod \"oauth-openshift-558db77b4-s2tz5\" (UID: \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2tz5"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.967639 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkb25\" (UniqueName: \"kubernetes.io/projected/34f52914-d7ad-4273-870a-d1be6c03b766-kube-api-access-gkb25\") pod \"openshift-apiserver-operator-796bbdcf4f-l22qw\" (UID: \"34f52914-d7ad-4273-870a-d1be6c03b766\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l22qw"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.967714 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.967781 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d283d70b-0dbe-4059-aa3a-f05d029cb3ab-registry-tls\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.967835 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6c544528-982d-44c6-bdb9-9fde7a83be80-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-w2b9q\" (UID: \"6c544528-982d-44c6-bdb9-9fde7a83be80\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w2b9q"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.967864 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-s2tz5\" (UID: \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2tz5"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.967887 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34f52914-d7ad-4273-870a-d1be6c03b766-config\") pod \"openshift-apiserver-operator-796bbdcf4f-l22qw\" (UID: \"34f52914-d7ad-4273-870a-d1be6c03b766\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l22qw"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.967966 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e5c6a4ca-0f89-4286-b0f3-c67039d9d8a8-auth-proxy-config\") pod \"machine-config-operator-74547568cd-g2956\" (UID: \"e5c6a4ca-0f89-4286-b0f3-c67039d9d8a8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g2956"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.968006 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1969d897-8c50-47f9-90eb-4e9995d3b8d0-auth-proxy-config\") pod \"machine-approver-56656f9798-sd2mk\" (UID: \"1969d897-8c50-47f9-90eb-4e9995d3b8d0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sd2mk"
Jan 09 13:32:40 crc kubenswrapper[4919]: E0109 13:32:40.968021 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:41.468009581 +0000 UTC m=+141.015849021 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.968109 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/840f8ce4-e7b0-4def-b619-2a4252624256-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-bhv5s\" (UID: \"840f8ce4-e7b0-4def-b619-2a4252624256\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bhv5s"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.968161 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt6cm\" (UniqueName: \"kubernetes.io/projected/840f8ce4-e7b0-4def-b619-2a4252624256-kube-api-access-lt6cm\") pod \"openshift-controller-manager-operator-756b6f6bc6-bhv5s\" (UID: \"840f8ce4-e7b0-4def-b619-2a4252624256\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bhv5s"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.968248 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/840f8ce4-e7b0-4def-b619-2a4252624256-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-bhv5s\" (UID: \"840f8ce4-e7b0-4def-b619-2a4252624256\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bhv5s"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.968282 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gdpn\" (UniqueName: \"kubernetes.io/projected/1969d897-8c50-47f9-90eb-4e9995d3b8d0-kube-api-access-2gdpn\") pod \"machine-approver-56656f9798-sd2mk\" (UID: \"1969d897-8c50-47f9-90eb-4e9995d3b8d0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sd2mk"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.968460 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-s2tz5\" (UID: \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2tz5"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.968538 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1969d897-8c50-47f9-90eb-4e9995d3b8d0-config\") pod \"machine-approver-56656f9798-sd2mk\" (UID: \"1969d897-8c50-47f9-90eb-4e9995d3b8d0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sd2mk"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.968591 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-s2tz5\" (UID: \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2tz5"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.968630 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw5wl\" (UniqueName: \"kubernetes.io/projected/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-kube-api-access-gw5wl\") pod \"oauth-openshift-558db77b4-s2tz5\" (UID: \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2tz5"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.968665 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e5c6a4ca-0f89-4286-b0f3-c67039d9d8a8-proxy-tls\") pod \"machine-config-operator-74547568cd-g2956\" (UID: \"e5c6a4ca-0f89-4286-b0f3-c67039d9d8a8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g2956"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.968682 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68296f23-034f-4dfd-bb8d-879beafa7ad0-config\") pod \"kube-apiserver-operator-766d6c64bb-nmjx4\" (UID: \"68296f23-034f-4dfd-bb8d-879beafa7ad0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nmjx4"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.968698 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrhcv\" (UniqueName: \"kubernetes.io/projected/077f68de-b2f6-4bbb-8702-81523f9dc7ab-kube-api-access-mrhcv\") pod \"downloads-7954f5f757-sjvr2\" (UID: \"077f68de-b2f6-4bbb-8702-81523f9dc7ab\") " pod="openshift-console/downloads-7954f5f757-sjvr2"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.968716 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-s2tz5\" (UID: \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2tz5"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.968748 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-audit-dir\") pod \"oauth-openshift-558db77b4-s2tz5\" (UID: \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2tz5"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.968770 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d283d70b-0dbe-4059-aa3a-f05d029cb3ab-installation-pull-secrets\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.968786 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-s2tz5\" (UID: \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2tz5"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.968838 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-s2tz5\" (UID: \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2tz5"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.968856 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-s2tz5\" (UID: \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2tz5"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.968876 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6c544528-982d-44c6-bdb9-9fde7a83be80-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-w2b9q\" (UID: \"6c544528-982d-44c6-bdb9-9fde7a83be80\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w2b9q"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.968892 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68296f23-034f-4dfd-bb8d-879beafa7ad0-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-nmjx4\" (UID: \"68296f23-034f-4dfd-bb8d-879beafa7ad0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nmjx4"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.968908 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d283d70b-0dbe-4059-aa3a-f05d029cb3ab-ca-trust-extracted\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.968927 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7b28\" (UniqueName: \"kubernetes.io/projected/6c544528-982d-44c6-bdb9-9fde7a83be80-kube-api-access-g7b28\") pod \"cluster-image-registry-operator-dc59b4c8b-w2b9q\" (UID: \"6c544528-982d-44c6-bdb9-9fde7a83be80\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w2b9q"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.968944 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt4tk\" (UniqueName: \"kubernetes.io/projected/e5c6a4ca-0f89-4286-b0f3-c67039d9d8a8-kube-api-access-bt4tk\") pod \"machine-config-operator-74547568cd-g2956\" (UID: \"e5c6a4ca-0f89-4286-b0f3-c67039d9d8a8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g2956"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.968970 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6c544528-982d-44c6-bdb9-9fde7a83be80-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-w2b9q\" (UID: \"6c544528-982d-44c6-bdb9-9fde7a83be80\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w2b9q"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.968987 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-s2tz5\" (UID: \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2tz5"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.969005 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e5c6a4ca-0f89-4286-b0f3-c67039d9d8a8-images\") pod \"machine-config-operator-74547568cd-g2956\" (UID: \"e5c6a4ca-0f89-4286-b0f3-c67039d9d8a8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g2956"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.969026 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-s2tz5\" (UID: \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2tz5"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.969046 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-s2tz5\" (UID: \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2tz5"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.969105 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d283d70b-0dbe-4059-aa3a-f05d029cb3ab-trusted-ca\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.969125 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34f52914-d7ad-4273-870a-d1be6c03b766-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-l22qw\" (UID: \"34f52914-d7ad-4273-870a-d1be6c03b766\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l22qw"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.969190 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-s2tz5\" (UID: \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2tz5"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.969241 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d283d70b-0dbe-4059-aa3a-f05d029cb3ab-bound-sa-token\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.969273 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/68296f23-034f-4dfd-bb8d-879beafa7ad0-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-nmjx4\" (UID: \"68296f23-034f-4dfd-bb8d-879beafa7ad0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nmjx4"
Jan 09 13:32:40 crc kubenswrapper[4919]: I0109 13:32:40.983030 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.002451 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.022154 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.043981 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.063383 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.070655 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 09 13:32:41 crc kubenswrapper[4919]: E0109 13:32:41.070825 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:41.570799036 +0000 UTC m=+141.118638486 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.070870 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/607f4472-6658-48ef-ba52-4b6b097eaa2e-secret-volume\") pod \"collect-profiles-29466090-q6cvw\" (UID: \"607f4472-6658-48ef-ba52-4b6b097eaa2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466090-q6cvw"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.070907 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f22jl\" (UniqueName: \"kubernetes.io/projected/759c9cb1-8b38-429f-84a1-6a1c02619cf7-kube-api-access-f22jl\") pod \"authentication-operator-69f744f599-bln95\" (UID: \"759c9cb1-8b38-429f-84a1-6a1c02619cf7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bln95"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.070936 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.070956 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d9739473-727c-4d34-8083-7a5bccb26be6-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2tpnt\" (UID: \"d9739473-727c-4d34-8083-7a5bccb26be6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2tpnt"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.070981 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4x4q\" (UniqueName: \"kubernetes.io/projected/306b1c8e-d6e2-45e3-8444-5150e5a7d346-kube-api-access-b4x4q\") pod \"multus-admission-controller-857f4d67dd-6h2sh\" (UID: \"306b1c8e-d6e2-45e3-8444-5150e5a7d346\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6h2sh"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.071013 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69e82214-7a8c-4501-afc0-1f7e9d090bcb-config\") pod \"kube-controller-manager-operator-78b949d7b-dpvdp\" (UID: \"69e82214-7a8c-4501-afc0-1f7e9d090bcb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dpvdp"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.071037 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2797691b-7fdf-450b-a02f-429298cf2a70-srv-cert\") pod \"catalog-operator-68c6474976-6s6h2\" (UID: \"2797691b-7fdf-450b-a02f-429298cf2a70\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6s6h2"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.071060 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b526dd5-1496-4542-aecb-c908662ef696-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-k4mjm\" (UID: \"7b526dd5-1496-4542-aecb-c908662ef696\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k4mjm"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.071085 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58d19460-8b4f-467d-9bc8-f591dd79992c-serving-cert\") pod \"console-operator-58897d9998-ttlpm\" (UID: \"58d19460-8b4f-467d-9bc8-f591dd79992c\") " pod="openshift-console-operator/console-operator-58897d9998-ttlpm"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.071117 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bvz7\" (UniqueName: \"kubernetes.io/projected/b26729a1-f6f4-44c6-9d39-b5b5e64104bc-kube-api-access-2bvz7\") pod \"ingress-operator-5b745b69d9-h5zhd\" (UID: \"b26729a1-f6f4-44c6-9d39-b5b5e64104bc\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-h5zhd"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.071141 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/58013dad-1347-4da5-8314-495388d1b5c2-oauth-serving-cert\") pod \"console-f9d7485db-bffts\" (UID: \"58013dad-1347-4da5-8314-495388d1b5c2\") " pod="openshift-console/console-f9d7485db-bffts"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.071168 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ea35295-83c1-498b-b190-7dad56fe323b-service-ca-bundle\") pod \"router-default-5444994796-jx754\" (UID: \"0ea35295-83c1-498b-b190-7dad56fe323b\") " pod="openshift-ingress/router-default-5444994796-jx754"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.071191 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c3f993d-59c9-444b-9882-cedb07c01c7a-config\") pod \"route-controller-manager-6576b87f9c-5q4vb\" (UID: \"8c3f993d-59c9-444b-9882-cedb07c01c7a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5q4vb"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.071252 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/840f8ce4-e7b0-4def-b619-2a4252624256-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-bhv5s\" (UID: \"840f8ce4-e7b0-4def-b619-2a4252624256\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bhv5s"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.071273 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/0ea35295-83c1-498b-b190-7dad56fe323b-default-certificate\") pod \"router-default-5444994796-jx754\" (UID: \"0ea35295-83c1-498b-b190-7dad56fe323b\") " pod="openshift-ingress/router-default-5444994796-jx754"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.071292 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gdpn\" (UniqueName: \"kubernetes.io/projected/1969d897-8c50-47f9-90eb-4e9995d3b8d0-kube-api-access-2gdpn\") pod \"machine-approver-56656f9798-sd2mk\" (UID: \"1969d897-8c50-47f9-90eb-4e9995d3b8d0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sd2mk"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.071307 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1969d897-8c50-47f9-90eb-4e9995d3b8d0-config\") pod \"machine-approver-56656f9798-sd2mk\" (UID: \"1969d897-8c50-47f9-90eb-4e9995d3b8d0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sd2mk"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.071356 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/58013dad-1347-4da5-8314-495388d1b5c2-service-ca\") pod \"console-f9d7485db-bffts\" (UID: \"58013dad-1347-4da5-8314-495388d1b5c2\") " pod="openshift-console/console-f9d7485db-bffts"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.071372 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58d19460-8b4f-467d-9bc8-f591dd79992c-config\") pod \"console-operator-58897d9998-ttlpm\" (UID: \"58d19460-8b4f-467d-9bc8-f591dd79992c\") " pod="openshift-console-operator/console-operator-58897d9998-ttlpm"
Jan 09 13:32:41 crc kubenswrapper[4919]: E0109 13:32:41.071424 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:41.571411351 +0000 UTC m=+141.119250801 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.071390 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrhcv\" (UniqueName: \"kubernetes.io/projected/077f68de-b2f6-4bbb-8702-81523f9dc7ab-kube-api-access-mrhcv\") pod \"downloads-7954f5f757-sjvr2\" (UID: \"077f68de-b2f6-4bbb-8702-81523f9dc7ab\") " pod="openshift-console/downloads-7954f5f757-sjvr2"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.071466 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b26729a1-f6f4-44c6-9d39-b5b5e64104bc-bound-sa-token\") pod \"ingress-operator-5b745b69d9-h5zhd\" (UID: \"b26729a1-f6f4-44c6-9d39-b5b5e64104bc\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-h5zhd"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.071484 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-s2tz5\" (UID: \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2tz5"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.071525 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c3f993d-59c9-444b-9882-cedb07c01c7a-serving-cert\") pod \"route-controller-manager-6576b87f9c-5q4vb\" (UID: \"8c3f993d-59c9-444b-9882-cedb07c01c7a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5q4vb"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.071549 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-audit-dir\") pod \"oauth-openshift-558db77b4-s2tz5\" (UID: \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2tz5"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.071609 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-audit-dir\") pod \"oauth-openshift-558db77b4-s2tz5\" (UID: \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2tz5"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.071663 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69e82214-7a8c-4501-afc0-1f7e9d090bcb-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-dpvdp\" (UID: \"69e82214-7a8c-4501-afc0-1f7e9d090bcb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dpvdp"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.071685 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/58013dad-1347-4da5-8314-495388d1b5c2-console-serving-cert\") pod \"console-f9d7485db-bffts\" (UID: \"58013dad-1347-4da5-8314-495388d1b5c2\") " pod="openshift-console/console-f9d7485db-bffts"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.071709 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d283d70b-0dbe-4059-aa3a-f05d029cb3ab-installation-pull-secrets\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.071726 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-s2tz5\" (UID: \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2tz5"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.071743 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd6l4\" (UniqueName: \"kubernetes.io/projected/3f44210f-5f93-426b-852f-1fc6f0e4deb7-kube-api-access-kd6l4\") pod \"olm-operator-6b444d44fb-dhxrb\" (UID: \"3f44210f-5f93-426b-852f-1fc6f0e4deb7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dhxrb"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.071785 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfqg9\" (UniqueName: \"kubernetes.io/projected/bdd22a6f-64ba-4cc7-9cb0-8e62250a9001-kube-api-access-cfqg9\") pod \"etcd-operator-b45778765-t7d9m\" (UID: \"bdd22a6f-64ba-4cc7-9cb0-8e62250a9001\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t7d9m"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.071804 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68296f23-034f-4dfd-bb8d-879beafa7ad0-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-nmjx4\" (UID: \"68296f23-034f-4dfd-bb8d-879beafa7ad0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nmjx4"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.071834 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/54c29b9f-4240-4edd-98aa-cd053a66000e-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-mv7fj\" (UID: \"54c29b9f-4240-4edd-98aa-cd053a66000e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mv7fj"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.071852 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm9v9\" (UniqueName: \"kubernetes.io/projected/8446e162-cf3d-4afd-8dfc-92b5b6d66d64-kube-api-access-tm9v9\") pod \"service-ca-9c57cc56f-cb8tr\" (UID: \"8446e162-cf3d-4afd-8dfc-92b5b6d66d64\") " pod="openshift-service-ca/service-ca-9c57cc56f-cb8tr"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.071873 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bt4tk\" (UniqueName: \"kubernetes.io/projected/e5c6a4ca-0f89-4286-b0f3-c67039d9d8a8-kube-api-access-bt4tk\") pod \"machine-config-operator-74547568cd-g2956\" (UID: \"e5c6a4ca-0f89-4286-b0f3-c67039d9d8a8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g2956"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.071933 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6c544528-982d-44c6-bdb9-9fde7a83be80-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-w2b9q\" (UID: \"6c544528-982d-44c6-bdb9-9fde7a83be80\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w2b9q"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.071950 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/9f7cb04a-b39d-4777-b8d5-8c0741134433-tmpfs\") pod \"packageserver-d55dfcdfc-dwxcs\" (UID: \"9f7cb04a-b39d-4777-b8d5-8c0741134433\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dwxcs"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.071955 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/840f8ce4-e7b0-4def-b619-2a4252624256-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-bhv5s\" (UID: \"840f8ce4-e7b0-4def-b619-2a4252624256\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bhv5s"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.072011 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p5xc\" (UniqueName: \"kubernetes.io/projected/7b526dd5-1496-4542-aecb-c908662ef696-kube-api-access-2p5xc\") pod \"kube-storage-version-migrator-operator-b67b599dd-k4mjm\" (UID: \"7b526dd5-1496-4542-aecb-c908662ef696\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k4mjm"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.072038 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e5c6a4ca-0f89-4286-b0f3-c67039d9d8a8-images\") pod \"machine-config-operator-74547568cd-g2956\" (UID: \"e5c6a4ca-0f89-4286-b0f3-c67039d9d8a8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g2956"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.072082 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73f4afd2-691f-4749-b361-d99c9482a35b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-66425\" (UID: \"73f4afd2-691f-4749-b361-d99c9482a35b\") " pod="openshift-marketplace/marketplace-operator-79b997595-66425"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.072099 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/607f4472-6658-48ef-ba52-4b6b097eaa2e-config-volume\") pod \"collect-profiles-29466090-q6cvw\" (UID: \"607f4472-6658-48ef-ba52-4b6b097eaa2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466090-q6cvw"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.072154 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/759c9cb1-8b38-429f-84a1-6a1c02619cf7-config\") pod \"authentication-operator-69f744f599-bln95\" (UID: \"759c9cb1-8b38-429f-84a1-6a1c02619cf7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bln95"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.072177 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/0ea35295-83c1-498b-b190-7dad56fe323b-stats-auth\") pod \"router-default-5444994796-jx754\" (UID: \"0ea35295-83c1-498b-b190-7dad56fe323b\") " pod="openshift-ingress/router-default-5444994796-jx754"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.072199 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1969d897-8c50-47f9-90eb-4e9995d3b8d0-config\") pod \"machine-approver-56656f9798-sd2mk\" (UID: \"1969d897-8c50-47f9-90eb-4e9995d3b8d0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sd2mk"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.072263 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/8446e162-cf3d-4afd-8dfc-92b5b6d66d64-signing-cabundle\") pod \"service-ca-9c57cc56f-cb8tr\" (UID: \"8446e162-cf3d-4afd-8dfc-92b5b6d66d64\") " pod="openshift-service-ca/service-ca-9c57cc56f-cb8tr"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.072323 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5b27b30e-8a1e-4c12-ad5a-530c640bf23d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-twpss\" (UID: \"5b27b30e-8a1e-4c12-ad5a-530c640bf23d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-twpss"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.072349 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7rhq\" (UniqueName: \"kubernetes.io/projected/9f7cb04a-b39d-4777-b8d5-8c0741134433-kube-api-access-n7rhq\") pod \"packageserver-d55dfcdfc-dwxcs\" (UID: \"9f7cb04a-b39d-4777-b8d5-8c0741134433\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dwxcs"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.072373 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/82674f62-f752-4e34-85e4-fc0678f6aca9-config-volume\") pod \"dns-default-vplw6\" (UID: \"82674f62-f752-4e34-85e4-fc0678f6aca9\") " pod="openshift-dns/dns-default-vplw6"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.072507 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzhcb\" (UniqueName: \"kubernetes.io/projected/0ea35295-83c1-498b-b190-7dad56fe323b-kube-api-access-rzhcb\") pod \"router-default-5444994796-jx754\" (UID: \"0ea35295-83c1-498b-b190-7dad56fe323b\") " pod="openshift-ingress/router-default-5444994796-jx754"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.072534 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/1ab58f80-d33a-4525-8c70-916d566b2521-plugins-dir\") pod \"csi-hostpathplugin-xwk7t\" (UID: \"1ab58f80-d33a-4525-8c70-916d566b2521\") " pod="hostpath-provisioner/csi-hostpathplugin-xwk7t"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.072556 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d283d70b-0dbe-4059-aa3a-f05d029cb3ab-registry-certificates\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.072575 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1969d897-8c50-47f9-90eb-4e9995d3b8d0-machine-approver-tls\") pod \"machine-approver-56656f9798-sd2mk\" (UID: \"1969d897-8c50-47f9-90eb-4e9995d3b8d0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sd2mk"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.072603 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/bdd22a6f-64ba-4cc7-9cb0-8e62250a9001-etcd-ca\") pod \"etcd-operator-b45778765-t7d9m\" (UID: \"bdd22a6f-64ba-4cc7-9cb0-8e62250a9001\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t7d9m"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.072624 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ba7db551-cd6a-4d50-98a5-2d532f893e7a-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9gc57\" (UID: \"ba7db551-cd6a-4d50-98a5-2d532f893e7a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9gc57"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.072640 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/73f4afd2-691f-4749-b361-d99c9482a35b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-66425\" (UID: \"73f4afd2-691f-4749-b361-d99c9482a35b\") " pod="openshift-marketplace/marketplace-operator-79b997595-66425"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.072656 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j77wn\" (UniqueName: \"kubernetes.io/projected/b85eabc9-9f0c-45f7-941f-e329f3022b74-kube-api-access-j77wn\") pod \"dns-operator-744455d44c-2ztc2\" (UID: \"b85eabc9-9f0c-45f7-941f-e329f3022b74\") " pod="openshift-dns-operator/dns-operator-744455d44c-2ztc2"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.072678 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkb25\" (UniqueName: \"kubernetes.io/projected/34f52914-d7ad-4273-870a-d1be6c03b766-kube-api-access-gkb25\") pod \"openshift-apiserver-operator-796bbdcf4f-l22qw\" (UID: \"34f52914-d7ad-4273-870a-d1be6c03b766\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l22qw"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.072688 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e5c6a4ca-0f89-4286-b0f3-c67039d9d8a8-images\") pod \"machine-config-operator-74547568cd-g2956\" (UID: \"e5c6a4ca-0f89-4286-b0f3-c67039d9d8a8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g2956"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.072694 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzjkr\" (UniqueName: \"kubernetes.io/projected/ba7db551-cd6a-4d50-98a5-2d532f893e7a-kube-api-access-dzjkr\") pod \"package-server-manager-789f6589d5-9gc57\" (UID: \"ba7db551-cd6a-4d50-98a5-2d532f893e7a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9gc57"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.072748 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2797691b-7fdf-450b-a02f-429298cf2a70-profile-collector-cert\") pod \"catalog-operator-68c6474976-6s6h2\" (UID: \"2797691b-7fdf-450b-a02f-429298cf2a70\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6s6h2"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.072768 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4h7gm\" (UniqueName: \"kubernetes.io/projected/73f4afd2-691f-4749-b361-d99c9482a35b-kube-api-access-4h7gm\") pod \"marketplace-operator-79b997595-66425\" (UID: \"73f4afd2-691f-4749-b361-d99c9482a35b\") " pod="openshift-marketplace/marketplace-operator-79b997595-66425"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.072786 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69e82214-7a8c-4501-afc0-1f7e9d090bcb-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-dpvdp\" (UID: \"69e82214-7a8c-4501-afc0-1f7e9d090bcb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dpvdp"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.072818 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1ab58f80-d33a-4525-8c70-916d566b2521-registration-dir\") pod \"csi-hostpathplugin-xwk7t\" (UID: \"1ab58f80-d33a-4525-8c70-916d566b2521\") " pod="hostpath-provisioner/csi-hostpathplugin-xwk7t"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.072836 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d283d70b-0dbe-4059-aa3a-f05d029cb3ab-registry-tls\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.072858 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34f52914-d7ad-4273-870a-d1be6c03b766-config\") pod \"openshift-apiserver-operator-796bbdcf4f-l22qw\" (UID: \"34f52914-d7ad-4273-870a-d1be6c03b766\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l22qw"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.072879 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d56p9\" (UniqueName: \"kubernetes.io/projected/8f61ba90-6fa6-4eb4-a496-d05c70940365-kube-api-access-d56p9\") pod \"machine-config-controller-84d6567774-hqds7\" (UID: \"8f61ba90-6fa6-4eb4-a496-d05c70940365\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hqds7"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.072896 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6c544528-982d-44c6-bdb9-9fde7a83be80-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-w2b9q\" (UID: \"6c544528-982d-44c6-bdb9-9fde7a83be80\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w2b9q"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.072912 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-s2tz5\" (UID: \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2tz5"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.072971 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/759c9cb1-8b38-429f-84a1-6a1c02619cf7-service-ca-bundle\") pod \"authentication-operator-69f744f599-bln95\" (UID: \"759c9cb1-8b38-429f-84a1-6a1c02619cf7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bln95"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.072998 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e5c6a4ca-0f89-4286-b0f3-c67039d9d8a8-auth-proxy-config\") pod \"machine-config-operator-74547568cd-g2956\" (UID: \"e5c6a4ca-0f89-4286-b0f3-c67039d9d8a8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g2956"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.073053 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9739473-727c-4d34-8083-7a5bccb26be6-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2tpnt\" (UID: \"d9739473-727c-4d34-8083-7a5bccb26be6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2tpnt"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.073073 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bdd22a6f-64ba-4cc7-9cb0-8e62250a9001-serving-cert\") pod \"etcd-operator-b45778765-t7d9m\" (UID: \"bdd22a6f-64ba-4cc7-9cb0-8e62250a9001\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t7d9m"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.073108 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-s2tz5\" (UID: \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2tz5"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.073128 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1969d897-8c50-47f9-90eb-4e9995d3b8d0-auth-proxy-config\") pod \"machine-approver-56656f9798-sd2mk\" (UID: \"1969d897-8c50-47f9-90eb-4e9995d3b8d0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sd2mk"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.073153 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxfsx\" (UniqueName: \"kubernetes.io/projected/54c29b9f-4240-4edd-98aa-cd053a66000e-kube-api-access-wxfsx\") pod \"cluster-samples-operator-665b6dd947-mv7fj\" (UID: \"54c29b9f-4240-4edd-98aa-cd053a66000e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mv7fj"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.073170 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8c3f993d-59c9-444b-9882-cedb07c01c7a-client-ca\") pod \"route-controller-manager-6576b87f9c-5q4vb\" (UID: \"8c3f993d-59c9-444b-9882-cedb07c01c7a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5q4vb"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.073361 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/58d19460-8b4f-467d-9bc8-f591dd79992c-trusted-ca\") pod \"console-operator-58897d9998-ttlpm\" (UID: \"58d19460-8b4f-467d-9bc8-f591dd79992c\") " pod="openshift-console-operator/console-operator-58897d9998-ttlpm"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.073554 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdd22a6f-64ba-4cc7-9cb0-8e62250a9001-config\") pod \"etcd-operator-b45778765-t7d9m\" (UID: \"bdd22a6f-64ba-4cc7-9cb0-8e62250a9001\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t7d9m"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.073613 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/840f8ce4-e7b0-4def-b619-2a4252624256-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-bhv5s\" (UID: \"840f8ce4-e7b0-4def-b619-2a4252624256\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bhv5s"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.073638 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e5c6a4ca-0f89-4286-b0f3-c67039d9d8a8-auth-proxy-config\") pod \"machine-config-operator-74547568cd-g2956\" (UID: \"e5c6a4ca-0f89-4286-b0f3-c67039d9d8a8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g2956"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.073661 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lt6cm\" (UniqueName: \"kubernetes.io/projected/840f8ce4-e7b0-4def-b619-2a4252624256-kube-api-access-lt6cm\") pod \"openshift-controller-manager-operator-756b6f6bc6-bhv5s\" (UID: \"840f8ce4-e7b0-4def-b619-2a4252624256\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bhv5s"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.073737 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3f44210f-5f93-426b-852f-1fc6f0e4deb7-srv-cert\") pod \"olm-operator-6b444d44fb-dhxrb\" (UID: \"3f44210f-5f93-426b-852f-1fc6f0e4deb7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dhxrb"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.073759 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34f52914-d7ad-4273-870a-d1be6c03b766-config\") pod \"openshift-apiserver-operator-796bbdcf4f-l22qw\" (UID: \"34f52914-d7ad-4273-870a-d1be6c03b766\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l22qw"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.073821 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b85eabc9-9f0c-45f7-941f-e329f3022b74-metrics-tls\") pod \"dns-operator-744455d44c-2ztc2\" (UID: \"b85eabc9-9f0c-45f7-941f-e329f3022b74\") " pod="openshift-dns-operator/dns-operator-744455d44c-2ztc2"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.073877 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbblk\" (UniqueName: \"kubernetes.io/projected/5b27b30e-8a1e-4c12-ad5a-530c640bf23d-kube-api-access-xbblk\") pod \"control-plane-machine-set-operator-78cbb6b69f-twpss\" (UID: \"5b27b30e-8a1e-4c12-ad5a-530c640bf23d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-twpss"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.073916 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b526dd5-1496-4542-aecb-c908662ef696-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-k4mjm\" (UID: \"7b526dd5-1496-4542-aecb-c908662ef696\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k4mjm"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.073964 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-s2tz5\" (UID: \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2tz5"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.073976 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1969d897-8c50-47f9-90eb-4e9995d3b8d0-auth-proxy-config\") pod \"machine-approver-56656f9798-sd2mk\" (UID: \"1969d897-8c50-47f9-90eb-4e9995d3b8d0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sd2mk"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.073994 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gw5wl\" (UniqueName: \"kubernetes.io/projected/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-kube-api-access-gw5wl\") pod \"oauth-openshift-558db77b4-s2tz5\" (UID: \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2tz5"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.074022 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-s2tz5\" (UID: \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2tz5"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.074048 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ffc58cf-c5fb-450f-abeb-7fd513919fde-serving-cert\") pod \"service-ca-operator-777779d784-5ntgp\" (UID: \"5ffc58cf-c5fb-450f-abeb-7fd513919fde\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5ntgp"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.074077 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e5c6a4ca-0f89-4286-b0f3-c67039d9d8a8-proxy-tls\") pod \"machine-config-operator-74547568cd-g2956\" (UID: \"e5c6a4ca-0f89-4286-b0f3-c67039d9d8a8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g2956"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.074102 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68296f23-034f-4dfd-bb8d-879beafa7ad0-config\") pod \"kube-apiserver-operator-766d6c64bb-nmjx4\" (UID: \"68296f23-034f-4dfd-bb8d-879beafa7ad0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nmjx4"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.074131 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/50176685-fd3c-4f19-8a96-ebc4957ca412-node-bootstrap-token\") pod \"machine-config-server-nlnnr\" (UID: \"50176685-fd3c-4f19-8a96-ebc4957ca412\") " pod="openshift-machine-config-operator/machine-config-server-nlnnr"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.074160 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9f7cb04a-b39d-4777-b8d5-8c0741134433-apiservice-cert\") pod \"packageserver-d55dfcdfc-dwxcs\" (UID: \"9f7cb04a-b39d-4777-b8d5-8c0741134433\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dwxcs"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.074185 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/759c9cb1-8b38-429f-84a1-6a1c02619cf7-serving-cert\") pod \"authentication-operator-69f744f599-bln95\" (UID: \"759c9cb1-8b38-429f-84a1-6a1c02619cf7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bln95"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.074227 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/82674f62-f752-4e34-85e4-fc0678f6aca9-metrics-tls\") pod \"dns-default-vplw6\" (UID: \"82674f62-f752-4e34-85e4-fc0678f6aca9\") " pod="openshift-dns/dns-default-vplw6"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.074266 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/8446e162-cf3d-4afd-8dfc-92b5b6d66d64-signing-key\") pod \"service-ca-9c57cc56f-cb8tr\" (UID: \"8446e162-cf3d-4afd-8dfc-92b5b6d66d64\") " pod="openshift-service-ca/service-ca-9c57cc56f-cb8tr"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.074286 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-s2tz5\" (UID: \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2tz5"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.074303 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1ab58f80-d33a-4525-8c70-916d566b2521-socket-dir\") pod \"csi-hostpathplugin-xwk7t\" (UID: \"1ab58f80-d33a-4525-8c70-916d566b2521\") " pod="hostpath-provisioner/csi-hostpathplugin-xwk7t"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.074320 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6c544528-982d-44c6-bdb9-9fde7a83be80-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-w2b9q\" (UID: \"6c544528-982d-44c6-bdb9-9fde7a83be80\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w2b9q"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.074346 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-s2tz5\" (UID: \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2tz5"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.074362 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpxgb\" (UniqueName: \"kubernetes.io/projected/8c3f993d-59c9-444b-9882-cedb07c01c7a-kube-api-access-vpxgb\") pod \"route-controller-manager-6576b87f9c-5q4vb\" (UID: \"8c3f993d-59c9-444b-9882-cedb07c01c7a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5q4vb"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.074380 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d283d70b-0dbe-4059-aa3a-f05d029cb3ab-ca-trust-extracted\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.074396 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7b28\" (UniqueName: \"kubernetes.io/projected/6c544528-982d-44c6-bdb9-9fde7a83be80-kube-api-access-g7b28\") pod \"cluster-image-registry-operator-dc59b4c8b-w2b9q\" (UID: \"6c544528-982d-44c6-bdb9-9fde7a83be80\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w2b9q"
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.074414 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/58013dad-1347-4da5-8314-495388d1b5c2-console-config\") pod \"console-f9d7485db-bffts\" (UID:
\"58013dad-1347-4da5-8314-495388d1b5c2\") " pod="openshift-console/console-f9d7485db-bffts" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.074445 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/1ab58f80-d33a-4525-8c70-916d566b2521-csi-data-dir\") pod \"csi-hostpathplugin-xwk7t\" (UID: \"1ab58f80-d33a-4525-8c70-916d566b2521\") " pod="hostpath-provisioner/csi-hostpathplugin-xwk7t" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.074462 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-s2tz5\" (UID: \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2tz5" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.074479 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd2k9\" (UniqueName: \"kubernetes.io/projected/58d19460-8b4f-467d-9bc8-f591dd79992c-kube-api-access-xd2k9\") pod \"console-operator-58897d9998-ttlpm\" (UID: \"58d19460-8b4f-467d-9bc8-f591dd79992c\") " pod="openshift-console-operator/console-operator-58897d9998-ttlpm" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.074497 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ffc58cf-c5fb-450f-abeb-7fd513919fde-config\") pod \"service-ca-operator-777779d784-5ntgp\" (UID: \"5ffc58cf-c5fb-450f-abeb-7fd513919fde\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5ntgp" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.074513 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7thdl\" (UniqueName: \"kubernetes.io/projected/58013dad-1347-4da5-8314-495388d1b5c2-kube-api-access-7thdl\") pod \"console-f9d7485db-bffts\" (UID: \"58013dad-1347-4da5-8314-495388d1b5c2\") " pod="openshift-console/console-f9d7485db-bffts" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.074536 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-s2tz5\" (UID: \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2tz5" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.074553 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9f7cb04a-b39d-4777-b8d5-8c0741134433-webhook-cert\") pod \"packageserver-d55dfcdfc-dwxcs\" (UID: \"9f7cb04a-b39d-4777-b8d5-8c0741134433\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dwxcs" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.074590 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-s2tz5\" (UID: \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-s2tz5" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.074609 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3f44210f-5f93-426b-852f-1fc6f0e4deb7-profile-collector-cert\") pod \"olm-operator-6b444d44fb-dhxrb\" (UID: \"3f44210f-5f93-426b-852f-1fc6f0e4deb7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dhxrb" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.074627 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfkv9\" (UniqueName: \"kubernetes.io/projected/82674f62-f752-4e34-85e4-fc0678f6aca9-kube-api-access-dfkv9\") pod \"dns-default-vplw6\" (UID: \"82674f62-f752-4e34-85e4-fc0678f6aca9\") " pod="openshift-dns/dns-default-vplw6" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.074646 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9739473-727c-4d34-8083-7a5bccb26be6-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2tpnt\" (UID: \"d9739473-727c-4d34-8083-7a5bccb26be6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2tpnt" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.074667 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d283d70b-0dbe-4059-aa3a-f05d029cb3ab-trusted-ca\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.074684 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34f52914-d7ad-4273-870a-d1be6c03b766-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-l22qw\" (UID: \"34f52914-d7ad-4273-870a-d1be6c03b766\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l22qw" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.074712 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbknj\" (UniqueName: \"kubernetes.io/projected/607f4472-6658-48ef-ba52-4b6b097eaa2e-kube-api-access-xbknj\") pod \"collect-profiles-29466090-q6cvw\" (UID: \"607f4472-6658-48ef-ba52-4b6b097eaa2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466090-q6cvw" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.074728 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b26729a1-f6f4-44c6-9d39-b5b5e64104bc-metrics-tls\") pod \"ingress-operator-5b745b69d9-h5zhd\" (UID: \"b26729a1-f6f4-44c6-9d39-b5b5e64104bc\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-h5zhd" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.074746 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbdxc\" (UniqueName: \"kubernetes.io/projected/747dd3ff-596b-48dd-a419-43c73dad5bfb-kube-api-access-nbdxc\") pod \"ingress-canary-hnfjx\" (UID: \"747dd3ff-596b-48dd-a419-43c73dad5bfb\") " pod="openshift-ingress-canary/ingress-canary-hnfjx" Jan 09 13:32:41 crc 
kubenswrapper[4919]: I0109 13:32:41.074765 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-s2tz5\" (UID: \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2tz5" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.074781 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0ea35295-83c1-498b-b190-7dad56fe323b-metrics-certs\") pod \"router-default-5444994796-jx754\" (UID: \"0ea35295-83c1-498b-b190-7dad56fe323b\") " pod="openshift-ingress/router-default-5444994796-jx754" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.074796 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68296f23-034f-4dfd-bb8d-879beafa7ad0-config\") pod \"kube-apiserver-operator-766d6c64bb-nmjx4\" (UID: \"68296f23-034f-4dfd-bb8d-879beafa7ad0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nmjx4" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.074802 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d283d70b-0dbe-4059-aa3a-f05d029cb3ab-bound-sa-token\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.074861 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/68296f23-034f-4dfd-bb8d-879beafa7ad0-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-nmjx4\" (UID: \"68296f23-034f-4dfd-bb8d-879beafa7ad0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nmjx4" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.074894 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kmlt\" (UniqueName: \"kubernetes.io/projected/a42bf9ad-8478-4f7b-93aa-623be932ba47-kube-api-access-5kmlt\") pod \"migrator-59844c95c7-46262\" (UID: \"a42bf9ad-8478-4f7b-93aa-623be932ba47\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-46262" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.074919 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d283d70b-0dbe-4059-aa3a-f05d029cb3ab-installation-pull-secrets\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.075202 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d283d70b-0dbe-4059-aa3a-f05d029cb3ab-ca-trust-extracted\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.075264 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" 
(UniqueName: \"kubernetes.io/configmap/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-s2tz5\" (UID: \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2tz5" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.074922 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/bdd22a6f-64ba-4cc7-9cb0-8e62250a9001-etcd-service-ca\") pod \"etcd-operator-b45778765-t7d9m\" (UID: \"bdd22a6f-64ba-4cc7-9cb0-8e62250a9001\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t7d9m" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.075696 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58013dad-1347-4da5-8314-495388d1b5c2-trusted-ca-bundle\") pod \"console-f9d7485db-bffts\" (UID: \"58013dad-1347-4da5-8314-495388d1b5c2\") " pod="openshift-console/console-f9d7485db-bffts" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.075725 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b26729a1-f6f4-44c6-9d39-b5b5e64104bc-trusted-ca\") pod \"ingress-operator-5b745b69d9-h5zhd\" (UID: \"b26729a1-f6f4-44c6-9d39-b5b5e64104bc\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-h5zhd" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.075771 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/50176685-fd3c-4f19-8a96-ebc4957ca412-certs\") pod \"machine-config-server-nlnnr\" (UID: \"50176685-fd3c-4f19-8a96-ebc4957ca412\") " pod="openshift-machine-config-operator/machine-config-server-nlnnr" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.075819 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8f61ba90-6fa6-4eb4-a496-d05c70940365-proxy-tls\") pod \"machine-config-controller-84d6567774-hqds7\" (UID: \"8f61ba90-6fa6-4eb4-a496-d05c70940365\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hqds7" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.075867 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t68m\" (UniqueName: \"kubernetes.io/projected/5ffc58cf-c5fb-450f-abeb-7fd513919fde-kube-api-access-9t68m\") pod \"service-ca-operator-777779d784-5ntgp\" (UID: \"5ffc58cf-c5fb-450f-abeb-7fd513919fde\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5ntgp" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.075897 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vlcl\" (UniqueName: \"kubernetes.io/projected/50176685-fd3c-4f19-8a96-ebc4957ca412-kube-api-access-8vlcl\") pod \"machine-config-server-nlnnr\" (UID: \"50176685-fd3c-4f19-8a96-ebc4957ca412\") " pod="openshift-machine-config-operator/machine-config-server-nlnnr" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.075921 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/747dd3ff-596b-48dd-a419-43c73dad5bfb-cert\") pod 
\"ingress-canary-hnfjx\" (UID: \"747dd3ff-596b-48dd-a419-43c73dad5bfb\") " pod="openshift-ingress-canary/ingress-canary-hnfjx" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.075948 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9g7l\" (UniqueName: \"kubernetes.io/projected/2797691b-7fdf-450b-a02f-429298cf2a70-kube-api-access-z9g7l\") pod \"catalog-operator-68c6474976-6s6h2\" (UID: \"2797691b-7fdf-450b-a02f-429298cf2a70\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6s6h2" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.075970 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/58013dad-1347-4da5-8314-495388d1b5c2-console-oauth-config\") pod \"console-f9d7485db-bffts\" (UID: \"58013dad-1347-4da5-8314-495388d1b5c2\") " pod="openshift-console/console-f9d7485db-bffts" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.075995 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq6vc\" (UniqueName: \"kubernetes.io/projected/1ab58f80-d33a-4525-8c70-916d566b2521-kube-api-access-fq6vc\") pod \"csi-hostpathplugin-xwk7t\" (UID: \"1ab58f80-d33a-4525-8c70-916d566b2521\") " pod="hostpath-provisioner/csi-hostpathplugin-xwk7t" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.076019 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8f61ba90-6fa6-4eb4-a496-d05c70940365-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-hqds7\" (UID: \"8f61ba90-6fa6-4eb4-a496-d05c70940365\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hqds7" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.076092 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bdd22a6f-64ba-4cc7-9cb0-8e62250a9001-etcd-client\") pod \"etcd-operator-b45778765-t7d9m\" (UID: \"bdd22a6f-64ba-4cc7-9cb0-8e62250a9001\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t7d9m" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.076151 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4rj4\" (UniqueName: \"kubernetes.io/projected/d283d70b-0dbe-4059-aa3a-f05d029cb3ab-kube-api-access-h4rj4\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.076263 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-audit-policies\") pod \"oauth-openshift-558db77b4-s2tz5\" (UID: \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2tz5" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.076324 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/759c9cb1-8b38-429f-84a1-6a1c02619cf7-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-bln95\" (UID: \"759c9cb1-8b38-429f-84a1-6a1c02619cf7\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-bln95" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.076359 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/306b1c8e-d6e2-45e3-8444-5150e5a7d346-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-6h2sh\" (UID: \"306b1c8e-d6e2-45e3-8444-5150e5a7d346\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6h2sh" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.076393 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/1ab58f80-d33a-4525-8c70-916d566b2521-mountpoint-dir\") pod \"csi-hostpathplugin-xwk7t\" (UID: \"1ab58f80-d33a-4525-8c70-916d566b2521\") " pod="hostpath-provisioner/csi-hostpathplugin-xwk7t" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.076466 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-s2tz5\" (UID: \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2tz5" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.076977 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d283d70b-0dbe-4059-aa3a-f05d029cb3ab-trusted-ca\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.077064 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-audit-policies\") pod \"oauth-openshift-558db77b4-s2tz5\" (UID: \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2tz5" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.077732 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d283d70b-0dbe-4059-aa3a-f05d029cb3ab-registry-certificates\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.078278 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6c544528-982d-44c6-bdb9-9fde7a83be80-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-w2b9q\" (UID: \"6c544528-982d-44c6-bdb9-9fde7a83be80\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w2b9q" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.078822 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1969d897-8c50-47f9-90eb-4e9995d3b8d0-machine-approver-tls\") pod \"machine-approver-56656f9798-sd2mk\" (UID: \"1969d897-8c50-47f9-90eb-4e9995d3b8d0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sd2mk" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.079530 4919 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-s2tz5\" (UID: \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2tz5" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.079605 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e5c6a4ca-0f89-4286-b0f3-c67039d9d8a8-proxy-tls\") pod \"machine-config-operator-74547568cd-g2956\" (UID: \"e5c6a4ca-0f89-4286-b0f3-c67039d9d8a8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g2956" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.079770 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-s2tz5\" (UID: \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2tz5" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.079959 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-s2tz5\" (UID: \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2tz5" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.080793 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-s2tz5\" (UID: \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2tz5" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.080877 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/840f8ce4-e7b0-4def-b619-2a4252624256-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-bhv5s\" (UID: \"840f8ce4-e7b0-4def-b619-2a4252624256\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bhv5s" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.080902 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-s2tz5\" (UID: \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2tz5" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.081272 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68296f23-034f-4dfd-bb8d-879beafa7ad0-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-nmjx4\" (UID: \"68296f23-034f-4dfd-bb8d-879beafa7ad0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nmjx4" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.081421 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-s2tz5\" (UID: \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2tz5" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.081867 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6c544528-982d-44c6-bdb9-9fde7a83be80-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-w2b9q\" (UID: \"6c544528-982d-44c6-bdb9-9fde7a83be80\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w2b9q" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.082189 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d283d70b-0dbe-4059-aa3a-f05d029cb3ab-registry-tls\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.082314 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-s2tz5\" (UID: \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2tz5" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.082805 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.085582 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34f52914-d7ad-4273-870a-d1be6c03b766-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-l22qw\" (UID: \"34f52914-d7ad-4273-870a-d1be6c03b766\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l22qw" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.086511 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-s2tz5\" (UID: \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2tz5" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.103316 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.123567 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.143303 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.162763 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.179430 4919 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 13:32:41 crc kubenswrapper[4919]: E0109 13:32:41.179814 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:41.6797802 +0000 UTC m=+141.227619650 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.179886 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9t68m\" (UniqueName: \"kubernetes.io/projected/5ffc58cf-c5fb-450f-abeb-7fd513919fde-kube-api-access-9t68m\") pod \"service-ca-operator-777779d784-5ntgp\" (UID: \"5ffc58cf-c5fb-450f-abeb-7fd513919fde\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5ntgp" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.179927 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vlcl\" (UniqueName: \"kubernetes.io/projected/50176685-fd3c-4f19-8a96-ebc4957ca412-kube-api-access-8vlcl\") pod \"machine-config-server-nlnnr\" (UID: \"50176685-fd3c-4f19-8a96-ebc4957ca412\") " pod="openshift-machine-config-operator/machine-config-server-nlnnr" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.180052 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8f61ba90-6fa6-4eb4-a496-d05c70940365-proxy-tls\") pod \"machine-config-controller-84d6567774-hqds7\" (UID: \"8f61ba90-6fa6-4eb4-a496-d05c70940365\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hqds7" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.180088 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9g7l\" (UniqueName: \"kubernetes.io/projected/2797691b-7fdf-450b-a02f-429298cf2a70-kube-api-access-z9g7l\") pod \"catalog-operator-68c6474976-6s6h2\" (UID: \"2797691b-7fdf-450b-a02f-429298cf2a70\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6s6h2" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.180111 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/58013dad-1347-4da5-8314-495388d1b5c2-console-oauth-config\") pod \"console-f9d7485db-bffts\" (UID: \"58013dad-1347-4da5-8314-495388d1b5c2\") " pod="openshift-console/console-f9d7485db-bffts" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.180132 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/747dd3ff-596b-48dd-a419-43c73dad5bfb-cert\") pod \"ingress-canary-hnfjx\" (UID: 
\"747dd3ff-596b-48dd-a419-43c73dad5bfb\") " pod="openshift-ingress-canary/ingress-canary-hnfjx" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.180151 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bdd22a6f-64ba-4cc7-9cb0-8e62250a9001-etcd-client\") pod \"etcd-operator-b45778765-t7d9m\" (UID: \"bdd22a6f-64ba-4cc7-9cb0-8e62250a9001\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t7d9m" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.180244 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fq6vc\" (UniqueName: \"kubernetes.io/projected/1ab58f80-d33a-4525-8c70-916d566b2521-kube-api-access-fq6vc\") pod \"csi-hostpathplugin-xwk7t\" (UID: \"1ab58f80-d33a-4525-8c70-916d566b2521\") " pod="hostpath-provisioner/csi-hostpathplugin-xwk7t" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.180263 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8f61ba90-6fa6-4eb4-a496-d05c70940365-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-hqds7\" (UID: \"8f61ba90-6fa6-4eb4-a496-d05c70940365\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hqds7" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.180296 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/759c9cb1-8b38-429f-84a1-6a1c02619cf7-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-bln95\" (UID: \"759c9cb1-8b38-429f-84a1-6a1c02619cf7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bln95" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.180328 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/306b1c8e-d6e2-45e3-8444-5150e5a7d346-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-6h2sh\" (UID: \"306b1c8e-d6e2-45e3-8444-5150e5a7d346\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6h2sh" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.180350 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/1ab58f80-d33a-4525-8c70-916d566b2521-mountpoint-dir\") pod \"csi-hostpathplugin-xwk7t\" (UID: \"1ab58f80-d33a-4525-8c70-916d566b2521\") " pod="hostpath-provisioner/csi-hostpathplugin-xwk7t" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.180368 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/607f4472-6658-48ef-ba52-4b6b097eaa2e-secret-volume\") pod \"collect-profiles-29466090-q6cvw\" (UID: \"607f4472-6658-48ef-ba52-4b6b097eaa2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466090-q6cvw" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.180395 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.180418 4919 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d9739473-727c-4d34-8083-7a5bccb26be6-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2tpnt\" (UID: \"d9739473-727c-4d34-8083-7a5bccb26be6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2tpnt" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.180463 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f22jl\" (UniqueName: \"kubernetes.io/projected/759c9cb1-8b38-429f-84a1-6a1c02619cf7-kube-api-access-f22jl\") pod \"authentication-operator-69f744f599-bln95\" (UID: \"759c9cb1-8b38-429f-84a1-6a1c02619cf7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bln95" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.180473 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/1ab58f80-d33a-4525-8c70-916d566b2521-mountpoint-dir\") pod \"csi-hostpathplugin-xwk7t\" (UID: \"1ab58f80-d33a-4525-8c70-916d566b2521\") " pod="hostpath-provisioner/csi-hostpathplugin-xwk7t" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.180488 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4x4q\" (UniqueName: \"kubernetes.io/projected/306b1c8e-d6e2-45e3-8444-5150e5a7d346-kube-api-access-b4x4q\") pod \"multus-admission-controller-857f4d67dd-6h2sh\" (UID: \"306b1c8e-d6e2-45e3-8444-5150e5a7d346\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6h2sh" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.180626 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69e82214-7a8c-4501-afc0-1f7e9d090bcb-config\") pod \"kube-controller-manager-operator-78b949d7b-dpvdp\" (UID: \"69e82214-7a8c-4501-afc0-1f7e9d090bcb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dpvdp" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.180657 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b526dd5-1496-4542-aecb-c908662ef696-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-k4mjm\" (UID: \"7b526dd5-1496-4542-aecb-c908662ef696\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k4mjm" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.180683 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58d19460-8b4f-467d-9bc8-f591dd79992c-serving-cert\") pod \"console-operator-58897d9998-ttlpm\" (UID: \"58d19460-8b4f-467d-9bc8-f591dd79992c\") " pod="openshift-console-operator/console-operator-58897d9998-ttlpm" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.180708 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2797691b-7fdf-450b-a02f-429298cf2a70-srv-cert\") pod \"catalog-operator-68c6474976-6s6h2\" (UID: \"2797691b-7fdf-450b-a02f-429298cf2a70\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6s6h2" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.180731 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-2bvz7\" (UniqueName: \"kubernetes.io/projected/b26729a1-f6f4-44c6-9d39-b5b5e64104bc-kube-api-access-2bvz7\") pod \"ingress-operator-5b745b69d9-h5zhd\" (UID: \"b26729a1-f6f4-44c6-9d39-b5b5e64104bc\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-h5zhd" Jan 09 13:32:41 crc kubenswrapper[4919]: E0109 13:32:41.180759 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:41.680743034 +0000 UTC m=+141.228582484 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.180803 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/58013dad-1347-4da5-8314-495388d1b5c2-oauth-serving-cert\") pod \"console-f9d7485db-bffts\" (UID: \"58013dad-1347-4da5-8314-495388d1b5c2\") " pod="openshift-console/console-f9d7485db-bffts" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.180843 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ea35295-83c1-498b-b190-7dad56fe323b-service-ca-bundle\") pod \"router-default-5444994796-jx754\" (UID: \"0ea35295-83c1-498b-b190-7dad56fe323b\") " pod="openshift-ingress/router-default-5444994796-jx754" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.180863 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c3f993d-59c9-444b-9882-cedb07c01c7a-config\") pod \"route-controller-manager-6576b87f9c-5q4vb\" (UID: \"8c3f993d-59c9-444b-9882-cedb07c01c7a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5q4vb" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.180908 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/0ea35295-83c1-498b-b190-7dad56fe323b-default-certificate\") pod \"router-default-5444994796-jx754\" (UID: \"0ea35295-83c1-498b-b190-7dad56fe323b\") " pod="openshift-ingress/router-default-5444994796-jx754" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.180927 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/58013dad-1347-4da5-8314-495388d1b5c2-service-ca\") pod \"console-f9d7485db-bffts\" (UID: \"58013dad-1347-4da5-8314-495388d1b5c2\") " pod="openshift-console/console-f9d7485db-bffts" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.180953 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b26729a1-f6f4-44c6-9d39-b5b5e64104bc-bound-sa-token\") pod \"ingress-operator-5b745b69d9-h5zhd\" (UID: \"b26729a1-f6f4-44c6-9d39-b5b5e64104bc\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-h5zhd" Jan 09 13:32:41 
crc kubenswrapper[4919]: I0109 13:32:41.180974 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58d19460-8b4f-467d-9bc8-f591dd79992c-config\") pod \"console-operator-58897d9998-ttlpm\" (UID: \"58d19460-8b4f-467d-9bc8-f591dd79992c\") " pod="openshift-console-operator/console-operator-58897d9998-ttlpm" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.180992 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c3f993d-59c9-444b-9882-cedb07c01c7a-serving-cert\") pod \"route-controller-manager-6576b87f9c-5q4vb\" (UID: \"8c3f993d-59c9-444b-9882-cedb07c01c7a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5q4vb" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.181015 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69e82214-7a8c-4501-afc0-1f7e9d090bcb-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-dpvdp\" (UID: \"69e82214-7a8c-4501-afc0-1f7e9d090bcb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dpvdp" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.181043 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/58013dad-1347-4da5-8314-495388d1b5c2-console-serving-cert\") pod \"console-f9d7485db-bffts\" (UID: \"58013dad-1347-4da5-8314-495388d1b5c2\") " pod="openshift-console/console-f9d7485db-bffts" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.181068 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kd6l4\" (UniqueName: \"kubernetes.io/projected/3f44210f-5f93-426b-852f-1fc6f0e4deb7-kube-api-access-kd6l4\") pod \"olm-operator-6b444d44fb-dhxrb\" (UID: \"3f44210f-5f93-426b-852f-1fc6f0e4deb7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dhxrb" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.181307 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfqg9\" (UniqueName: \"kubernetes.io/projected/bdd22a6f-64ba-4cc7-9cb0-8e62250a9001-kube-api-access-cfqg9\") pod \"etcd-operator-b45778765-t7d9m\" (UID: \"bdd22a6f-64ba-4cc7-9cb0-8e62250a9001\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t7d9m" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.181313 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8f61ba90-6fa6-4eb4-a496-d05c70940365-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-hqds7\" (UID: \"8f61ba90-6fa6-4eb4-a496-d05c70940365\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hqds7" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.181328 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tm9v9\" (UniqueName: \"kubernetes.io/projected/8446e162-cf3d-4afd-8dfc-92b5b6d66d64-kube-api-access-tm9v9\") pod \"service-ca-9c57cc56f-cb8tr\" (UID: \"8446e162-cf3d-4afd-8dfc-92b5b6d66d64\") " pod="openshift-service-ca/service-ca-9c57cc56f-cb8tr" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.181395 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/54c29b9f-4240-4edd-98aa-cd053a66000e-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-mv7fj\" (UID: \"54c29b9f-4240-4edd-98aa-cd053a66000e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mv7fj" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.181440 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/9f7cb04a-b39d-4777-b8d5-8c0741134433-tmpfs\") pod \"packageserver-d55dfcdfc-dwxcs\" (UID: \"9f7cb04a-b39d-4777-b8d5-8c0741134433\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dwxcs" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.181465 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2p5xc\" (UniqueName: \"kubernetes.io/projected/7b526dd5-1496-4542-aecb-c908662ef696-kube-api-access-2p5xc\") pod \"kube-storage-version-migrator-operator-b67b599dd-k4mjm\" (UID: \"7b526dd5-1496-4542-aecb-c908662ef696\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k4mjm" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.181489 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73f4afd2-691f-4749-b361-d99c9482a35b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-66425\" (UID: \"73f4afd2-691f-4749-b361-d99c9482a35b\") " pod="openshift-marketplace/marketplace-operator-79b997595-66425" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.181509 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/607f4472-6658-48ef-ba52-4b6b097eaa2e-config-volume\") pod \"collect-profiles-29466090-q6cvw\" (UID: \"607f4472-6658-48ef-ba52-4b6b097eaa2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466090-q6cvw" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.181532 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/759c9cb1-8b38-429f-84a1-6a1c02619cf7-config\") pod \"authentication-operator-69f744f599-bln95\" (UID: \"759c9cb1-8b38-429f-84a1-6a1c02619cf7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bln95" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.181555 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5b27b30e-8a1e-4c12-ad5a-530c640bf23d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-twpss\" (UID: \"5b27b30e-8a1e-4c12-ad5a-530c640bf23d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-twpss" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.181582 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/0ea35295-83c1-498b-b190-7dad56fe323b-stats-auth\") pod \"router-default-5444994796-jx754\" (UID: \"0ea35295-83c1-498b-b190-7dad56fe323b\") " pod="openshift-ingress/router-default-5444994796-jx754" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.181606 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/8446e162-cf3d-4afd-8dfc-92b5b6d66d64-signing-cabundle\") pod \"service-ca-9c57cc56f-cb8tr\" (UID: \"8446e162-cf3d-4afd-8dfc-92b5b6d66d64\") " pod="openshift-service-ca/service-ca-9c57cc56f-cb8tr" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.181631 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7rhq\" (UniqueName: \"kubernetes.io/projected/9f7cb04a-b39d-4777-b8d5-8c0741134433-kube-api-access-n7rhq\") pod \"packageserver-d55dfcdfc-dwxcs\" (UID: \"9f7cb04a-b39d-4777-b8d5-8c0741134433\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dwxcs" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.181653 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/82674f62-f752-4e34-85e4-fc0678f6aca9-config-volume\") pod \"dns-default-vplw6\" (UID: \"82674f62-f752-4e34-85e4-fc0678f6aca9\") " pod="openshift-dns/dns-default-vplw6" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.181676 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzhcb\" (UniqueName: \"kubernetes.io/projected/0ea35295-83c1-498b-b190-7dad56fe323b-kube-api-access-rzhcb\") pod \"router-default-5444994796-jx754\" (UID: \"0ea35295-83c1-498b-b190-7dad56fe323b\") " pod="openshift-ingress/router-default-5444994796-jx754" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.181697 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/1ab58f80-d33a-4525-8c70-916d566b2521-plugins-dir\") pod \"csi-hostpathplugin-xwk7t\" (UID: \"1ab58f80-d33a-4525-8c70-916d566b2521\") " pod="hostpath-provisioner/csi-hostpathplugin-xwk7t" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.181729 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ba7db551-cd6a-4d50-98a5-2d532f893e7a-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9gc57\" (UID: \"ba7db551-cd6a-4d50-98a5-2d532f893e7a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9gc57" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.181750 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/bdd22a6f-64ba-4cc7-9cb0-8e62250a9001-etcd-ca\") pod \"etcd-operator-b45778765-t7d9m\" (UID: \"bdd22a6f-64ba-4cc7-9cb0-8e62250a9001\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t7d9m" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.181782 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/73f4afd2-691f-4749-b361-d99c9482a35b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-66425\" (UID: \"73f4afd2-691f-4749-b361-d99c9482a35b\") " pod="openshift-marketplace/marketplace-operator-79b997595-66425" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.181803 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j77wn\" (UniqueName: \"kubernetes.io/projected/b85eabc9-9f0c-45f7-941f-e329f3022b74-kube-api-access-j77wn\") pod \"dns-operator-744455d44c-2ztc2\" (UID: \"b85eabc9-9f0c-45f7-941f-e329f3022b74\") " pod="openshift-dns-operator/dns-operator-744455d44c-2ztc2" 
Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.181826 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2797691b-7fdf-450b-a02f-429298cf2a70-profile-collector-cert\") pod \"catalog-operator-68c6474976-6s6h2\" (UID: \"2797691b-7fdf-450b-a02f-429298cf2a70\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6s6h2" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.181844 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/9f7cb04a-b39d-4777-b8d5-8c0741134433-tmpfs\") pod \"packageserver-d55dfcdfc-dwxcs\" (UID: \"9f7cb04a-b39d-4777-b8d5-8c0741134433\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dwxcs" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.181851 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4h7gm\" (UniqueName: \"kubernetes.io/projected/73f4afd2-691f-4749-b361-d99c9482a35b-kube-api-access-4h7gm\") pod \"marketplace-operator-79b997595-66425\" (UID: \"73f4afd2-691f-4749-b361-d99c9482a35b\") " pod="openshift-marketplace/marketplace-operator-79b997595-66425" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.181946 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69e82214-7a8c-4501-afc0-1f7e9d090bcb-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-dpvdp\" (UID: \"69e82214-7a8c-4501-afc0-1f7e9d090bcb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dpvdp" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.181978 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzjkr\" (UniqueName: \"kubernetes.io/projected/ba7db551-cd6a-4d50-98a5-2d532f893e7a-kube-api-access-dzjkr\") pod \"package-server-manager-789f6589d5-9gc57\" (UID: \"ba7db551-cd6a-4d50-98a5-2d532f893e7a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9gc57" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.181996 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1ab58f80-d33a-4525-8c70-916d566b2521-registration-dir\") pod \"csi-hostpathplugin-xwk7t\" (UID: \"1ab58f80-d33a-4525-8c70-916d566b2521\") " pod="hostpath-provisioner/csi-hostpathplugin-xwk7t" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.182024 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d56p9\" (UniqueName: \"kubernetes.io/projected/8f61ba90-6fa6-4eb4-a496-d05c70940365-kube-api-access-d56p9\") pod \"machine-config-controller-84d6567774-hqds7\" (UID: \"8f61ba90-6fa6-4eb4-a496-d05c70940365\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hqds7" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.182043 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/759c9cb1-8b38-429f-84a1-6a1c02619cf7-service-ca-bundle\") pod \"authentication-operator-69f744f599-bln95\" (UID: \"759c9cb1-8b38-429f-84a1-6a1c02619cf7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bln95" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.182064 4919 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bdd22a6f-64ba-4cc7-9cb0-8e62250a9001-serving-cert\") pod \"etcd-operator-b45778765-t7d9m\" (UID: \"bdd22a6f-64ba-4cc7-9cb0-8e62250a9001\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t7d9m" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.182087 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9739473-727c-4d34-8083-7a5bccb26be6-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2tpnt\" (UID: \"d9739473-727c-4d34-8083-7a5bccb26be6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2tpnt" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.182107 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxfsx\" (UniqueName: \"kubernetes.io/projected/54c29b9f-4240-4edd-98aa-cd053a66000e-kube-api-access-wxfsx\") pod \"cluster-samples-operator-665b6dd947-mv7fj\" (UID: \"54c29b9f-4240-4edd-98aa-cd053a66000e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mv7fj" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.182124 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8c3f993d-59c9-444b-9882-cedb07c01c7a-client-ca\") pod \"route-controller-manager-6576b87f9c-5q4vb\" (UID: \"8c3f993d-59c9-444b-9882-cedb07c01c7a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5q4vb" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.182153 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/58d19460-8b4f-467d-9bc8-f591dd79992c-trusted-ca\") pod \"console-operator-58897d9998-ttlpm\" (UID: \"58d19460-8b4f-467d-9bc8-f591dd79992c\") " pod="openshift-console-operator/console-operator-58897d9998-ttlpm" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.182179 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3f44210f-5f93-426b-852f-1fc6f0e4deb7-srv-cert\") pod \"olm-operator-6b444d44fb-dhxrb\" (UID: \"3f44210f-5f93-426b-852f-1fc6f0e4deb7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dhxrb" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.182339 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdd22a6f-64ba-4cc7-9cb0-8e62250a9001-config\") pod \"etcd-operator-b45778765-t7d9m\" (UID: \"bdd22a6f-64ba-4cc7-9cb0-8e62250a9001\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t7d9m" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.182355 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58d19460-8b4f-467d-9bc8-f591dd79992c-config\") pod \"console-operator-58897d9998-ttlpm\" (UID: \"58d19460-8b4f-467d-9bc8-f591dd79992c\") " pod="openshift-console-operator/console-operator-58897d9998-ttlpm" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.182372 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/1ab58f80-d33a-4525-8c70-916d566b2521-plugins-dir\") pod \"csi-hostpathplugin-xwk7t\" (UID: 
\"1ab58f80-d33a-4525-8c70-916d566b2521\") " pod="hostpath-provisioner/csi-hostpathplugin-xwk7t" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.182447 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1ab58f80-d33a-4525-8c70-916d566b2521-registration-dir\") pod \"csi-hostpathplugin-xwk7t\" (UID: \"1ab58f80-d33a-4525-8c70-916d566b2521\") " pod="hostpath-provisioner/csi-hostpathplugin-xwk7t" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.182361 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b85eabc9-9f0c-45f7-941f-e329f3022b74-metrics-tls\") pod \"dns-operator-744455d44c-2ztc2\" (UID: \"b85eabc9-9f0c-45f7-941f-e329f3022b74\") " pod="openshift-dns-operator/dns-operator-744455d44c-2ztc2" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.182579 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbblk\" (UniqueName: \"kubernetes.io/projected/5b27b30e-8a1e-4c12-ad5a-530c640bf23d-kube-api-access-xbblk\") pod \"control-plane-machine-set-operator-78cbb6b69f-twpss\" (UID: \"5b27b30e-8a1e-4c12-ad5a-530c640bf23d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-twpss" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.182601 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b526dd5-1496-4542-aecb-c908662ef696-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-k4mjm\" (UID: \"7b526dd5-1496-4542-aecb-c908662ef696\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k4mjm" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.182660 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ffc58cf-c5fb-450f-abeb-7fd513919fde-serving-cert\") pod \"service-ca-operator-777779d784-5ntgp\" (UID: \"5ffc58cf-c5fb-450f-abeb-7fd513919fde\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5ntgp" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.182679 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9f7cb04a-b39d-4777-b8d5-8c0741134433-apiservice-cert\") pod \"packageserver-d55dfcdfc-dwxcs\" (UID: \"9f7cb04a-b39d-4777-b8d5-8c0741134433\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dwxcs" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.182725 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/759c9cb1-8b38-429f-84a1-6a1c02619cf7-serving-cert\") pod \"authentication-operator-69f744f599-bln95\" (UID: \"759c9cb1-8b38-429f-84a1-6a1c02619cf7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bln95" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.182745 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/50176685-fd3c-4f19-8a96-ebc4957ca412-node-bootstrap-token\") pod \"machine-config-server-nlnnr\" (UID: \"50176685-fd3c-4f19-8a96-ebc4957ca412\") " pod="openshift-machine-config-operator/machine-config-server-nlnnr" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 
13:32:41.182801 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/82674f62-f752-4e34-85e4-fc0678f6aca9-metrics-tls\") pod \"dns-default-vplw6\" (UID: \"82674f62-f752-4e34-85e4-fc0678f6aca9\") " pod="openshift-dns/dns-default-vplw6" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.182821 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/8446e162-cf3d-4afd-8dfc-92b5b6d66d64-signing-key\") pod \"service-ca-9c57cc56f-cb8tr\" (UID: \"8446e162-cf3d-4afd-8dfc-92b5b6d66d64\") " pod="openshift-service-ca/service-ca-9c57cc56f-cb8tr" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.182846 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1ab58f80-d33a-4525-8c70-916d566b2521-socket-dir\") pod \"csi-hostpathplugin-xwk7t\" (UID: \"1ab58f80-d33a-4525-8c70-916d566b2521\") " pod="hostpath-provisioner/csi-hostpathplugin-xwk7t" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.182896 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/58013dad-1347-4da5-8314-495388d1b5c2-console-config\") pod \"console-f9d7485db-bffts\" (UID: \"58013dad-1347-4da5-8314-495388d1b5c2\") " pod="openshift-console/console-f9d7485db-bffts" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.182915 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpxgb\" (UniqueName: \"kubernetes.io/projected/8c3f993d-59c9-444b-9882-cedb07c01c7a-kube-api-access-vpxgb\") pod \"route-controller-manager-6576b87f9c-5q4vb\" (UID: \"8c3f993d-59c9-444b-9882-cedb07c01c7a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5q4vb" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.182955 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/1ab58f80-d33a-4525-8c70-916d566b2521-csi-data-dir\") pod \"csi-hostpathplugin-xwk7t\" (UID: \"1ab58f80-d33a-4525-8c70-916d566b2521\") " pod="hostpath-provisioner/csi-hostpathplugin-xwk7t" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.182976 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xd2k9\" (UniqueName: \"kubernetes.io/projected/58d19460-8b4f-467d-9bc8-f591dd79992c-kube-api-access-xd2k9\") pod \"console-operator-58897d9998-ttlpm\" (UID: \"58d19460-8b4f-467d-9bc8-f591dd79992c\") " pod="openshift-console-operator/console-operator-58897d9998-ttlpm" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.182992 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9f7cb04a-b39d-4777-b8d5-8c0741134433-webhook-cert\") pod \"packageserver-d55dfcdfc-dwxcs\" (UID: \"9f7cb04a-b39d-4777-b8d5-8c0741134433\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dwxcs" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.183006 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ffc58cf-c5fb-450f-abeb-7fd513919fde-config\") pod \"service-ca-operator-777779d784-5ntgp\" (UID: \"5ffc58cf-c5fb-450f-abeb-7fd513919fde\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-5ntgp" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.183005 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/bdd22a6f-64ba-4cc7-9cb0-8e62250a9001-etcd-ca\") pod \"etcd-operator-b45778765-t7d9m\" (UID: \"bdd22a6f-64ba-4cc7-9cb0-8e62250a9001\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t7d9m" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.183050 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7thdl\" (UniqueName: \"kubernetes.io/projected/58013dad-1347-4da5-8314-495388d1b5c2-kube-api-access-7thdl\") pod \"console-f9d7485db-bffts\" (UID: \"58013dad-1347-4da5-8314-495388d1b5c2\") " pod="openshift-console/console-f9d7485db-bffts" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.183079 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9739473-727c-4d34-8083-7a5bccb26be6-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2tpnt\" (UID: \"d9739473-727c-4d34-8083-7a5bccb26be6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2tpnt" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.183106 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73f4afd2-691f-4749-b361-d99c9482a35b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-66425\" (UID: \"73f4afd2-691f-4749-b361-d99c9482a35b\") " pod="openshift-marketplace/marketplace-operator-79b997595-66425" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.183117 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3f44210f-5f93-426b-852f-1fc6f0e4deb7-profile-collector-cert\") pod \"olm-operator-6b444d44fb-dhxrb\" (UID: \"3f44210f-5f93-426b-852f-1fc6f0e4deb7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dhxrb" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.183147 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfkv9\" (UniqueName: \"kubernetes.io/projected/82674f62-f752-4e34-85e4-fc0678f6aca9-kube-api-access-dfkv9\") pod \"dns-default-vplw6\" (UID: \"82674f62-f752-4e34-85e4-fc0678f6aca9\") " pod="openshift-dns/dns-default-vplw6" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.183181 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b26729a1-f6f4-44c6-9d39-b5b5e64104bc-metrics-tls\") pod \"ingress-operator-5b745b69d9-h5zhd\" (UID: \"b26729a1-f6f4-44c6-9d39-b5b5e64104bc\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-h5zhd" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.183198 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbdxc\" (UniqueName: \"kubernetes.io/projected/747dd3ff-596b-48dd-a419-43c73dad5bfb-kube-api-access-nbdxc\") pod \"ingress-canary-hnfjx\" (UID: \"747dd3ff-596b-48dd-a419-43c73dad5bfb\") " pod="openshift-ingress-canary/ingress-canary-hnfjx" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.183231 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbknj\" (UniqueName: 
\"kubernetes.io/projected/607f4472-6658-48ef-ba52-4b6b097eaa2e-kube-api-access-xbknj\") pod \"collect-profiles-29466090-q6cvw\" (UID: \"607f4472-6658-48ef-ba52-4b6b097eaa2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466090-q6cvw" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.183257 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0ea35295-83c1-498b-b190-7dad56fe323b-metrics-certs\") pod \"router-default-5444994796-jx754\" (UID: \"0ea35295-83c1-498b-b190-7dad56fe323b\") " pod="openshift-ingress/router-default-5444994796-jx754" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.183283 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kmlt\" (UniqueName: \"kubernetes.io/projected/a42bf9ad-8478-4f7b-93aa-623be932ba47-kube-api-access-5kmlt\") pod \"migrator-59844c95c7-46262\" (UID: \"a42bf9ad-8478-4f7b-93aa-623be932ba47\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-46262" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.183301 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/bdd22a6f-64ba-4cc7-9cb0-8e62250a9001-etcd-service-ca\") pod \"etcd-operator-b45778765-t7d9m\" (UID: \"bdd22a6f-64ba-4cc7-9cb0-8e62250a9001\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t7d9m" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.183319 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b26729a1-f6f4-44c6-9d39-b5b5e64104bc-trusted-ca\") pod \"ingress-operator-5b745b69d9-h5zhd\" (UID: \"b26729a1-f6f4-44c6-9d39-b5b5e64104bc\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-h5zhd" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.183335 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/50176685-fd3c-4f19-8a96-ebc4957ca412-certs\") pod \"machine-config-server-nlnnr\" (UID: \"50176685-fd3c-4f19-8a96-ebc4957ca412\") " pod="openshift-machine-config-operator/machine-config-server-nlnnr" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.183352 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58013dad-1347-4da5-8314-495388d1b5c2-trusted-ca-bundle\") pod \"console-f9d7485db-bffts\" (UID: \"58013dad-1347-4da5-8314-495388d1b5c2\") " pod="openshift-console/console-f9d7485db-bffts" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.183965 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.184307 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1ab58f80-d33a-4525-8c70-916d566b2521-socket-dir\") pod \"csi-hostpathplugin-xwk7t\" (UID: \"1ab58f80-d33a-4525-8c70-916d566b2521\") " pod="hostpath-provisioner/csi-hostpathplugin-xwk7t" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.184893 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b85eabc9-9f0c-45f7-941f-e329f3022b74-metrics-tls\") pod \"dns-operator-744455d44c-2ztc2\" (UID: 
\"b85eabc9-9f0c-45f7-941f-e329f3022b74\") " pod="openshift-dns-operator/dns-operator-744455d44c-2ztc2" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.185065 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58d19460-8b4f-467d-9bc8-f591dd79992c-serving-cert\") pod \"console-operator-58897d9998-ttlpm\" (UID: \"58d19460-8b4f-467d-9bc8-f591dd79992c\") " pod="openshift-console-operator/console-operator-58897d9998-ttlpm" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.185151 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/1ab58f80-d33a-4525-8c70-916d566b2521-csi-data-dir\") pod \"csi-hostpathplugin-xwk7t\" (UID: \"1ab58f80-d33a-4525-8c70-916d566b2521\") " pod="hostpath-provisioner/csi-hostpathplugin-xwk7t" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.185680 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/bdd22a6f-64ba-4cc7-9cb0-8e62250a9001-etcd-service-ca\") pod \"etcd-operator-b45778765-t7d9m\" (UID: \"bdd22a6f-64ba-4cc7-9cb0-8e62250a9001\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t7d9m" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.185930 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b26729a1-f6f4-44c6-9d39-b5b5e64104bc-trusted-ca\") pod \"ingress-operator-5b745b69d9-h5zhd\" (UID: \"b26729a1-f6f4-44c6-9d39-b5b5e64104bc\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-h5zhd" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.186175 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9739473-727c-4d34-8083-7a5bccb26be6-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2tpnt\" (UID: \"d9739473-727c-4d34-8083-7a5bccb26be6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2tpnt" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.186246 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/58d19460-8b4f-467d-9bc8-f591dd79992c-trusted-ca\") pod \"console-operator-58897d9998-ttlpm\" (UID: \"58d19460-8b4f-467d-9bc8-f591dd79992c\") " pod="openshift-console-operator/console-operator-58897d9998-ttlpm" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.186367 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2797691b-7fdf-450b-a02f-429298cf2a70-profile-collector-cert\") pod \"catalog-operator-68c6474976-6s6h2\" (UID: \"2797691b-7fdf-450b-a02f-429298cf2a70\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6s6h2" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.186528 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/607f4472-6658-48ef-ba52-4b6b097eaa2e-secret-volume\") pod \"collect-profiles-29466090-q6cvw\" (UID: \"607f4472-6658-48ef-ba52-4b6b097eaa2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466090-q6cvw" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.186654 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/2797691b-7fdf-450b-a02f-429298cf2a70-srv-cert\") pod \"catalog-operator-68c6474976-6s6h2\" (UID: \"2797691b-7fdf-450b-a02f-429298cf2a70\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6s6h2" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.186667 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bdd22a6f-64ba-4cc7-9cb0-8e62250a9001-serving-cert\") pod \"etcd-operator-b45778765-t7d9m\" (UID: \"bdd22a6f-64ba-4cc7-9cb0-8e62250a9001\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t7d9m" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.186719 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdd22a6f-64ba-4cc7-9cb0-8e62250a9001-config\") pod \"etcd-operator-b45778765-t7d9m\" (UID: \"bdd22a6f-64ba-4cc7-9cb0-8e62250a9001\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t7d9m" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.187090 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8f61ba90-6fa6-4eb4-a496-d05c70940365-proxy-tls\") pod \"machine-config-controller-84d6567774-hqds7\" (UID: \"8f61ba90-6fa6-4eb4-a496-d05c70940365\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hqds7" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.187534 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9739473-727c-4d34-8083-7a5bccb26be6-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2tpnt\" (UID: \"d9739473-727c-4d34-8083-7a5bccb26be6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2tpnt" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.188648 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bdd22a6f-64ba-4cc7-9cb0-8e62250a9001-etcd-client\") pod \"etcd-operator-b45778765-t7d9m\" (UID: \"bdd22a6f-64ba-4cc7-9cb0-8e62250a9001\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t7d9m" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.188816 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/73f4afd2-691f-4749-b361-d99c9482a35b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-66425\" (UID: \"73f4afd2-691f-4749-b361-d99c9482a35b\") " pod="openshift-marketplace/marketplace-operator-79b997595-66425" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.188895 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3f44210f-5f93-426b-852f-1fc6f0e4deb7-profile-collector-cert\") pod \"olm-operator-6b444d44fb-dhxrb\" (UID: \"3f44210f-5f93-426b-852f-1fc6f0e4deb7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dhxrb" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.189569 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b26729a1-f6f4-44c6-9d39-b5b5e64104bc-metrics-tls\") pod \"ingress-operator-5b745b69d9-h5zhd\" (UID: \"b26729a1-f6f4-44c6-9d39-b5b5e64104bc\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-h5zhd" Jan 09 13:32:41 crc 
kubenswrapper[4919]: I0109 13:32:41.202499 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.208044 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b526dd5-1496-4542-aecb-c908662ef696-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-k4mjm\" (UID: \"7b526dd5-1496-4542-aecb-c908662ef696\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k4mjm" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.223227 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.242478 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.252283 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b526dd5-1496-4542-aecb-c908662ef696-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-k4mjm\" (UID: \"7b526dd5-1496-4542-aecb-c908662ef696\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k4mjm" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.263439 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.283387 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.283837 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 13:32:41 crc kubenswrapper[4919]: E0109 13:32:41.283958 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:41.783936658 +0000 UTC m=+141.331776118 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.284262 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:41 crc kubenswrapper[4919]: E0109 13:32:41.284630 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:41.784619484 +0000 UTC m=+141.332458944 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.291382 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69e82214-7a8c-4501-afc0-1f7e9d090bcb-config\") pod \"kube-controller-manager-operator-78b949d7b-dpvdp\" (UID: \"69e82214-7a8c-4501-afc0-1f7e9d090bcb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dpvdp" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.303392 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.323083 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.336602 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69e82214-7a8c-4501-afc0-1f7e9d090bcb-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-dpvdp\" (UID: \"69e82214-7a8c-4501-afc0-1f7e9d090bcb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dpvdp" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.343980 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.362900 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.377371 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"stats-auth\" (UniqueName: \"kubernetes.io/secret/0ea35295-83c1-498b-b190-7dad56fe323b-stats-auth\") pod \"router-default-5444994796-jx754\" (UID: \"0ea35295-83c1-498b-b190-7dad56fe323b\") " pod="openshift-ingress/router-default-5444994796-jx754" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.384383 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.385926 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 13:32:41 crc kubenswrapper[4919]: E0109 13:32:41.386117 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:41.886090496 +0000 UTC m=+141.433929986 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.387198 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:41 crc kubenswrapper[4919]: E0109 13:32:41.387708 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:41.887693925 +0000 UTC m=+141.435533415 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.395605 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/0ea35295-83c1-498b-b190-7dad56fe323b-default-certificate\") pod \"router-default-5444994796-jx754\" (UID: \"0ea35295-83c1-498b-b190-7dad56fe323b\") " pod="openshift-ingress/router-default-5444994796-jx754" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.402748 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.424002 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.442835 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.450794 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0ea35295-83c1-498b-b190-7dad56fe323b-metrics-certs\") pod \"router-default-5444994796-jx754\" (UID: \"0ea35295-83c1-498b-b190-7dad56fe323b\") " pod="openshift-ingress/router-default-5444994796-jx754" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.463137 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.482438 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.487883 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 13:32:41 crc kubenswrapper[4919]: E0109 13:32:41.488029 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:41.988009869 +0000 UTC m=+141.535849319 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.488797 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:41 crc kubenswrapper[4919]: E0109 13:32:41.489114 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:41.989107356 +0000 UTC m=+141.536946796 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.492289 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ea35295-83c1-498b-b190-7dad56fe323b-service-ca-bundle\") pod \"router-default-5444994796-jx754\" (UID: \"0ea35295-83c1-498b-b190-7dad56fe323b\") " pod="openshift-ingress/router-default-5444994796-jx754" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.503384 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.508373 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9f7cb04a-b39d-4777-b8d5-8c0741134433-webhook-cert\") pod \"packageserver-d55dfcdfc-dwxcs\" (UID: \"9f7cb04a-b39d-4777-b8d5-8c0741134433\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dwxcs" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.509619 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9f7cb04a-b39d-4777-b8d5-8c0741134433-apiservice-cert\") pod \"packageserver-d55dfcdfc-dwxcs\" (UID: \"9f7cb04a-b39d-4777-b8d5-8c0741134433\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dwxcs" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.522589 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.549647 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 09 13:32:41 
crc kubenswrapper[4919]: I0109 13:32:41.562671 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.574827 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/54c29b9f-4240-4edd-98aa-cd053a66000e-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-mv7fj\" (UID: \"54c29b9f-4240-4edd-98aa-cd053a66000e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mv7fj" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.583525 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.590382 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 13:32:41 crc kubenswrapper[4919]: E0109 13:32:41.590586 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:42.090563268 +0000 UTC m=+141.638402718 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.590985 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:41 crc kubenswrapper[4919]: E0109 13:32:41.591521 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:42.091495941 +0000 UTC m=+141.639335431 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.602344 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.612323 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/607f4472-6658-48ef-ba52-4b6b097eaa2e-config-volume\") pod \"collect-profiles-29466090-q6cvw\" (UID: \"607f4472-6658-48ef-ba52-4b6b097eaa2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466090-q6cvw" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.623027 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.643227 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.663237 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.682960 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.690182 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/8446e162-cf3d-4afd-8dfc-92b5b6d66d64-signing-key\") pod \"service-ca-9c57cc56f-cb8tr\" (UID: \"8446e162-cf3d-4afd-8dfc-92b5b6d66d64\") " pod="openshift-service-ca/service-ca-9c57cc56f-cb8tr" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.692662 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 13:32:41 crc kubenswrapper[4919]: E0109 13:32:41.693039 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:42.193012424 +0000 UTC m=+141.740851874 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.693563 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:41 crc kubenswrapper[4919]: E0109 13:32:41.693901 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:42.193888275 +0000 UTC m=+141.741727726 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.703248 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.713089 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/8446e162-cf3d-4afd-8dfc-92b5b6d66d64-signing-cabundle\") pod \"service-ca-9c57cc56f-cb8tr\" (UID: \"8446e162-cf3d-4afd-8dfc-92b5b6d66d64\") " pod="openshift-service-ca/service-ca-9c57cc56f-cb8tr" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.722782 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.743340 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.761971 4919 request.go:700] Waited for 1.012316795s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpackage-server-manager-serving-cert&limit=500&resourceVersion=0 Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.763493 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.778335 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ba7db551-cd6a-4d50-98a5-2d532f893e7a-package-server-manager-serving-cert\") pod 
\"package-server-manager-789f6589d5-9gc57\" (UID: \"ba7db551-cd6a-4d50-98a5-2d532f893e7a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9gc57" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.782971 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.794845 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 13:32:41 crc kubenswrapper[4919]: E0109 13:32:41.795074 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:42.29504567 +0000 UTC m=+141.842885120 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.795770 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:41 crc kubenswrapper[4919]: E0109 13:32:41.796141 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:42.296126527 +0000 UTC m=+141.843965977 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.803202 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.809616 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ffc58cf-c5fb-450f-abeb-7fd513919fde-serving-cert\") pod \"service-ca-operator-777779d784-5ntgp\" (UID: \"5ffc58cf-c5fb-450f-abeb-7fd513919fde\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5ntgp" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.823358 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.843604 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.845909 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ffc58cf-c5fb-450f-abeb-7fd513919fde-config\") pod \"service-ca-operator-777779d784-5ntgp\" (UID: \"5ffc58cf-c5fb-450f-abeb-7fd513919fde\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5ntgp" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.863632 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.877707 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5b27b30e-8a1e-4c12-ad5a-530c640bf23d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-twpss\" (UID: \"5b27b30e-8a1e-4c12-ad5a-530c640bf23d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-twpss" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.883710 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.896424 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 13:32:41 crc kubenswrapper[4919]: E0109 13:32:41.896615 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:42.396589514 +0000 UTC m=+141.944428974 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.896687 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:41 crc kubenswrapper[4919]: E0109 13:32:41.897341 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:42.397318982 +0000 UTC m=+141.945158442 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.917293 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mx8x6\" (UniqueName: \"kubernetes.io/projected/48075d37-56ec-4015-a38a-94068ad47148-kube-api-access-mx8x6\") pod \"controller-manager-879f6c89f-ph5g6\" (UID: \"48075d37-56ec-4015-a38a-94068ad47148\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ph5g6" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.955065 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dppzc\" (UniqueName: \"kubernetes.io/projected/73189faa-e786-4c46-b23e-c9e58d6b0490-kube-api-access-dppzc\") pod \"machine-api-operator-5694c8668f-7lrzs\" (UID: \"73189faa-e786-4c46-b23e-c9e58d6b0490\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7lrzs" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.978062 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nskgv\" (UniqueName: \"kubernetes.io/projected/4ff1b886-d642-44c3-ba90-b1b4cb1379dd-kube-api-access-nskgv\") pod \"apiserver-7bbb656c7d-4dsc8\" (UID: \"4ff1b886-d642-44c3-ba90-b1b4cb1379dd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dsc8" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.982665 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjnc4\" (UniqueName: \"kubernetes.io/projected/e5975f85-ddfb-4c96-bdc8-da5b3541a769-kube-api-access-xjnc4\") pod \"apiserver-76f77b778f-r8h48\" (UID: \"e5975f85-ddfb-4c96-bdc8-da5b3541a769\") " pod="openshift-apiserver/apiserver-76f77b778f-r8h48" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.995384 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65sq5\" (UniqueName: 
\"kubernetes.io/projected/856b0cf5-5731-4842-be1e-b25bb6426674-kube-api-access-65sq5\") pod \"openshift-config-operator-7777fb866f-vb6hf\" (UID: \"856b0cf5-5731-4842-be1e-b25bb6426674\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vb6hf" Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.997883 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 13:32:41 crc kubenswrapper[4919]: E0109 13:32:41.998099 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:42.498070957 +0000 UTC m=+142.045910427 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:41 crc kubenswrapper[4919]: I0109 13:32:41.998200 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:41 crc kubenswrapper[4919]: E0109 13:32:41.998739 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:42.498715463 +0000 UTC m=+142.046554953 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.003483 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.014295 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8c3f993d-59c9-444b-9882-cedb07c01c7a-client-ca\") pod \"route-controller-manager-6576b87f9c-5q4vb\" (UID: \"8c3f993d-59c9-444b-9882-cedb07c01c7a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5q4vb" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.023027 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.040985 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vb6hf" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.042171 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.051601 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-7lrzs" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.054420 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c3f993d-59c9-444b-9882-cedb07c01c7a-serving-cert\") pod \"route-controller-manager-6576b87f9c-5q4vb\" (UID: \"8c3f993d-59c9-444b-9882-cedb07c01c7a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5q4vb" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.059823 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-ph5g6" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.063835 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.071992 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dsc8" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.078113 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-r8h48" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.083486 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.098950 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 13:32:42 crc kubenswrapper[4919]: E0109 13:32:42.099057 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:42.599039897 +0000 UTC m=+142.146879347 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.099190 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:42 crc kubenswrapper[4919]: E0109 13:32:42.099489 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:42.599479808 +0000 UTC m=+142.147319258 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.104126 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.113773 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c3f993d-59c9-444b-9882-cedb07c01c7a-config\") pod \"route-controller-manager-6576b87f9c-5q4vb\" (UID: \"8c3f993d-59c9-444b-9882-cedb07c01c7a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5q4vb" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.123970 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.135349 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/306b1c8e-d6e2-45e3-8444-5150e5a7d346-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-6h2sh\" (UID: \"306b1c8e-d6e2-45e3-8444-5150e5a7d346\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6h2sh" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.142980 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.150270 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3f44210f-5f93-426b-852f-1fc6f0e4deb7-srv-cert\") pod \"olm-operator-6b444d44fb-dhxrb\" (UID: \"3f44210f-5f93-426b-852f-1fc6f0e4deb7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dhxrb" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.163483 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 09 13:32:42 crc kubenswrapper[4919]: E0109 13:32:42.180986 4919 secret.go:188] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 09 13:32:42 crc kubenswrapper[4919]: E0109 13:32:42.181067 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/747dd3ff-596b-48dd-a419-43c73dad5bfb-cert podName:747dd3ff-596b-48dd-a419-43c73dad5bfb nodeName:}" failed. No retries permitted until 2026-01-09 13:32:42.681050167 +0000 UTC m=+142.228889617 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/747dd3ff-596b-48dd-a419-43c73dad5bfb-cert") pod "ingress-canary-hnfjx" (UID: "747dd3ff-596b-48dd-a419-43c73dad5bfb") : failed to sync secret cache: timed out waiting for the condition Jan 09 13:32:42 crc kubenswrapper[4919]: E0109 13:32:42.181102 4919 configmap.go:193] Couldn't get configMap openshift-console/oauth-serving-cert: failed to sync configmap cache: timed out waiting for the condition Jan 09 13:32:42 crc kubenswrapper[4919]: E0109 13:32:42.181137 4919 configmap.go:193] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jan 09 13:32:42 crc kubenswrapper[4919]: E0109 13:32:42.181202 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/58013dad-1347-4da5-8314-495388d1b5c2-oauth-serving-cert podName:58013dad-1347-4da5-8314-495388d1b5c2 nodeName:}" failed. No retries permitted until 2026-01-09 13:32:42.68117083 +0000 UTC m=+142.229010310 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/58013dad-1347-4da5-8314-495388d1b5c2-oauth-serving-cert") pod "console-f9d7485db-bffts" (UID: "58013dad-1347-4da5-8314-495388d1b5c2") : failed to sync configmap cache: timed out waiting for the condition Jan 09 13:32:42 crc kubenswrapper[4919]: E0109 13:32:42.181201 4919 configmap.go:193] Couldn't get configMap openshift-console/service-ca: failed to sync configmap cache: timed out waiting for the condition Jan 09 13:32:42 crc kubenswrapper[4919]: E0109 13:32:42.181276 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/759c9cb1-8b38-429f-84a1-6a1c02619cf7-trusted-ca-bundle podName:759c9cb1-8b38-429f-84a1-6a1c02619cf7 nodeName:}" failed. No retries permitted until 2026-01-09 13:32:42.681255252 +0000 UTC m=+142.229094732 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/759c9cb1-8b38-429f-84a1-6a1c02619cf7-trusted-ca-bundle") pod "authentication-operator-69f744f599-bln95" (UID: "759c9cb1-8b38-429f-84a1-6a1c02619cf7") : failed to sync configmap cache: timed out waiting for the condition Jan 09 13:32:42 crc kubenswrapper[4919]: E0109 13:32:42.181237 4919 secret.go:188] Couldn't get secret openshift-console/console-oauth-config: failed to sync secret cache: timed out waiting for the condition Jan 09 13:32:42 crc kubenswrapper[4919]: E0109 13:32:42.181319 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/58013dad-1347-4da5-8314-495388d1b5c2-service-ca podName:58013dad-1347-4da5-8314-495388d1b5c2 nodeName:}" failed. No retries permitted until 2026-01-09 13:32:42.681300614 +0000 UTC m=+142.229140104 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/58013dad-1347-4da5-8314-495388d1b5c2-service-ca") pod "console-f9d7485db-bffts" (UID: "58013dad-1347-4da5-8314-495388d1b5c2") : failed to sync configmap cache: timed out waiting for the condition Jan 09 13:32:42 crc kubenswrapper[4919]: E0109 13:32:42.181384 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58013dad-1347-4da5-8314-495388d1b5c2-console-oauth-config podName:58013dad-1347-4da5-8314-495388d1b5c2 nodeName:}" failed. 
No retries permitted until 2026-01-09 13:32:42.681372515 +0000 UTC m=+142.229211965 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/58013dad-1347-4da5-8314-495388d1b5c2-console-oauth-config") pod "console-f9d7485db-bffts" (UID: "58013dad-1347-4da5-8314-495388d1b5c2") : failed to sync secret cache: timed out waiting for the condition Jan 09 13:32:42 crc kubenswrapper[4919]: E0109 13:32:42.182409 4919 secret.go:188] Couldn't get secret openshift-console/console-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 09 13:32:42 crc kubenswrapper[4919]: E0109 13:32:42.182478 4919 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: failed to sync configmap cache: timed out waiting for the condition Jan 09 13:32:42 crc kubenswrapper[4919]: E0109 13:32:42.182510 4919 configmap.go:193] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 09 13:32:42 crc kubenswrapper[4919]: E0109 13:32:42.182514 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58013dad-1347-4da5-8314-495388d1b5c2-console-serving-cert podName:58013dad-1347-4da5-8314-495388d1b5c2 nodeName:}" failed. No retries permitted until 2026-01-09 13:32:42.682488232 +0000 UTC m=+142.230327852 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/58013dad-1347-4da5-8314-495388d1b5c2-console-serving-cert") pod "console-f9d7485db-bffts" (UID: "58013dad-1347-4da5-8314-495388d1b5c2") : failed to sync secret cache: timed out waiting for the condition Jan 09 13:32:42 crc kubenswrapper[4919]: E0109 13:32:42.182559 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/759c9cb1-8b38-429f-84a1-6a1c02619cf7-config podName:759c9cb1-8b38-429f-84a1-6a1c02619cf7 nodeName:}" failed. No retries permitted until 2026-01-09 13:32:42.682551474 +0000 UTC m=+142.230390924 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/759c9cb1-8b38-429f-84a1-6a1c02619cf7-config") pod "authentication-operator-69f744f599-bln95" (UID: "759c9cb1-8b38-429f-84a1-6a1c02619cf7") : failed to sync configmap cache: timed out waiting for the condition Jan 09 13:32:42 crc kubenswrapper[4919]: E0109 13:32:42.182577 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/82674f62-f752-4e34-85e4-fc0678f6aca9-config-volume podName:82674f62-f752-4e34-85e4-fc0678f6aca9 nodeName:}" failed. No retries permitted until 2026-01-09 13:32:42.682568034 +0000 UTC m=+142.230407484 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/82674f62-f752-4e34-85e4-fc0678f6aca9-config-volume") pod "dns-default-vplw6" (UID: "82674f62-f752-4e34-85e4-fc0678f6aca9") : failed to sync configmap cache: timed out waiting for the condition Jan 09 13:32:42 crc kubenswrapper[4919]: E0109 13:32:42.182606 4919 configmap.go:193] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jan 09 13:32:42 crc kubenswrapper[4919]: E0109 13:32:42.182676 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/759c9cb1-8b38-429f-84a1-6a1c02619cf7-service-ca-bundle podName:759c9cb1-8b38-429f-84a1-6a1c02619cf7 nodeName:}" failed. No retries permitted until 2026-01-09 13:32:42.682659757 +0000 UTC m=+142.230499237 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/759c9cb1-8b38-429f-84a1-6a1c02619cf7-service-ca-bundle") pod "authentication-operator-69f744f599-bln95" (UID: "759c9cb1-8b38-429f-84a1-6a1c02619cf7") : failed to sync configmap cache: timed out waiting for the condition Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.183438 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 09 13:32:42 crc kubenswrapper[4919]: E0109 13:32:42.183594 4919 configmap.go:193] Couldn't get configMap openshift-console/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jan 09 13:32:42 crc kubenswrapper[4919]: E0109 13:32:42.183628 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/58013dad-1347-4da5-8314-495388d1b5c2-trusted-ca-bundle podName:58013dad-1347-4da5-8314-495388d1b5c2 nodeName:}" failed. No retries permitted until 2026-01-09 13:32:42.68362112 +0000 UTC m=+142.231460570 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/58013dad-1347-4da5-8314-495388d1b5c2-trusted-ca-bundle") pod "console-f9d7485db-bffts" (UID: "58013dad-1347-4da5-8314-495388d1b5c2") : failed to sync configmap cache: timed out waiting for the condition Jan 09 13:32:42 crc kubenswrapper[4919]: E0109 13:32:42.185142 4919 secret.go:188] Couldn't get secret openshift-authentication-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 09 13:32:42 crc kubenswrapper[4919]: E0109 13:32:42.185164 4919 secret.go:188] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition Jan 09 13:32:42 crc kubenswrapper[4919]: E0109 13:32:42.185194 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/759c9cb1-8b38-429f-84a1-6a1c02619cf7-serving-cert podName:759c9cb1-8b38-429f-84a1-6a1c02619cf7 nodeName:}" failed. No retries permitted until 2026-01-09 13:32:42.685185698 +0000 UTC m=+142.233025148 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/759c9cb1-8b38-429f-84a1-6a1c02619cf7-serving-cert") pod "authentication-operator-69f744f599-bln95" (UID: "759c9cb1-8b38-429f-84a1-6a1c02619cf7") : failed to sync secret cache: timed out waiting for the condition Jan 09 13:32:42 crc kubenswrapper[4919]: E0109 13:32:42.185142 4919 secret.go:188] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition Jan 09 13:32:42 crc kubenswrapper[4919]: E0109 13:32:42.185252 4919 configmap.go:193] Couldn't get configMap openshift-console/console-config: failed to sync configmap cache: timed out waiting for the condition Jan 09 13:32:42 crc kubenswrapper[4919]: E0109 13:32:42.185273 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/50176685-fd3c-4f19-8a96-ebc4957ca412-node-bootstrap-token podName:50176685-fd3c-4f19-8a96-ebc4957ca412 nodeName:}" failed. No retries permitted until 2026-01-09 13:32:42.685251899 +0000 UTC m=+142.233091379 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/50176685-fd3c-4f19-8a96-ebc4957ca412-node-bootstrap-token") pod "machine-config-server-nlnnr" (UID: "50176685-fd3c-4f19-8a96-ebc4957ca412") : failed to sync secret cache: timed out waiting for the condition Jan 09 13:32:42 crc kubenswrapper[4919]: E0109 13:32:42.185308 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82674f62-f752-4e34-85e4-fc0678f6aca9-metrics-tls podName:82674f62-f752-4e34-85e4-fc0678f6aca9 nodeName:}" failed. No retries permitted until 2026-01-09 13:32:42.6852935 +0000 UTC m=+142.233132990 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/82674f62-f752-4e34-85e4-fc0678f6aca9-metrics-tls") pod "dns-default-vplw6" (UID: "82674f62-f752-4e34-85e4-fc0678f6aca9") : failed to sync secret cache: timed out waiting for the condition Jan 09 13:32:42 crc kubenswrapper[4919]: E0109 13:32:42.185373 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/58013dad-1347-4da5-8314-495388d1b5c2-console-config podName:58013dad-1347-4da5-8314-495388d1b5c2 nodeName:}" failed. No retries permitted until 2026-01-09 13:32:42.685345752 +0000 UTC m=+142.233185262 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/58013dad-1347-4da5-8314-495388d1b5c2-console-config") pod "console-f9d7485db-bffts" (UID: "58013dad-1347-4da5-8314-495388d1b5c2") : failed to sync configmap cache: timed out waiting for the condition Jan 09 13:32:42 crc kubenswrapper[4919]: E0109 13:32:42.185690 4919 secret.go:188] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition Jan 09 13:32:42 crc kubenswrapper[4919]: E0109 13:32:42.185725 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/50176685-fd3c-4f19-8a96-ebc4957ca412-certs podName:50176685-fd3c-4f19-8a96-ebc4957ca412 nodeName:}" failed. No retries permitted until 2026-01-09 13:32:42.685714281 +0000 UTC m=+142.233553731 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/50176685-fd3c-4f19-8a96-ebc4957ca412-certs") pod "machine-config-server-nlnnr" (UID: "50176685-fd3c-4f19-8a96-ebc4957ca412") : failed to sync secret cache: timed out waiting for the condition Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.200402 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 13:32:42 crc kubenswrapper[4919]: E0109 13:32:42.200627 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:42.700613472 +0000 UTC m=+142.248452932 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.200943 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:42 crc kubenswrapper[4919]: E0109 13:32:42.201383 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:42.701371491 +0000 UTC m=+142.249210951 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.203415 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.233427 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.242799 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.262918 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.283638 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.302464 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 13:32:42 crc kubenswrapper[4919]: E0109 13:32:42.302671 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:42.802636658 +0000 UTC m=+142.350476148 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.303006 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.303392 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:42 crc kubenswrapper[4919]: E0109 13:32:42.303977 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-09 13:32:42.80394893 +0000 UTC m=+142.351788420 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.324207 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.354079 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.364263 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.383537 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.403521 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.405192 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 13:32:42 crc kubenswrapper[4919]: E0109 13:32:42.405437 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:42.905401952 +0000 UTC m=+142.453241432 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.406103 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:42 crc kubenswrapper[4919]: E0109 13:32:42.406641 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:42.906625762 +0000 UTC m=+142.454465252 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.422892 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.443508 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.463970 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.483657 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.504346 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.507688 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 13:32:42 crc kubenswrapper[4919]: E0109 13:32:42.507845 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:43.007824717 +0000 UTC m=+142.555664167 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.508790 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:42 crc kubenswrapper[4919]: E0109 13:32:42.509154 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:43.009145959 +0000 UTC m=+142.556985409 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.543815 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.564099 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.584146 4919 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.604785 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.610101 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 13:32:42 crc kubenswrapper[4919]: E0109 13:32:42.610431 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:43.110392297 +0000 UTC m=+142.658231787 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.611060 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:42 crc kubenswrapper[4919]: E0109 13:32:42.611690 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:43.111665207 +0000 UTC m=+142.659504697 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.622945 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.644190 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.664602 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.683723 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.705601 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.713194 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 13:32:42 crc kubenswrapper[4919]: E0109 13:32:42.713493 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:43.213445527 +0000 UTC m=+142.761285027 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.713680 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/58013dad-1347-4da5-8314-495388d1b5c2-oauth-serving-cert\") pod \"console-f9d7485db-bffts\" (UID: \"58013dad-1347-4da5-8314-495388d1b5c2\") " pod="openshift-console/console-f9d7485db-bffts" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.713832 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/58013dad-1347-4da5-8314-495388d1b5c2-service-ca\") pod \"console-f9d7485db-bffts\" (UID: \"58013dad-1347-4da5-8314-495388d1b5c2\") " pod="openshift-console/console-f9d7485db-bffts" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.714053 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/58013dad-1347-4da5-8314-495388d1b5c2-console-serving-cert\") pod \"console-f9d7485db-bffts\" (UID: \"58013dad-1347-4da5-8314-495388d1b5c2\") " pod="openshift-console/console-f9d7485db-bffts" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.714303 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/759c9cb1-8b38-429f-84a1-6a1c02619cf7-config\") pod \"authentication-operator-69f744f599-bln95\" (UID: \"759c9cb1-8b38-429f-84a1-6a1c02619cf7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bln95" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.714414 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/82674f62-f752-4e34-85e4-fc0678f6aca9-config-volume\") pod \"dns-default-vplw6\" (UID: \"82674f62-f752-4e34-85e4-fc0678f6aca9\") " pod="openshift-dns/dns-default-vplw6" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.714691 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/759c9cb1-8b38-429f-84a1-6a1c02619cf7-service-ca-bundle\") pod \"authentication-operator-69f744f599-bln95\" (UID: \"759c9cb1-8b38-429f-84a1-6a1c02619cf7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bln95" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.714954 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/759c9cb1-8b38-429f-84a1-6a1c02619cf7-serving-cert\") pod \"authentication-operator-69f744f599-bln95\" (UID: \"759c9cb1-8b38-429f-84a1-6a1c02619cf7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bln95" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.715025 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/50176685-fd3c-4f19-8a96-ebc4957ca412-node-bootstrap-token\") pod 
\"machine-config-server-nlnnr\" (UID: \"50176685-fd3c-4f19-8a96-ebc4957ca412\") " pod="openshift-machine-config-operator/machine-config-server-nlnnr" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.715110 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/82674f62-f752-4e34-85e4-fc0678f6aca9-metrics-tls\") pod \"dns-default-vplw6\" (UID: \"82674f62-f752-4e34-85e4-fc0678f6aca9\") " pod="openshift-dns/dns-default-vplw6" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.715206 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/58013dad-1347-4da5-8314-495388d1b5c2-console-config\") pod \"console-f9d7485db-bffts\" (UID: \"58013dad-1347-4da5-8314-495388d1b5c2\") " pod="openshift-console/console-f9d7485db-bffts" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.715564 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/50176685-fd3c-4f19-8a96-ebc4957ca412-certs\") pod \"machine-config-server-nlnnr\" (UID: \"50176685-fd3c-4f19-8a96-ebc4957ca412\") " pod="openshift-machine-config-operator/machine-config-server-nlnnr" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.715635 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58013dad-1347-4da5-8314-495388d1b5c2-trusted-ca-bundle\") pod \"console-f9d7485db-bffts\" (UID: \"58013dad-1347-4da5-8314-495388d1b5c2\") " pod="openshift-console/console-f9d7485db-bffts" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.715753 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/58013dad-1347-4da5-8314-495388d1b5c2-console-oauth-config\") pod \"console-f9d7485db-bffts\" (UID: \"58013dad-1347-4da5-8314-495388d1b5c2\") " pod="openshift-console/console-f9d7485db-bffts" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.715758 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/58013dad-1347-4da5-8314-495388d1b5c2-service-ca\") pod \"console-f9d7485db-bffts\" (UID: \"58013dad-1347-4da5-8314-495388d1b5c2\") " pod="openshift-console/console-f9d7485db-bffts" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.715809 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/747dd3ff-596b-48dd-a419-43c73dad5bfb-cert\") pod \"ingress-canary-hnfjx\" (UID: \"747dd3ff-596b-48dd-a419-43c73dad5bfb\") " pod="openshift-ingress-canary/ingress-canary-hnfjx" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.715916 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/759c9cb1-8b38-429f-84a1-6a1c02619cf7-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-bln95\" (UID: \"759c9cb1-8b38-429f-84a1-6a1c02619cf7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bln95" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.716006 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.716085 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/58013dad-1347-4da5-8314-495388d1b5c2-oauth-serving-cert\") pod \"console-f9d7485db-bffts\" (UID: \"58013dad-1347-4da5-8314-495388d1b5c2\") " pod="openshift-console/console-f9d7485db-bffts" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.716460 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/759c9cb1-8b38-429f-84a1-6a1c02619cf7-service-ca-bundle\") pod \"authentication-operator-69f744f599-bln95\" (UID: \"759c9cb1-8b38-429f-84a1-6a1c02619cf7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bln95" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.716472 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/759c9cb1-8b38-429f-84a1-6a1c02619cf7-config\") pod \"authentication-operator-69f744f599-bln95\" (UID: \"759c9cb1-8b38-429f-84a1-6a1c02619cf7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bln95" Jan 09 13:32:42 crc kubenswrapper[4919]: E0109 13:32:42.716706 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:43.216687856 +0000 UTC m=+142.764527346 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.717799 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/58013dad-1347-4da5-8314-495388d1b5c2-console-config\") pod \"console-f9d7485db-bffts\" (UID: \"58013dad-1347-4da5-8314-495388d1b5c2\") " pod="openshift-console/console-f9d7485db-bffts" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.719154 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58013dad-1347-4da5-8314-495388d1b5c2-trusted-ca-bundle\") pod \"console-f9d7485db-bffts\" (UID: \"58013dad-1347-4da5-8314-495388d1b5c2\") " pod="openshift-console/console-f9d7485db-bffts" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.720502 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/82674f62-f752-4e34-85e4-fc0678f6aca9-config-volume\") pod \"dns-default-vplw6\" (UID: \"82674f62-f752-4e34-85e4-fc0678f6aca9\") " pod="openshift-dns/dns-default-vplw6" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.727072 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/759c9cb1-8b38-429f-84a1-6a1c02619cf7-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-bln95\" (UID: \"759c9cb1-8b38-429f-84a1-6a1c02619cf7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bln95" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.730093 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.730617 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/759c9cb1-8b38-429f-84a1-6a1c02619cf7-serving-cert\") pod \"authentication-operator-69f744f599-bln95\" (UID: \"759c9cb1-8b38-429f-84a1-6a1c02619cf7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bln95" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.730682 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/82674f62-f752-4e34-85e4-fc0678f6aca9-metrics-tls\") pod \"dns-default-vplw6\" (UID: \"82674f62-f752-4e34-85e4-fc0678f6aca9\") " pod="openshift-dns/dns-default-vplw6" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.731558 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/50176685-fd3c-4f19-8a96-ebc4957ca412-node-bootstrap-token\") pod \"machine-config-server-nlnnr\" (UID: \"50176685-fd3c-4f19-8a96-ebc4957ca412\") " pod="openshift-machine-config-operator/machine-config-server-nlnnr" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.735005 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/58013dad-1347-4da5-8314-495388d1b5c2-console-serving-cert\") pod \"console-f9d7485db-bffts\" (UID: \"58013dad-1347-4da5-8314-495388d1b5c2\") " pod="openshift-console/console-f9d7485db-bffts" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.735832 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/747dd3ff-596b-48dd-a419-43c73dad5bfb-cert\") pod \"ingress-canary-hnfjx\" (UID: \"747dd3ff-596b-48dd-a419-43c73dad5bfb\") " pod="openshift-ingress-canary/ingress-canary-hnfjx" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.739344 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/58013dad-1347-4da5-8314-495388d1b5c2-console-oauth-config\") pod \"console-f9d7485db-bffts\" (UID: \"58013dad-1347-4da5-8314-495388d1b5c2\") " pod="openshift-console/console-f9d7485db-bffts" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.744309 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/50176685-fd3c-4f19-8a96-ebc4957ca412-certs\") pod \"machine-config-server-nlnnr\" (UID: \"50176685-fd3c-4f19-8a96-ebc4957ca412\") " pod="openshift-machine-config-operator/machine-config-server-nlnnr" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.779011 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gdpn\" (UniqueName: \"kubernetes.io/projected/1969d897-8c50-47f9-90eb-4e9995d3b8d0-kube-api-access-2gdpn\") pod \"machine-approver-56656f9798-sd2mk\" (UID: \"1969d897-8c50-47f9-90eb-4e9995d3b8d0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sd2mk" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.781416 4919 request.go:700] Waited for 1.709287429s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/serviceaccounts/machine-config-operator/token Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.788889 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sd2mk" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.800845 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrhcv\" (UniqueName: \"kubernetes.io/projected/077f68de-b2f6-4bbb-8702-81523f9dc7ab-kube-api-access-mrhcv\") pod \"downloads-7954f5f757-sjvr2\" (UID: \"077f68de-b2f6-4bbb-8702-81523f9dc7ab\") " pod="openshift-console/downloads-7954f5f757-sjvr2" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.805619 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bt4tk\" (UniqueName: \"kubernetes.io/projected/e5c6a4ca-0f89-4286-b0f3-c67039d9d8a8-kube-api-access-bt4tk\") pod \"machine-config-operator-74547568cd-g2956\" (UID: \"e5c6a4ca-0f89-4286-b0f3-c67039d9d8a8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g2956" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.818004 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 13:32:42 crc kubenswrapper[4919]: E0109 13:32:42.819024 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:43.318997539 +0000 UTC m=+142.866836989 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.831485 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6c544528-982d-44c6-bdb9-9fde7a83be80-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-w2b9q\" (UID: \"6c544528-982d-44c6-bdb9-9fde7a83be80\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w2b9q" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.842101 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkb25\" (UniqueName: \"kubernetes.io/projected/34f52914-d7ad-4273-870a-d1be6c03b766-kube-api-access-gkb25\") pod \"openshift-apiserver-operator-796bbdcf4f-l22qw\" (UID: \"34f52914-d7ad-4273-870a-d1be6c03b766\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l22qw" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.846057 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-sjvr2" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.860847 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lt6cm\" (UniqueName: \"kubernetes.io/projected/840f8ce4-e7b0-4def-b619-2a4252624256-kube-api-access-lt6cm\") pod \"openshift-controller-manager-operator-756b6f6bc6-bhv5s\" (UID: \"840f8ce4-e7b0-4def-b619-2a4252624256\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bhv5s" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.883933 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gw5wl\" (UniqueName: \"kubernetes.io/projected/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-kube-api-access-gw5wl\") pod \"oauth-openshift-558db77b4-s2tz5\" (UID: \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2tz5" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.906962 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g2956" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.908567 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d283d70b-0dbe-4059-aa3a-f05d029cb3ab-bound-sa-token\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.920356 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:42 crc kubenswrapper[4919]: E0109 13:32:42.921056 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:43.421041575 +0000 UTC m=+142.968881015 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.922723 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7b28\" (UniqueName: \"kubernetes.io/projected/6c544528-982d-44c6-bdb9-9fde7a83be80-kube-api-access-g7b28\") pod \"cluster-image-registry-operator-dc59b4c8b-w2b9q\" (UID: \"6c544528-982d-44c6-bdb9-9fde7a83be80\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w2b9q" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.942963 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/68296f23-034f-4dfd-bb8d-879beafa7ad0-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-nmjx4\" (UID: \"68296f23-034f-4dfd-bb8d-879beafa7ad0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nmjx4" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.961105 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4rj4\" (UniqueName: \"kubernetes.io/projected/d283d70b-0dbe-4059-aa3a-f05d029cb3ab-kube-api-access-h4rj4\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.963756 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bhv5s" Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.966655 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-r8h48"] Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.981305 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9t68m\" (UniqueName: \"kubernetes.io/projected/5ffc58cf-c5fb-450f-abeb-7fd513919fde-kube-api-access-9t68m\") pod \"service-ca-operator-777779d784-5ntgp\" (UID: \"5ffc58cf-c5fb-450f-abeb-7fd513919fde\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5ntgp" Jan 09 13:32:42 crc kubenswrapper[4919]: W0109 13:32:42.986585 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5975f85_ddfb_4c96_bdc8_da5b3541a769.slice/crio-bf4a25979254ae12ef73f094285ada3ce757cfc5ae7da438d7e94df2e454f6fc WatchSource:0}: Error finding container bf4a25979254ae12ef73f094285ada3ce757cfc5ae7da438d7e94df2e454f6fc: Status 404 returned error can't find the container with id bf4a25979254ae12ef73f094285ada3ce757cfc5ae7da438d7e94df2e454f6fc Jan 09 13:32:42 crc kubenswrapper[4919]: I0109 13:32:42.997469 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vlcl\" (UniqueName: \"kubernetes.io/projected/50176685-fd3c-4f19-8a96-ebc4957ca412-kube-api-access-8vlcl\") pod \"machine-config-server-nlnnr\" (UID: \"50176685-fd3c-4f19-8a96-ebc4957ca412\") " pod="openshift-machine-config-operator/machine-config-server-nlnnr" Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.018600 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9g7l\" (UniqueName: \"kubernetes.io/projected/2797691b-7fdf-450b-a02f-429298cf2a70-kube-api-access-z9g7l\") pod \"catalog-operator-68c6474976-6s6h2\" (UID: \"2797691b-7fdf-450b-a02f-429298cf2a70\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6s6h2" Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.021467 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 13:32:43 crc kubenswrapper[4919]: E0109 13:32:43.021626 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:43.521598605 +0000 UTC m=+143.069438055 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.021868 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.022103 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-s2tz5" Jan 09 13:32:43 crc kubenswrapper[4919]: E0109 13:32:43.022445 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:43.522421405 +0000 UTC m=+143.070260855 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.039110 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5ntgp" Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.043935 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fq6vc\" (UniqueName: \"kubernetes.io/projected/1ab58f80-d33a-4525-8c70-916d566b2521-kube-api-access-fq6vc\") pod \"csi-hostpathplugin-xwk7t\" (UID: \"1ab58f80-d33a-4525-8c70-916d566b2521\") " pod="hostpath-provisioner/csi-hostpathplugin-xwk7t" Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.054946 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ph5g6"] Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.057954 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l22qw" Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.067058 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f22jl\" (UniqueName: \"kubernetes.io/projected/759c9cb1-8b38-429f-84a1-6a1c02619cf7-kube-api-access-f22jl\") pod \"authentication-operator-69f744f599-bln95\" (UID: \"759c9cb1-8b38-429f-84a1-6a1c02619cf7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bln95" Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.078827 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-bln95" Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.085407 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d9739473-727c-4d34-8083-7a5bccb26be6-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2tpnt\" (UID: \"d9739473-727c-4d34-8083-7a5bccb26be6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2tpnt" Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.098405 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4x4q\" (UniqueName: \"kubernetes.io/projected/306b1c8e-d6e2-45e3-8444-5150e5a7d346-kube-api-access-b4x4q\") pod \"multus-admission-controller-857f4d67dd-6h2sh\" (UID: \"306b1c8e-d6e2-45e3-8444-5150e5a7d346\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6h2sh" Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.104119 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-sjvr2"] Jan 09 13:32:43 crc kubenswrapper[4919]: W0109 13:32:43.124724 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48075d37_56ec_4015_a38a_94068ad47148.slice/crio-8c396010645ed73e4010529c1327eefcff5bb1ba5b5fab6ce0a66cd20cd3d61b WatchSource:0}: Error finding container 8c396010645ed73e4010529c1327eefcff5bb1ba5b5fab6ce0a66cd20cd3d61b: Status 404 returned error can't find the container with id 8c396010645ed73e4010529c1327eefcff5bb1ba5b5fab6ce0a66cd20cd3d61b Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.128984 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-xwk7t" Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.130461 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-nlnnr" Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.132420 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bvz7\" (UniqueName: \"kubernetes.io/projected/b26729a1-f6f4-44c6-9d39-b5b5e64104bc-kube-api-access-2bvz7\") pod \"ingress-operator-5b745b69d9-h5zhd\" (UID: \"b26729a1-f6f4-44c6-9d39-b5b5e64104bc\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-h5zhd" Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.134833 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 13:32:43 crc kubenswrapper[4919]: E0109 13:32:43.135368 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:43.635345175 +0000 UTC m=+143.183184615 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.135987 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:43 crc kubenswrapper[4919]: E0109 13:32:43.136551 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:43.636527534 +0000 UTC m=+143.184366984 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.150920 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w2b9q" Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.156807 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b26729a1-f6f4-44c6-9d39-b5b5e64104bc-bound-sa-token\") pod \"ingress-operator-5b745b69d9-h5zhd\" (UID: \"b26729a1-f6f4-44c6-9d39-b5b5e64104bc\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-h5zhd" Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.163817 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tm9v9\" (UniqueName: \"kubernetes.io/projected/8446e162-cf3d-4afd-8dfc-92b5b6d66d64-kube-api-access-tm9v9\") pod \"service-ca-9c57cc56f-cb8tr\" (UID: \"8446e162-cf3d-4afd-8dfc-92b5b6d66d64\") " pod="openshift-service-ca/service-ca-9c57cc56f-cb8tr" Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.184956 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kd6l4\" (UniqueName: \"kubernetes.io/projected/3f44210f-5f93-426b-852f-1fc6f0e4deb7-kube-api-access-kd6l4\") pod \"olm-operator-6b444d44fb-dhxrb\" (UID: \"3f44210f-5f93-426b-852f-1fc6f0e4deb7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dhxrb" Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.191636 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-vb6hf"] Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.200629 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nmjx4" Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.201551 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69e82214-7a8c-4501-afc0-1f7e9d090bcb-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-dpvdp\" (UID: \"69e82214-7a8c-4501-afc0-1f7e9d090bcb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dpvdp" Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.206416 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-g2956"] Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.210454 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-7lrzs"] Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.211837 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-4dsc8"] Jan 09 13:32:43 crc kubenswrapper[4919]: W0109 13:32:43.212278 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod856b0cf5_5731_4842_be1e_b25bb6426674.slice/crio-099e3f00e780e15f84155b3a61641c991f31cba1e75f49e2d01479cfb9980f40 WatchSource:0}: Error finding container 099e3f00e780e15f84155b3a61641c991f31cba1e75f49e2d01479cfb9980f40: Status 404 returned error can't find the container with id 099e3f00e780e15f84155b3a61641c991f31cba1e75f49e2d01479cfb9980f40 Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.215679 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2tpnt" Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.223582 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-h5zhd" Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.240154 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 13:32:43 crc kubenswrapper[4919]: E0109 13:32:43.240866 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:43.740847956 +0000 UTC m=+143.288687406 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.242057 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfqg9\" (UniqueName: \"kubernetes.io/projected/bdd22a6f-64ba-4cc7-9cb0-8e62250a9001-kube-api-access-cfqg9\") pod \"etcd-operator-b45778765-t7d9m\" (UID: \"bdd22a6f-64ba-4cc7-9cb0-8e62250a9001\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t7d9m" Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.242625 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bhv5s"] Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.254810 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-t7d9m" Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.259791 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2p5xc\" (UniqueName: \"kubernetes.io/projected/7b526dd5-1496-4542-aecb-c908662ef696-kube-api-access-2p5xc\") pod \"kube-storage-version-migrator-operator-b67b599dd-k4mjm\" (UID: \"7b526dd5-1496-4542-aecb-c908662ef696\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k4mjm" Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.259901 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4h7gm\" (UniqueName: \"kubernetes.io/projected/73f4afd2-691f-4749-b361-d99c9482a35b-kube-api-access-4h7gm\") pod \"marketplace-operator-79b997595-66425\" (UID: \"73f4afd2-691f-4749-b361-d99c9482a35b\") " pod="openshift-marketplace/marketplace-operator-79b997595-66425" Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.273507 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6s6h2" Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.279063 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k4mjm" Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.280908 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7rhq\" (UniqueName: \"kubernetes.io/projected/9f7cb04a-b39d-4777-b8d5-8c0741134433-kube-api-access-n7rhq\") pod \"packageserver-d55dfcdfc-dwxcs\" (UID: \"9f7cb04a-b39d-4777-b8d5-8c0741134433\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dwxcs" Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.286639 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dpvdp" Jan 09 13:32:43 crc kubenswrapper[4919]: W0109 13:32:43.292889 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73189faa_e786_4c46_b23e_c9e58d6b0490.slice/crio-164a1f36bd356f8cc5eeefd2dbd8bee68a3a43314be4ac9d4cb489149c121108 WatchSource:0}: Error finding container 164a1f36bd356f8cc5eeefd2dbd8bee68a3a43314be4ac9d4cb489149c121108: Status 404 returned error can't find the container with id 164a1f36bd356f8cc5eeefd2dbd8bee68a3a43314be4ac9d4cb489149c121108 Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.300239 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dwxcs" Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.301453 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzhcb\" (UniqueName: \"kubernetes.io/projected/0ea35295-83c1-498b-b190-7dad56fe323b-kube-api-access-rzhcb\") pod \"router-default-5444994796-jx754\" (UID: \"0ea35295-83c1-498b-b190-7dad56fe323b\") " pod="openshift-ingress/router-default-5444994796-jx754" Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.323968 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-cb8tr" Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.325642 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzjkr\" (UniqueName: \"kubernetes.io/projected/ba7db551-cd6a-4d50-98a5-2d532f893e7a-kube-api-access-dzjkr\") pod \"package-server-manager-789f6589d5-9gc57\" (UID: \"ba7db551-cd6a-4d50-98a5-2d532f893e7a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9gc57" Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.331815 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9gc57" Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.344670 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:43 crc kubenswrapper[4919]: E0109 13:32:43.345064 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:43.845043984 +0000 UTC m=+143.392883434 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:43 crc kubenswrapper[4919]: W0109 13:32:43.351882 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod840f8ce4_e7b0_4def_b619_2a4252624256.slice/crio-5f1a1d30cc66f5f762472273c7160a429dcda893a87a05bde42f4a98faedf1f3 WatchSource:0}: Error finding container 5f1a1d30cc66f5f762472273c7160a429dcda893a87a05bde42f4a98faedf1f3: Status 404 returned error can't find the container with id 5f1a1d30cc66f5f762472273c7160a429dcda893a87a05bde42f4a98faedf1f3 Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.353129 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d56p9\" (UniqueName: \"kubernetes.io/projected/8f61ba90-6fa6-4eb4-a496-d05c70940365-kube-api-access-d56p9\") pod \"machine-config-controller-84d6567774-hqds7\" (UID: \"8f61ba90-6fa6-4eb4-a496-d05c70940365\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hqds7" Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.366205 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-6h2sh" Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.369823 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j77wn\" (UniqueName: \"kubernetes.io/projected/b85eabc9-9f0c-45f7-941f-e329f3022b74-kube-api-access-j77wn\") pod \"dns-operator-744455d44c-2ztc2\" (UID: \"b85eabc9-9f0c-45f7-941f-e329f3022b74\") " pod="openshift-dns-operator/dns-operator-744455d44c-2ztc2" Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.377989 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-5ntgp"] Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.378276 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dhxrb" Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.387240 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfkv9\" (UniqueName: \"kubernetes.io/projected/82674f62-f752-4e34-85e4-fc0678f6aca9-kube-api-access-dfkv9\") pod \"dns-default-vplw6\" (UID: \"82674f62-f752-4e34-85e4-fc0678f6aca9\") " pod="openshift-dns/dns-default-vplw6" Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.395017 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-vplw6" Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.401352 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbblk\" (UniqueName: \"kubernetes.io/projected/5b27b30e-8a1e-4c12-ad5a-530c640bf23d-kube-api-access-xbblk\") pod \"control-plane-machine-set-operator-78cbb6b69f-twpss\" (UID: \"5b27b30e-8a1e-4c12-ad5a-530c640bf23d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-twpss" Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.428131 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpxgb\" (UniqueName: \"kubernetes.io/projected/8c3f993d-59c9-444b-9882-cedb07c01c7a-kube-api-access-vpxgb\") pod \"route-controller-manager-6576b87f9c-5q4vb\" (UID: \"8c3f993d-59c9-444b-9882-cedb07c01c7a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5q4vb" Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.439476 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-s2tz5"] Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.443084 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7thdl\" (UniqueName: \"kubernetes.io/projected/58013dad-1347-4da5-8314-495388d1b5c2-kube-api-access-7thdl\") pod \"console-f9d7485db-bffts\" (UID: \"58013dad-1347-4da5-8314-495388d1b5c2\") " pod="openshift-console/console-f9d7485db-bffts" Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.445827 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-2ztc2" Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.446181 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 13:32:43 crc kubenswrapper[4919]: E0109 13:32:43.446399 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:43.946380603 +0000 UTC m=+143.494220053 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.446772 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.447009 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-66425" Jan 09 13:32:43 crc kubenswrapper[4919]: E0109 13:32:43.448176 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:43.948148936 +0000 UTC m=+143.495988386 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.466096 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxfsx\" (UniqueName: \"kubernetes.io/projected/54c29b9f-4240-4edd-98aa-cd053a66000e-kube-api-access-wxfsx\") pod \"cluster-samples-operator-665b6dd947-mv7fj\" (UID: \"54c29b9f-4240-4edd-98aa-cd053a66000e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mv7fj" Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.466345 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l22qw"] Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.481552 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xd2k9\" (UniqueName: \"kubernetes.io/projected/58d19460-8b4f-467d-9bc8-f591dd79992c-kube-api-access-xd2k9\") pod \"console-operator-58897d9998-ttlpm\" (UID: \"58d19460-8b4f-467d-9bc8-f591dd79992c\") " pod="openshift-console-operator/console-operator-58897d9998-ttlpm" Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.500378 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbknj\" (UniqueName: \"kubernetes.io/projected/607f4472-6658-48ef-ba52-4b6b097eaa2e-kube-api-access-xbknj\") pod \"collect-profiles-29466090-q6cvw\" (UID: \"607f4472-6658-48ef-ba52-4b6b097eaa2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466090-q6cvw" Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.520393 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbdxc\" (UniqueName: \"kubernetes.io/projected/747dd3ff-596b-48dd-a419-43c73dad5bfb-kube-api-access-nbdxc\") pod \"ingress-canary-hnfjx\" (UID: \"747dd3ff-596b-48dd-a419-43c73dad5bfb\") " pod="openshift-ingress-canary/ingress-canary-hnfjx" Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.524404 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xwk7t"] Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.540975 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-ttlpm" Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.541578 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kmlt\" (UniqueName: \"kubernetes.io/projected/a42bf9ad-8478-4f7b-93aa-623be932ba47-kube-api-access-5kmlt\") pod \"migrator-59844c95c7-46262\" (UID: \"a42bf9ad-8478-4f7b-93aa-623be932ba47\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-46262" Jan 09 13:32:43 crc kubenswrapper[4919]: W0109 13:32:43.541881 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34f52914_d7ad_4273_870a_d1be6c03b766.slice/crio-14d409aa0651abf9a2dc71007a7c0cd7c6bab25ed587d78e841da47e4ca2d0bc WatchSource:0}: Error finding container 14d409aa0651abf9a2dc71007a7c0cd7c6bab25ed587d78e841da47e4ca2d0bc: Status 404 returned error can't find the container with id 14d409aa0651abf9a2dc71007a7c0cd7c6bab25ed587d78e841da47e4ca2d0bc Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.548173 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 13:32:43 crc kubenswrapper[4919]: E0109 13:32:43.548310 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:44.048286086 +0000 UTC m=+143.596125536 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.548682 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:43 crc kubenswrapper[4919]: E0109 13:32:43.549003 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:44.048994143 +0000 UTC m=+143.596833583 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.556183 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hqds7" Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.593585 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-jx754" Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.614286 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mv7fj" Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.615633 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29466090-q6cvw" Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.650012 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 13:32:43 crc kubenswrapper[4919]: E0109 13:32:43.650266 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:44.15023259 +0000 UTC m=+143.698072040 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.650333 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:43 crc kubenswrapper[4919]: E0109 13:32:43.651064 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:44.15105349 +0000 UTC m=+143.698892940 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.654495 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-twpss" Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.657999 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5q4vb" Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.687663 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-bffts" Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.724600 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-hnfjx" Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.751854 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.752377 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-h5zhd"] Jan 09 13:32:43 crc kubenswrapper[4919]: E0109 13:32:43.752501 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:44.252484061 +0000 UTC m=+143.800323511 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.758070 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nmjx4"] Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.768304 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-nlnnr" event={"ID":"50176685-fd3c-4f19-8a96-ebc4957ca412","Type":"ContainerStarted","Data":"bd62832b47e36b31142925cc446ba3989897cd83025940597b3de81e31ac6ed9"} Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.773756 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sd2mk" event={"ID":"1969d897-8c50-47f9-90eb-4e9995d3b8d0","Type":"ContainerStarted","Data":"7c7bff77f9e51d3b3580bd463bed30d7a4b874d13deb1ff7013708bcb7e641b4"} Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.773783 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sd2mk" event={"ID":"1969d897-8c50-47f9-90eb-4e9995d3b8d0","Type":"ContainerStarted","Data":"82b5bded974341fda87b9083c2025a7465877f6357d0e21cd531f1e46dfaa9b6"} Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.776770 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g2956" event={"ID":"e5c6a4ca-0f89-4286-b0f3-c67039d9d8a8","Type":"ContainerStarted","Data":"38ce73aacab7060a809debed09f08214d24fc1bbd802c784c2cc04add5f69c0f"} Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.778645 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dsc8" event={"ID":"4ff1b886-d642-44c3-ba90-b1b4cb1379dd","Type":"ContainerStarted","Data":"64e97f75ab66d438ae2445cadde9372659064c9b2dd637e8a4dd0bca76e19ace"} Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.785276 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xwk7t" event={"ID":"1ab58f80-d33a-4525-8c70-916d566b2521","Type":"ContainerStarted","Data":"02d864a32da564dddd082246bbc1823f8adcded1743f9c2161cd32de989d41aa"} Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.796003 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-bln95"] Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.829931 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w2b9q"] Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.832032 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-46262" Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.832605 4919 generic.go:334] "Generic (PLEG): container finished" podID="e5975f85-ddfb-4c96-bdc8-da5b3541a769" containerID="3192818d2763bd20e7f7f1f4c4da0803c5d102d3d81ecb56ed80c03808655a52" exitCode=0 Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.832845 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-r8h48" event={"ID":"e5975f85-ddfb-4c96-bdc8-da5b3541a769","Type":"ContainerDied","Data":"3192818d2763bd20e7f7f1f4c4da0803c5d102d3d81ecb56ed80c03808655a52"} Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.832882 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-r8h48" event={"ID":"e5975f85-ddfb-4c96-bdc8-da5b3541a769","Type":"ContainerStarted","Data":"bf4a25979254ae12ef73f094285ada3ce757cfc5ae7da438d7e94df2e454f6fc"} Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.871176 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:43 crc kubenswrapper[4919]: E0109 13:32:43.871817 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:44.371804827 +0000 UTC m=+143.919644277 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.890708 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bhv5s" event={"ID":"840f8ce4-e7b0-4def-b619-2a4252624256","Type":"ContainerStarted","Data":"5f1a1d30cc66f5f762472273c7160a429dcda893a87a05bde42f4a98faedf1f3"} Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.909346 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vb6hf" event={"ID":"856b0cf5-5731-4842-be1e-b25bb6426674","Type":"ContainerStarted","Data":"0abbc6dd0a03b37d9e50f9096c9936d32a5f026f2f26a90f8dffad684745fe49"} Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.909392 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vb6hf" event={"ID":"856b0cf5-5731-4842-be1e-b25bb6426674","Type":"ContainerStarted","Data":"099e3f00e780e15f84155b3a61641c991f31cba1e75f49e2d01479cfb9980f40"} Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.914535 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dhxrb"] Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.923151 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-s2tz5" event={"ID":"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403","Type":"ContainerStarted","Data":"87577580de755238075e78750f0f03636007e0259d098a8f1ae2bed732b9fed1"} Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.930945 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-7lrzs" event={"ID":"73189faa-e786-4c46-b23e-c9e58d6b0490","Type":"ContainerStarted","Data":"164a1f36bd356f8cc5eeefd2dbd8bee68a3a43314be4ac9d4cb489149c121108"} Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.946994 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2tpnt"] Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.949732 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6s6h2"] Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.953043 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-ph5g6" event={"ID":"48075d37-56ec-4015-a38a-94068ad47148","Type":"ContainerStarted","Data":"2b5f9a0384810e48712eb27a6d7178a64c8a39901cb8674a7ea90dc51729cea8"} Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.953183 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-ph5g6" event={"ID":"48075d37-56ec-4015-a38a-94068ad47148","Type":"ContainerStarted","Data":"8c396010645ed73e4010529c1327eefcff5bb1ba5b5fab6ce0a66cd20cd3d61b"} Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.954103 4919 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-ph5g6" Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.961134 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l22qw" event={"ID":"34f52914-d7ad-4273-870a-d1be6c03b766","Type":"ContainerStarted","Data":"14d409aa0651abf9a2dc71007a7c0cd7c6bab25ed587d78e841da47e4ca2d0bc"} Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.965095 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-ph5g6" Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.966932 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5ntgp" event={"ID":"5ffc58cf-c5fb-450f-abeb-7fd513919fde","Type":"ContainerStarted","Data":"6c040bf7d2c48adaf76ef407ab896a470b5be3db59b393ce67d85d877132965b"} Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.972157 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 13:32:43 crc kubenswrapper[4919]: E0109 13:32:43.972435 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:44.472404998 +0000 UTC m=+144.020244438 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.972482 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:43 crc kubenswrapper[4919]: E0109 13:32:43.972852 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:44.472843499 +0000 UTC m=+144.020682949 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.978909 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-sjvr2" event={"ID":"077f68de-b2f6-4bbb-8702-81523f9dc7ab","Type":"ContainerStarted","Data":"713306f551cff35378bf51520fd893b1b0687578cc3baa9f7d5fd1a72254916f"} Jan 09 13:32:43 crc kubenswrapper[4919]: I0109 13:32:43.978974 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-sjvr2" event={"ID":"077f68de-b2f6-4bbb-8702-81523f9dc7ab","Type":"ContainerStarted","Data":"597d0684fc2e235249526bc9bc2006a3872c7105c11ef867d0421039c38e5445"} Jan 09 13:32:44 crc kubenswrapper[4919]: I0109 13:32:44.075663 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 13:32:44 crc kubenswrapper[4919]: E0109 13:32:44.075830 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:44.575802938 +0000 UTC m=+144.123642378 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:44 crc kubenswrapper[4919]: I0109 13:32:44.076162 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:44 crc kubenswrapper[4919]: E0109 13:32:44.077444 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:44.577428047 +0000 UTC m=+144.125267497 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:44 crc kubenswrapper[4919]: I0109 13:32:44.177597 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 13:32:44 crc kubenswrapper[4919]: E0109 13:32:44.178829 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:44.678811227 +0000 UTC m=+144.226650677 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:44 crc kubenswrapper[4919]: W0109 13:32:44.239765 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9739473_727c_4d34_8083_7a5bccb26be6.slice/crio-83d80b6497f006497f4d98c2c20bcffe4af3dbad79b067a9c329a94bbf2b58b5 WatchSource:0}: Error finding container 83d80b6497f006497f4d98c2c20bcffe4af3dbad79b067a9c329a94bbf2b58b5: Status 404 returned error can't find the container with id 83d80b6497f006497f4d98c2c20bcffe4af3dbad79b067a9c329a94bbf2b58b5 Jan 09 13:32:44 crc kubenswrapper[4919]: W0109 13:32:44.253353 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2797691b_7fdf_450b_a02f_429298cf2a70.slice/crio-64252e3879a04eb086b8aabc5b55193b360623bb244dc9169e25e86eb058cb05 WatchSource:0}: Error finding container 64252e3879a04eb086b8aabc5b55193b360623bb244dc9169e25e86eb058cb05: Status 404 returned error can't find the container with id 64252e3879a04eb086b8aabc5b55193b360623bb244dc9169e25e86eb058cb05 Jan 09 13:32:44 crc kubenswrapper[4919]: I0109 13:32:44.282430 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:44 crc kubenswrapper[4919]: E0109 13:32:44.282782 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-09 13:32:44.78276739 +0000 UTC m=+144.330606830 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:44 crc kubenswrapper[4919]: I0109 13:32:44.387360 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 13:32:44 crc kubenswrapper[4919]: E0109 13:32:44.387728 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:44.887711817 +0000 UTC m=+144.435551267 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:44 crc kubenswrapper[4919]: I0109 13:32:44.397787 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:32:44 crc kubenswrapper[4919]: I0109 13:32:44.488676 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:44 crc kubenswrapper[4919]: E0109 13:32:44.489329 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:44.989317272 +0000 UTC m=+144.537156722 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:44 crc kubenswrapper[4919]: I0109 13:32:44.492256 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-t7d9m"] Jan 09 13:32:44 crc kubenswrapper[4919]: I0109 13:32:44.590045 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 13:32:44 crc kubenswrapper[4919]: E0109 13:32:44.590566 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:45.090527678 +0000 UTC m=+144.638367128 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:44 crc kubenswrapper[4919]: I0109 13:32:44.621750 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k4mjm"] Jan 09 13:32:44 crc kubenswrapper[4919]: I0109 13:32:44.694182 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:44 crc kubenswrapper[4919]: I0109 13:32:44.694597 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-6h2sh"] Jan 09 13:32:44 crc kubenswrapper[4919]: E0109 13:32:44.694641 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:45.194626825 +0000 UTC m=+144.742466275 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:44 crc kubenswrapper[4919]: W0109 13:32:44.775505 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b526dd5_1496_4542_aecb_c908662ef696.slice/crio-bea60f46ef4e3311786f205ce0228c412c3d39262a449c8013ddeaa3edf14be7 WatchSource:0}: Error finding container bea60f46ef4e3311786f205ce0228c412c3d39262a449c8013ddeaa3edf14be7: Status 404 returned error can't find the container with id bea60f46ef4e3311786f205ce0228c412c3d39262a449c8013ddeaa3edf14be7 Jan 09 13:32:44 crc kubenswrapper[4919]: I0109 13:32:44.796304 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 13:32:44 crc kubenswrapper[4919]: E0109 13:32:44.796858 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:45.296834825 +0000 UTC m=+144.844674275 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:44 crc kubenswrapper[4919]: W0109 13:32:44.822381 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod306b1c8e_d6e2_45e3_8444_5150e5a7d346.slice/crio-324bfcc8ea1a0dd0d4e00fbe237acb8b15067024f8d3a4ab246ad1e635729acc WatchSource:0}: Error finding container 324bfcc8ea1a0dd0d4e00fbe237acb8b15067024f8d3a4ab246ad1e635729acc: Status 404 returned error can't find the container with id 324bfcc8ea1a0dd0d4e00fbe237acb8b15067024f8d3a4ab246ad1e635729acc Jan 09 13:32:44 crc kubenswrapper[4919]: I0109 13:32:44.859057 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29466090-q6cvw"] Jan 09 13:32:44 crc kubenswrapper[4919]: I0109 13:32:44.859358 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dpvdp"] Jan 09 13:32:44 crc kubenswrapper[4919]: I0109 13:32:44.901899 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:44 crc kubenswrapper[4919]: E0109 13:32:44.902431 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:45.402414117 +0000 UTC m=+144.950253567 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:44 crc kubenswrapper[4919]: W0109 13:32:44.908081 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod607f4472_6658_48ef_ba52_4b6b097eaa2e.slice/crio-73aa2e564e4bb0388ae9699406b7b1db6f76998b0f1af61db3198ef01ab7575d WatchSource:0}: Error finding container 73aa2e564e4bb0388ae9699406b7b1db6f76998b0f1af61db3198ef01ab7575d: Status 404 returned error can't find the container with id 73aa2e564e4bb0388ae9699406b7b1db6f76998b0f1af61db3198ef01ab7575d Jan 09 13:32:44 crc kubenswrapper[4919]: I0109 13:32:44.940937 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dwxcs"] Jan 09 13:32:44 crc kubenswrapper[4919]: I0109 13:32:44.945436 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-cb8tr"] Jan 09 13:32:44 crc kubenswrapper[4919]: I0109 13:32:44.948403 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9gc57"] Jan 09 13:32:44 crc kubenswrapper[4919]: I0109 13:32:44.962840 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-vplw6"] Jan 09 13:32:44 crc kubenswrapper[4919]: I0109 13:32:44.978602 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-ttlpm"] Jan 09 13:32:44 crc kubenswrapper[4919]: I0109 13:32:44.980887 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mv7fj"] Jan 09 13:32:44 crc kubenswrapper[4919]: I0109 13:32:44.982227 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-hqds7"] Jan 09 13:32:45 crc kubenswrapper[4919]: I0109 13:32:45.004842 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 13:32:45 crc kubenswrapper[4919]: E0109 13:32:45.005278 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:45.505255252 +0000 UTC m=+145.053094702 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:45 crc kubenswrapper[4919]: I0109 13:32:45.064250 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-bffts"] Jan 09 13:32:45 crc kubenswrapper[4919]: I0109 13:32:45.107480 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:45 crc kubenswrapper[4919]: E0109 13:32:45.108342 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:45.608327283 +0000 UTC m=+145.156166733 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:45 crc kubenswrapper[4919]: I0109 13:32:45.111570 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bhv5s" event={"ID":"840f8ce4-e7b0-4def-b619-2a4252624256","Type":"ContainerStarted","Data":"41f75fdb1d12b60699199e9b7401b00c5b6c04fb6de9c72e247bd098fdedfdc1"} Jan 09 13:32:45 crc kubenswrapper[4919]: I0109 13:32:45.143749 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-jx754" event={"ID":"0ea35295-83c1-498b-b190-7dad56fe323b","Type":"ContainerStarted","Data":"e86fb75d3ba9ddcfe6e06b598bc2843559a253024b925045e84bcdf240bdaa1a"} Jan 09 13:32:45 crc kubenswrapper[4919]: W0109 13:32:45.172899 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58d19460_8b4f_467d_9bc8_f591dd79992c.slice/crio-488ffe8ebeb87417115751e7e08535079b952ae13fb5a9a9f14dc96ee39ea37d WatchSource:0}: Error finding container 488ffe8ebeb87417115751e7e08535079b952ae13fb5a9a9f14dc96ee39ea37d: Status 404 returned error can't find the container with id 488ffe8ebeb87417115751e7e08535079b952ae13fb5a9a9f14dc96ee39ea37d Jan 09 13:32:45 crc kubenswrapper[4919]: I0109 13:32:45.182945 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2ztc2"] Jan 09 13:32:45 crc kubenswrapper[4919]: I0109 13:32:45.194229 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5ntgp" 
event={"ID":"5ffc58cf-c5fb-450f-abeb-7fd513919fde","Type":"ContainerStarted","Data":"da1f5c8295043ca52f209cec41167a2698e14e9620dd0db0533a7dda6a9ada54"} Jan 09 13:32:45 crc kubenswrapper[4919]: I0109 13:32:45.203160 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-66425"] Jan 09 13:32:45 crc kubenswrapper[4919]: I0109 13:32:45.214179 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 13:32:45 crc kubenswrapper[4919]: E0109 13:32:45.215346 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:45.715327029 +0000 UTC m=+145.263166479 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:45 crc kubenswrapper[4919]: I0109 13:32:45.222436 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-nlnnr" event={"ID":"50176685-fd3c-4f19-8a96-ebc4957ca412","Type":"ContainerStarted","Data":"04cad422dc0bfd993399ba406c2dea36fedecfddf316e761718a555283d1cd86"} Jan 09 13:32:45 crc kubenswrapper[4919]: I0109 13:32:45.233755 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-ph5g6" podStartSLOduration=120.233726906 podStartE2EDuration="2m0.233726906s" podCreationTimestamp="2026-01-09 13:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:32:45.211370743 +0000 UTC m=+144.759210203" watchObservedRunningTime="2026-01-09 13:32:45.233726906 +0000 UTC m=+144.781566366" Jan 09 13:32:45 crc kubenswrapper[4919]: I0109 13:32:45.245109 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5q4vb"] Jan 09 13:32:45 crc kubenswrapper[4919]: I0109 13:32:45.255938 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2tpnt" event={"ID":"d9739473-727c-4d34-8083-7a5bccb26be6","Type":"ContainerStarted","Data":"83d80b6497f006497f4d98c2c20bcffe4af3dbad79b067a9c329a94bbf2b58b5"} Jan 09 13:32:45 crc kubenswrapper[4919]: I0109 13:32:45.275695 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-sjvr2" podStartSLOduration=120.275669274 podStartE2EDuration="2m0.275669274s" podCreationTimestamp="2026-01-09 13:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:32:45.256321014 +0000 UTC 
m=+144.804160464" watchObservedRunningTime="2026-01-09 13:32:45.275669274 +0000 UTC m=+144.823508724" Jan 09 13:32:45 crc kubenswrapper[4919]: I0109 13:32:45.310087 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bhv5s" podStartSLOduration=120.310060008 podStartE2EDuration="2m0.310060008s" podCreationTimestamp="2026-01-09 13:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:32:45.306686696 +0000 UTC m=+144.854526136" watchObservedRunningTime="2026-01-09 13:32:45.310060008 +0000 UTC m=+144.857899458" Jan 09 13:32:45 crc kubenswrapper[4919]: I0109 13:32:45.318526 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:45 crc kubenswrapper[4919]: E0109 13:32:45.319388 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:45.819365994 +0000 UTC m=+145.367205444 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:45 crc kubenswrapper[4919]: W0109 13:32:45.327774 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82674f62_f752_4e34_85e4_fc0678f6aca9.slice/crio-da8d987217893a0881db9d036ed692a337687b09ba016c5d670a10333605b2a6 WatchSource:0}: Error finding container da8d987217893a0881db9d036ed692a337687b09ba016c5d670a10333605b2a6: Status 404 returned error can't find the container with id da8d987217893a0881db9d036ed692a337687b09ba016c5d670a10333605b2a6 Jan 09 13:32:45 crc kubenswrapper[4919]: I0109 13:32:45.328384 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-t7d9m" event={"ID":"bdd22a6f-64ba-4cc7-9cb0-8e62250a9001","Type":"ContainerStarted","Data":"42cb2fa241adc90a3ac6e432b8d66c7e5c4b7555534393a17f707fab3a75e96f"} Jan 09 13:32:45 crc kubenswrapper[4919]: I0109 13:32:45.466412 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 13:32:45 crc kubenswrapper[4919]: I0109 13:32:45.466754 4919 generic.go:334] "Generic (PLEG): container finished" podID="856b0cf5-5731-4842-be1e-b25bb6426674" containerID="0abbc6dd0a03b37d9e50f9096c9936d32a5f026f2f26a90f8dffad684745fe49" exitCode=0 Jan 09 13:32:45 crc 
kubenswrapper[4919]: E0109 13:32:45.466565 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:45.966539956 +0000 UTC m=+145.514379406 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:45 crc kubenswrapper[4919]: I0109 13:32:45.466873 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:45 crc kubenswrapper[4919]: I0109 13:32:45.467000 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vb6hf" event={"ID":"856b0cf5-5731-4842-be1e-b25bb6426674","Type":"ContainerDied","Data":"0abbc6dd0a03b37d9e50f9096c9936d32a5f026f2f26a90f8dffad684745fe49"} Jan 09 13:32:45 crc kubenswrapper[4919]: E0109 13:32:45.467245 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:45.967231072 +0000 UTC m=+145.515070522 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:45 crc kubenswrapper[4919]: I0109 13:32:45.478713 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dhxrb" event={"ID":"3f44210f-5f93-426b-852f-1fc6f0e4deb7","Type":"ContainerStarted","Data":"771e91c193c1badc46350017ba676b8c0905fb87459a4633f410e9935e1ea850"} Jan 09 13:32:45 crc kubenswrapper[4919]: I0109 13:32:45.487422 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vb6hf" Jan 09 13:32:45 crc kubenswrapper[4919]: I0109 13:32:45.525059 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g2956" event={"ID":"e5c6a4ca-0f89-4286-b0f3-c67039d9d8a8","Type":"ContainerStarted","Data":"f939de1170c0ee6c67146cf707b16cc4cafb920c4144c9026dbde5018e11ba98"} Jan 09 13:32:45 crc kubenswrapper[4919]: I0109 13:32:45.553095 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29466090-q6cvw" event={"ID":"607f4472-6658-48ef-ba52-4b6b097eaa2e","Type":"ContainerStarted","Data":"73aa2e564e4bb0388ae9699406b7b1db6f76998b0f1af61db3198ef01ab7575d"} Jan 09 13:32:45 crc kubenswrapper[4919]: I0109 13:32:45.568169 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 13:32:45 crc kubenswrapper[4919]: E0109 13:32:45.568395 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:46.068347326 +0000 UTC m=+145.616186776 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:45 crc kubenswrapper[4919]: I0109 13:32:45.568979 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:45 crc kubenswrapper[4919]: E0109 13:32:45.570371 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:46.070352095 +0000 UTC m=+145.618191545 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:45 crc kubenswrapper[4919]: I0109 13:32:45.581629 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sd2mk" event={"ID":"1969d897-8c50-47f9-90eb-4e9995d3b8d0","Type":"ContainerStarted","Data":"810672359da461d53da9b78f177056f0914c6668ddf6d0c4f2104909612c633f"} Jan 09 13:32:45 crc kubenswrapper[4919]: I0109 13:32:45.612137 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-hnfjx"] Jan 09 13:32:45 crc kubenswrapper[4919]: I0109 13:32:45.627588 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5ntgp" podStartSLOduration=120.627569673 podStartE2EDuration="2m0.627569673s" podCreationTimestamp="2026-01-09 13:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:32:45.626803005 +0000 UTC m=+145.174642455" watchObservedRunningTime="2026-01-09 13:32:45.627569673 +0000 UTC m=+145.175409123" Jan 09 13:32:45 crc kubenswrapper[4919]: I0109 13:32:45.636187 4919 generic.go:334] "Generic (PLEG): container finished" podID="4ff1b886-d642-44c3-ba90-b1b4cb1379dd" containerID="2eb3e9c21360eb31710b2140d1d322507141e670f4da930cf18b6e18cec63479" exitCode=0 Jan 09 13:32:45 crc kubenswrapper[4919]: I0109 13:32:45.636350 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dsc8" event={"ID":"4ff1b886-d642-44c3-ba90-b1b4cb1379dd","Type":"ContainerDied","Data":"2eb3e9c21360eb31710b2140d1d322507141e670f4da930cf18b6e18cec63479"} Jan 09 13:32:45 crc kubenswrapper[4919]: I0109 13:32:45.636877 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-46262"] Jan 09 13:32:45 crc kubenswrapper[4919]: I0109 13:32:45.644078 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-twpss"] Jan 09 13:32:45 crc kubenswrapper[4919]: I0109 13:32:45.667097 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w2b9q" event={"ID":"6c544528-982d-44c6-bdb9-9fde7a83be80","Type":"ContainerStarted","Data":"ac0efd0b644a13e407fef5a84a94507ee8c0575c4743b63ecbd63db29812f2e4"} Jan 09 13:32:45 crc kubenswrapper[4919]: I0109 13:32:45.667923 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-nlnnr" podStartSLOduration=5.667902262 podStartE2EDuration="5.667902262s" podCreationTimestamp="2026-01-09 13:32:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:32:45.666677872 +0000 UTC m=+145.214517322" watchObservedRunningTime="2026-01-09 13:32:45.667902262 +0000 UTC m=+145.215741712" Jan 09 13:32:45 crc kubenswrapper[4919]: I0109 13:32:45.672024 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 13:32:45 crc kubenswrapper[4919]: E0109 13:32:45.674108 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:46.174077012 +0000 UTC m=+145.721916462 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:45 crc kubenswrapper[4919]: I0109 13:32:45.686693 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-7lrzs" event={"ID":"73189faa-e786-4c46-b23e-c9e58d6b0490","Type":"ContainerStarted","Data":"3249b98b168e973aad8ec9104a12386c8844add8797ecb9ade2637b6148086e4"} Jan 09 13:32:45 crc kubenswrapper[4919]: I0109 13:32:45.689568 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nmjx4" event={"ID":"68296f23-034f-4dfd-bb8d-879beafa7ad0","Type":"ContainerStarted","Data":"20c1b43136a2e38ca21169374a2228734e56b9c6f5ea4eab5214464624281d89"} Jan 09 13:32:45 crc kubenswrapper[4919]: I0109 13:32:45.692135 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6s6h2" event={"ID":"2797691b-7fdf-450b-a02f-429298cf2a70","Type":"ContainerStarted","Data":"64252e3879a04eb086b8aabc5b55193b360623bb244dc9169e25e86eb058cb05"} Jan 09 13:32:45 crc kubenswrapper[4919]: I0109 13:32:45.733604 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-bln95" event={"ID":"759c9cb1-8b38-429f-84a1-6a1c02619cf7","Type":"ContainerStarted","Data":"b9e56f3ef55f417d5d1aa6d827a4afd8475ad29fbd55b28bea0414f5446c8ed1"} Jan 09 13:32:45 crc kubenswrapper[4919]: I0109 13:32:45.736153 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-h5zhd" event={"ID":"b26729a1-f6f4-44c6-9d39-b5b5e64104bc","Type":"ContainerStarted","Data":"7a635728ccfaae6eb17850904e1f5519e0ecf96d135151603d854f9ec6e43a1b"} Jan 09 13:32:45 crc kubenswrapper[4919]: I0109 13:32:45.742567 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k4mjm" event={"ID":"7b526dd5-1496-4542-aecb-c908662ef696","Type":"ContainerStarted","Data":"bea60f46ef4e3311786f205ce0228c412c3d39262a449c8013ddeaa3edf14be7"} Jan 09 13:32:45 crc kubenswrapper[4919]: I0109 13:32:45.753795 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-6h2sh" event={"ID":"306b1c8e-d6e2-45e3-8444-5150e5a7d346","Type":"ContainerStarted","Data":"324bfcc8ea1a0dd0d4e00fbe237acb8b15067024f8d3a4ab246ad1e635729acc"} Jan 09 13:32:45 crc kubenswrapper[4919]: I0109 13:32:45.753837 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-sjvr2" Jan 09 13:32:45 crc kubenswrapper[4919]: I0109 13:32:45.773072 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:45 crc 
kubenswrapper[4919]: E0109 13:32:45.777726 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:46.277713557 +0000 UTC m=+145.825553007 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:45 crc kubenswrapper[4919]: I0109 13:32:45.780933 4919 patch_prober.go:28] interesting pod/downloads-7954f5f757-sjvr2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 09 13:32:45 crc kubenswrapper[4919]: I0109 13:32:45.780977 4919 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sjvr2" podUID="077f68de-b2f6-4bbb-8702-81523f9dc7ab" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 09 13:32:45 crc kubenswrapper[4919]: I0109 13:32:45.876122 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 13:32:45 crc kubenswrapper[4919]: E0109 13:32:45.876393 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:46.376378381 +0000 UTC m=+145.924217831 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:45 crc kubenswrapper[4919]: I0109 13:32:45.986924 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:46 crc kubenswrapper[4919]: E0109 13:32:46.016631 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:46.516590774 +0000 UTC m=+146.064430224 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:46 crc kubenswrapper[4919]: I0109 13:32:46.095771 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 13:32:46 crc kubenswrapper[4919]: E0109 13:32:46.095922 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:46.595894748 +0000 UTC m=+146.143734198 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:46 crc kubenswrapper[4919]: I0109 13:32:46.096285 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:46 crc kubenswrapper[4919]: E0109 13:32:46.096789 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:46.59677903 +0000 UTC m=+146.144618480 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 09 13:32:46 crc kubenswrapper[4919]: I0109 13:32:46.197489 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 09 13:32:46 crc kubenswrapper[4919]: E0109 13:32:46.198126 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:46.698109879 +0000 UTC m=+146.245949329 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 09 13:32:46 crc kubenswrapper[4919]: I0109 13:32:46.304562 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps"
Jan 09 13:32:46 crc kubenswrapper[4919]: E0109 13:32:46.305287 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:46.805273459 +0000 UTC m=+146.353112909 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 09 13:32:46 crc kubenswrapper[4919]: I0109 13:32:46.409815 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 09 13:32:46 crc kubenswrapper[4919]: E0109 13:32:46.410175 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:46.910152484 +0000 UTC m=+146.457991934 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 09 13:32:46 crc kubenswrapper[4919]: I0109 13:32:46.443414 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vb6hf" podStartSLOduration=121.443397581 podStartE2EDuration="2m1.443397581s" podCreationTimestamp="2026-01-09 13:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:32:46.432114607 +0000 UTC m=+145.979954057" watchObservedRunningTime="2026-01-09 13:32:46.443397581 +0000 UTC m=+145.991237031"
Jan 09 13:32:46 crc kubenswrapper[4919]: I0109 13:32:46.505569 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sd2mk" podStartSLOduration=121.505525869 podStartE2EDuration="2m1.505525869s" podCreationTimestamp="2026-01-09 13:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:32:46.463882848 +0000 UTC m=+146.011722308" watchObservedRunningTime="2026-01-09 13:32:46.505525869 +0000 UTC m=+146.053365319"
Jan 09 13:32:46 crc kubenswrapper[4919]: I0109 13:32:46.517860 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps"
Jan 09 13:32:46 crc kubenswrapper[4919]: E0109 13:32:46.518478 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:47.018462713 +0000 UTC m=+146.566302163 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 09 13:32:46 crc kubenswrapper[4919]: I0109 13:32:46.550531 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-bln95" podStartSLOduration=121.550511431 podStartE2EDuration="2m1.550511431s" podCreationTimestamp="2026-01-09 13:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:32:46.510711285 +0000 UTC m=+146.058550735" watchObservedRunningTime="2026-01-09 13:32:46.550511431 +0000 UTC m=+146.098350881"
Jan 09 13:32:46 crc kubenswrapper[4919]: I0109 13:32:46.574759 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g2956" podStartSLOduration=121.574740569 podStartE2EDuration="2m1.574740569s" podCreationTimestamp="2026-01-09 13:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:32:46.573779765 +0000 UTC m=+146.121619215" watchObservedRunningTime="2026-01-09 13:32:46.574740569 +0000 UTC m=+146.122580019"
Jan 09 13:32:46 crc kubenswrapper[4919]: I0109 13:32:46.618845 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 09 13:32:46 crc kubenswrapper[4919]: E0109 13:32:46.619018 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:47.118992352 +0000 UTC m=+146.666831802 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 09 13:32:46 crc kubenswrapper[4919]: I0109 13:32:46.619803 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps"
Jan 09 13:32:46 crc kubenswrapper[4919]: E0109 13:32:46.620333 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:47.120315815 +0000 UTC m=+146.668155265 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 09 13:32:46 crc kubenswrapper[4919]: I0109 13:32:46.723724 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 09 13:32:46 crc kubenswrapper[4919]: E0109 13:32:46.724182 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:47.224166405 +0000 UTC m=+146.772005855 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 09 13:32:46 crc kubenswrapper[4919]: I0109 13:32:46.792270 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-6h2sh" event={"ID":"306b1c8e-d6e2-45e3-8444-5150e5a7d346","Type":"ContainerStarted","Data":"770f348a90c898a02f2bf685efa46b95489c8a6370f51a915ebfd00f6303f040"}
Jan 09 13:32:46 crc kubenswrapper[4919]: I0109 13:32:46.804856 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hqds7" event={"ID":"8f61ba90-6fa6-4eb4-a496-d05c70940365","Type":"ContainerStarted","Data":"4a838d29d2a59eb28a251a2fe022c6b4d855e59037d668822601c91f738ebc6c"}
Jan 09 13:32:46 crc kubenswrapper[4919]: I0109 13:32:46.804924 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hqds7" event={"ID":"8f61ba90-6fa6-4eb4-a496-d05c70940365","Type":"ContainerStarted","Data":"832b6d27129bb025afdab5d161264687839d4062e440d44d47ff46e34d2fc5dc"}
Jan 09 13:32:46 crc kubenswrapper[4919]: I0109 13:32:46.809501 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29466090-q6cvw" event={"ID":"607f4472-6658-48ef-ba52-4b6b097eaa2e","Type":"ContainerStarted","Data":"24a4cc1664f94dac46c1fdff979b2d16a4d15968cd735a7bd07c70d5deac7ca4"}
Jan 09 13:32:46 crc kubenswrapper[4919]: I0109 13:32:46.816736 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5q4vb" event={"ID":"8c3f993d-59c9-444b-9882-cedb07c01c7a","Type":"ContainerStarted","Data":"06bf366d91e8558ad567607e05452a46b72378fd4b0428723093351eebe73fae"}
Jan 09 13:32:46 crc kubenswrapper[4919]: I0109 13:32:46.825950 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps"
Jan 09 13:32:46 crc kubenswrapper[4919]: E0109 13:32:46.847931 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:47.347911838 +0000 UTC m=+146.895751288 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 09 13:32:46 crc kubenswrapper[4919]: I0109 13:32:46.892778 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xwk7t" event={"ID":"1ab58f80-d33a-4525-8c70-916d566b2521","Type":"ContainerStarted","Data":"86ba034dc14e2e95b691bbbfa4a31d34b283e5fa419d2dfd228fb898ed5e9af0"}
Jan 09 13:32:46 crc kubenswrapper[4919]: I0109 13:32:46.926838 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 09 13:32:46 crc kubenswrapper[4919]: E0109 13:32:46.927975 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:47.4279577 +0000 UTC m=+146.975797150 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 09 13:32:46 crc kubenswrapper[4919]: I0109 13:32:46.937970 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-r8h48" event={"ID":"e5975f85-ddfb-4c96-bdc8-da5b3541a769","Type":"ContainerStarted","Data":"2e5f2bbac9c1a41d5c5e3df9fc109c038b9d971dbc5e1f3c6fc04e1c7b3886da"}
Jan 09 13:32:46 crc kubenswrapper[4919]: I0109 13:32:46.959544 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2tpnt" event={"ID":"d9739473-727c-4d34-8083-7a5bccb26be6","Type":"ContainerStarted","Data":"88d7704ba188ea05867ee8238195eaac90c6a7f4ca48f2cabdbb7c7b6b3aaa2b"}
Jan 09 13:32:46 crc kubenswrapper[4919]: I0109 13:32:46.989938 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-cb8tr" event={"ID":"8446e162-cf3d-4afd-8dfc-92b5b6d66d64","Type":"ContainerStarted","Data":"fe46a131eb56f15a7ee0c91a5719fb9e08d5f43c354b05acd8a60ee31369e925"}
Jan 09 13:32:46 crc kubenswrapper[4919]: I0109 13:32:46.995671 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29466090-q6cvw" podStartSLOduration=121.995651933 podStartE2EDuration="2m1.995651933s" podCreationTimestamp="2026-01-09 13:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:32:46.852993201 +0000 UTC m=+146.400832661" watchObservedRunningTime="2026-01-09 13:32:46.995651933 +0000 UTC m=+146.543491383"
Jan 09 13:32:46 crc kubenswrapper[4919]: I0109 13:32:46.997646 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2tpnt" podStartSLOduration=121.997639491 podStartE2EDuration="2m1.997639491s" podCreationTimestamp="2026-01-09 13:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:32:46.995813497 +0000 UTC m=+146.543652947" watchObservedRunningTime="2026-01-09 13:32:46.997639491 +0000 UTC m=+146.545478941"
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.001151 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-h5zhd" event={"ID":"b26729a1-f6f4-44c6-9d39-b5b5e64104bc","Type":"ContainerStarted","Data":"0ed09319072508c435756a16e11ab56e14abe4c2ce8f19a9a0e8ca63ec1db052"}
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.018651 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2ztc2" event={"ID":"b85eabc9-9f0c-45f7-941f-e329f3022b74","Type":"ContainerStarted","Data":"2d60dea4dd06b0e85b12c06bb69e104eccffb47c4658de08de5feb284c5daf50"}
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.026499 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-t7d9m" event={"ID":"bdd22a6f-64ba-4cc7-9cb0-8e62250a9001","Type":"ContainerStarted","Data":"6a008a8df2224701aee2b017130d331fce43f51c7b6e24023530b84813eeedd6"}
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.028302 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps"
Jan 09 13:32:47 crc kubenswrapper[4919]: E0109 13:32:47.028840 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:47.528825368 +0000 UTC m=+147.076664818 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.030084 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9gc57" event={"ID":"ba7db551-cd6a-4d50-98a5-2d532f893e7a","Type":"ContainerStarted","Data":"9659a8b4c80165b75d33ac32ad3b265b29f7ca7d5303610430cfbe3b769056be"}
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.030143 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9gc57" event={"ID":"ba7db551-cd6a-4d50-98a5-2d532f893e7a","Type":"ContainerStarted","Data":"710db9a6952c76c46f2a50869b3b182d54d66916cc160716439413ee3a0aaeff"}
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.051178 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-jx754" event={"ID":"0ea35295-83c1-498b-b190-7dad56fe323b","Type":"ContainerStarted","Data":"582a4025ecf7056a50ddf94f1d7c5f6ff4ea5f0e0ad069284405d3ae0020c16a"}
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.092043 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w2b9q" event={"ID":"6c544528-982d-44c6-bdb9-9fde7a83be80","Type":"ContainerStarted","Data":"979ec1e7cfcad95806ff22b754e5bbd828d633af4bf77272dbd84403e92f5391"}
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.094000 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-t7d9m" podStartSLOduration=122.093976569 podStartE2EDuration="2m2.093976569s" podCreationTimestamp="2026-01-09 13:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:32:47.091862608 +0000 UTC m=+146.639702058" watchObservedRunningTime="2026-01-09 13:32:47.093976569 +0000 UTC m=+146.641816019"
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.114041 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-twpss" event={"ID":"5b27b30e-8a1e-4c12-ad5a-530c640bf23d","Type":"ContainerStarted","Data":"10cd0a3a5557b5dd3bdc323919afd2d53777e32a554cbdfa99566479989532e2"}
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.125119 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-46262" event={"ID":"a42bf9ad-8478-4f7b-93aa-623be932ba47","Type":"ContainerStarted","Data":"dd4cd1a9d4cf5db9ab040e8c93f34bcc473b118265fd4a80ecab004147e10653"}
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.129038 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 09 13:32:47 crc kubenswrapper[4919]: E0109 13:32:47.130277 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:47.63026013 +0000 UTC m=+147.178099580 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.153451 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-jx754" podStartSLOduration=122.153436522 podStartE2EDuration="2m2.153436522s" podCreationTimestamp="2026-01-09 13:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:32:47.151687059 +0000 UTC m=+146.699526509" watchObservedRunningTime="2026-01-09 13:32:47.153436522 +0000 UTC m=+146.701275972"
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.205989 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vb6hf" event={"ID":"856b0cf5-5731-4842-be1e-b25bb6426674","Type":"ContainerStarted","Data":"a0632b64384bbbe7e4b90c83352992059c3f702312a2d4801692ab99f658aea4"}
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.233773 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-ttlpm" event={"ID":"58d19460-8b4f-467d-9bc8-f591dd79992c","Type":"ContainerStarted","Data":"488ffe8ebeb87417115751e7e08535079b952ae13fb5a9a9f14dc96ee39ea37d"}
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.239466 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps"
Jan 09 13:32:47 crc kubenswrapper[4919]: E0109 13:32:47.241764 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:47.741744845 +0000 UTC m=+147.289584295 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.243832 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nmjx4" event={"ID":"68296f23-034f-4dfd-bb8d-879beafa7ad0","Type":"ContainerStarted","Data":"91163cb0316f3297fef4f489e566204d009014949dc1c449e97efaf64c4721f0"}
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.263287 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-bffts" event={"ID":"58013dad-1347-4da5-8314-495388d1b5c2","Type":"ContainerStarted","Data":"7bb01366729a4aa01c36225ea7d6284c32529ecc101133a631ad815401aba2bb"}
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.263338 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-bffts" event={"ID":"58013dad-1347-4da5-8314-495388d1b5c2","Type":"ContainerStarted","Data":"aa73b81ba41d1482ba8b767364c142acf269a7e2994acd3b43233557e937a53a"}
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.301007 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w2b9q" podStartSLOduration=122.300985753 podStartE2EDuration="2m2.300985753s" podCreationTimestamp="2026-01-09 13:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:32:47.207098964 +0000 UTC m=+146.754938414" watchObservedRunningTime="2026-01-09 13:32:47.300985753 +0000 UTC m=+146.848825193"
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.301185 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nmjx4" podStartSLOduration=122.301181907 podStartE2EDuration="2m2.301181907s" podCreationTimestamp="2026-01-09 13:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:32:47.299171418 +0000 UTC m=+146.847010858" watchObservedRunningTime="2026-01-09 13:32:47.301181907 +0000 UTC m=+146.849021357"
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.336729 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dwxcs" event={"ID":"9f7cb04a-b39d-4777-b8d5-8c0741134433","Type":"ContainerStarted","Data":"e670c2bd1d794b64267eef547852148e6a4c4a4215b64799516de7e3dde5533f"}
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.338117 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dwxcs"
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.343079 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 09 13:32:47 crc kubenswrapper[4919]: E0109 13:32:47.344561 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:47.844539499 +0000 UTC m=+147.392378939 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.382418 4919 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-dwxcs container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:5443/healthz\": dial tcp 10.217.0.23:5443: connect: connection refused" start-of-body=
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.382474 4919 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dwxcs" podUID="9f7cb04a-b39d-4777-b8d5-8c0741134433" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.23:5443/healthz\": dial tcp 10.217.0.23:5443: connect: connection refused"
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.396824 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6s6h2" event={"ID":"2797691b-7fdf-450b-a02f-429298cf2a70","Type":"ContainerStarted","Data":"6e398cc03dbd43732a7ebac4f6b34f97b676ec6704306ac55b83595e38a24e74"}
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.397846 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6s6h2"
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.424658 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l22qw" event={"ID":"34f52914-d7ad-4273-870a-d1be6c03b766","Type":"ContainerStarted","Data":"5fb009f214e7f6afeaef099d64af459f7e29853cf7c4baf52280b1f31c5d737e"}
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.438179 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-hnfjx" event={"ID":"747dd3ff-596b-48dd-a419-43c73dad5bfb","Type":"ContainerStarted","Data":"b921a38b236efcb91589343058c44879bdbaeb40545d5eedff92c1f1e729a5b8"}
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.446124 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps"
Jan 09 13:32:47 crc kubenswrapper[4919]: E0109 13:32:47.448825 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:47.9488105 +0000 UTC m=+147.496649950 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.453410 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k4mjm" event={"ID":"7b526dd5-1496-4542-aecb-c908662ef696","Type":"ContainerStarted","Data":"ef578dcab9f5a5f28fe3910d200d0425c95b232d0d6a3443f7a75637065f9f5a"}
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.458205 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-7lrzs" event={"ID":"73189faa-e786-4c46-b23e-c9e58d6b0490","Type":"ContainerStarted","Data":"6af12d39978e65133996e8830703053cf3030a75009a5d4ecc618a55f1a16dfd"}
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.462142 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6s6h2"
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.472828 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dpvdp" event={"ID":"69e82214-7a8c-4501-afc0-1f7e9d090bcb","Type":"ContainerStarted","Data":"f23e7c854614751ac67be6c33baeee30c6870a8f58becd9f9bd9cba107204a96"}
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.472878 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dpvdp" event={"ID":"69e82214-7a8c-4501-afc0-1f7e9d090bcb","Type":"ContainerStarted","Data":"186eefd869cf365a42f9cbf52161d8042f5c526499b53247e201068d3bf9aae1"}
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.493266 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6s6h2" podStartSLOduration=122.493247108 podStartE2EDuration="2m2.493247108s" podCreationTimestamp="2026-01-09 13:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:32:47.490913302 +0000 UTC m=+147.038752752" watchObservedRunningTime="2026-01-09 13:32:47.493247108 +0000 UTC m=+147.041086558"
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.502900 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-vplw6" event={"ID":"82674f62-f752-4e34-85e4-fc0678f6aca9","Type":"ContainerStarted","Data":"da8d987217893a0881db9d036ed692a337687b09ba016c5d670a10333605b2a6"}
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.508699 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-bffts" podStartSLOduration=122.508675843 podStartE2EDuration="2m2.508675843s" podCreationTimestamp="2026-01-09 13:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:32:47.397830213 +0000 UTC m=+146.945669663" watchObservedRunningTime="2026-01-09 13:32:47.508675843 +0000 UTC m=+147.056515293"
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.519616 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-s2tz5" event={"ID":"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403","Type":"ContainerStarted","Data":"e0c06a08106cf370a189559436e74217b9819e47fef00ac75e69846f6e0e62e2"}
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.520711 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-s2tz5"
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.548602 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 09 13:32:47 crc kubenswrapper[4919]: E0109 13:32:47.548865 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:48.048835037 +0000 UTC m=+147.596674487 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.549389 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps"
Jan 09 13:32:47 crc kubenswrapper[4919]: E0109 13:32:47.551408 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:48.051394719 +0000 UTC m=+147.599234169 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.559025 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dhxrb" event={"ID":"3f44210f-5f93-426b-852f-1fc6f0e4deb7","Type":"ContainerStarted","Data":"aa1cc3df7edf891a3cbf744720e6cd46eb9578ba86ba4fa6c0b4acb39db1a363"}
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.560191 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dhxrb"
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.561487 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dwxcs" podStartSLOduration=122.561475004 podStartE2EDuration="2m2.561475004s" podCreationTimestamp="2026-01-09 13:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:32:47.559549037 +0000 UTC m=+147.107388487" watchObservedRunningTime="2026-01-09 13:32:47.561475004 +0000 UTC m=+147.109314454"
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.592592 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-66425" event={"ID":"73f4afd2-691f-4749-b361-d99c9482a35b","Type":"ContainerStarted","Data":"e1a0c27c14757895b9f45718d0f9ff65a5755adbd5a99ea7fb4ae689244a039d"}
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.592639 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-66425" event={"ID":"73f4afd2-691f-4749-b361-d99c9482a35b","Type":"ContainerStarted","Data":"7df54f6227f15b52f9e4267ec772b2578bd1504091bc88e0429ee94bd0f69e66"}
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.593475 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-66425"
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.594281 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-jx754"
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.609580 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dhxrb"
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.613301 4919 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-66425 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body=
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.613356 4919 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-66425" podUID="73f4afd2-691f-4749-b361-d99c9482a35b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused"
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.624342 4919 patch_prober.go:28] interesting pod/router-default-5444994796-jx754 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 09 13:32:47 crc kubenswrapper[4919]: [-]has-synced failed: reason withheld
Jan 09 13:32:47 crc kubenswrapper[4919]: [+]process-running ok
Jan 09 13:32:47 crc kubenswrapper[4919]: healthz check failed
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.624400 4919 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jx754" podUID="0ea35295-83c1-498b-b190-7dad56fe323b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.650129 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dpvdp" podStartSLOduration=122.650103105 podStartE2EDuration="2m2.650103105s" podCreationTimestamp="2026-01-09 13:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:32:47.644375626 +0000 UTC m=+147.192215076" watchObservedRunningTime="2026-01-09 13:32:47.650103105 +0000 UTC m=+147.197942555"
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.651015 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 09 13:32:47 crc kubenswrapper[4919]: E0109 13:32:47.652560 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:48.152529164 +0000 UTC m=+147.700368614 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.762118 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps"
Jan 09 13:32:47 crc kubenswrapper[4919]: E0109 13:32:47.769168 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:48.269149794 +0000 UTC m=+147.816989244 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.776665 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-s2tz5"
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.777412 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g2956" event={"ID":"e5c6a4ca-0f89-4286-b0f3-c67039d9d8a8","Type":"ContainerStarted","Data":"04b7d835d3c18a4d68986271b97225059c7be31c2673aca5d64acd715be0801c"}
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.770185 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-l22qw" podStartSLOduration=122.770165468 podStartE2EDuration="2m2.770165468s" podCreationTimestamp="2026-01-09 13:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:32:47.702020184 +0000 UTC m=+147.249859634" watchObservedRunningTime="2026-01-09 13:32:47.770165468 +0000 UTC m=+147.318004918"
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.840198 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-bln95" event={"ID":"759c9cb1-8b38-429f-84a1-6a1c02619cf7","Type":"ContainerStarted","Data":"34b36812b8992b79f6c7d834d3d94e375de05526f78ed2d685c30e2cbed36e1e"}
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.849821 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-s2tz5" podStartSLOduration=122.849798751 podStartE2EDuration="2m2.849798751s" podCreationTimestamp="2026-01-09 13:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:32:47.803465516 +0000 UTC m=+147.351304976" watchObservedRunningTime="2026-01-09 13:32:47.849798751 +0000 UTC m=+147.397638201"
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.872837 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 09 13:32:47 crc kubenswrapper[4919]: E0109 13:32:47.874755 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:48.374733096 +0000 UTC m=+147.922572546 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.891417 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dhxrb" podStartSLOduration=122.89139362 podStartE2EDuration="2m2.89139362s" podCreationTimestamp="2026-01-09 13:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:32:47.871906237 +0000 UTC m=+147.419745687" watchObservedRunningTime="2026-01-09 13:32:47.89139362 +0000 UTC m=+147.439233070"
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.944719 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-7lrzs" podStartSLOduration=122.944699494 podStartE2EDuration="2m2.944699494s" podCreationTimestamp="2026-01-09 13:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:32:47.94372506 +0000 UTC m=+147.491564510" watchObservedRunningTime="2026-01-09 13:32:47.944699494 +0000 UTC m=+147.492538944"
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.945079 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k4mjm" podStartSLOduration=122.945074363 podStartE2EDuration="2m2.945074363s" podCreationTimestamp="2026-01-09 13:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:32:47.915611228 +0000 UTC m=+147.463450678" watchObservedRunningTime="2026-01-09 13:32:47.945074363 +0000 UTC m=+147.492913813"
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.954044 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mv7fj" event={"ID":"54c29b9f-4240-4edd-98aa-cd053a66000e","Type":"ContainerStarted","Data":"c2aba8a906645e5f8aed917b28680b45e9099623bee1795f6930907bd2ad7f17"}
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.955392 4919 patch_prober.go:28] interesting pod/downloads-7954f5f757-sjvr2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.958512 4919 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sjvr2" podUID="077f68de-b2f6-4bbb-8702-81523f9dc7ab" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Jan 09 13:32:47 crc kubenswrapper[4919]: I0109 13:32:47.986204 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps"
Jan 09 13:32:47 crc kubenswrapper[4919]: E0109 13:32:47.999471 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:48.499455762 +0000 UTC m=+148.047295212 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 09 13:32:48 crc kubenswrapper[4919]: I0109 13:32:48.070307 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vb6hf"
Jan 09 13:32:48 crc kubenswrapper[4919]: I0109 13:32:48.087057 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-66425" podStartSLOduration=123.087038728 podStartE2EDuration="2m3.087038728s" podCreationTimestamp="2026-01-09 13:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:32:48.085785907 +0000 UTC m=+147.633625357" watchObservedRunningTime="2026-01-09 13:32:48.087038728 +0000 UTC m=+147.634878178"
Jan 09 13:32:48 crc kubenswrapper[4919]: I0109 13:32:48.087677 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 09 13:32:48 crc kubenswrapper[4919]: E0109 13:32:48.088138 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:48.588121494 +0000 UTC m=+148.135960944 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 09 13:32:48 crc kubenswrapper[4919]: I0109 13:32:48.188858 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps"
Jan 09 13:32:48 crc kubenswrapper[4919]: E0109 13:32:48.189204 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:48.689188216 +0000 UTC m=+148.237027666 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 09 13:32:48 crc kubenswrapper[4919]: I0109 13:32:48.289842 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 09 13:32:48 crc kubenswrapper[4919]: E0109 13:32:48.290656 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:48.790638938 +0000 UTC m=+148.338478388 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 09 13:32:48 crc kubenswrapper[4919]: I0109 13:32:48.392225 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps"
Jan 09 13:32:48 crc kubenswrapper[4919]: E0109 13:32:48.392606 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:48.892593373 +0000 UTC m=+148.440432813 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 09 13:32:48 crc kubenswrapper[4919]: I0109 13:32:48.493154 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 09 13:32:48 crc kubenswrapper[4919]: E0109 13:32:48.493329 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:48.993301686 +0000 UTC m=+148.541141136 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 09 13:32:48 crc kubenswrapper[4919]: I0109 13:32:48.493649 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps"
Jan 09 13:32:48 crc kubenswrapper[4919]: E0109 13:32:48.493934 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:48.993927712 +0000 UTC m=+148.541767162 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 09 13:32:48 crc kubenswrapper[4919]: I0109 13:32:48.591897 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tf6wk"]
Jan 09 13:32:48 crc kubenswrapper[4919]: I0109 13:32:48.593743 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tf6wk"
Jan 09 13:32:48 crc kubenswrapper[4919]: I0109 13:32:48.600529 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 09 13:32:48 crc kubenswrapper[4919]: I0109 13:32:48.600560 4919 patch_prober.go:28] interesting pod/router-default-5444994796-jx754 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 09 13:32:48 crc kubenswrapper[4919]: [-]has-synced failed: reason withheld
Jan 09 13:32:48 crc kubenswrapper[4919]: [+]process-running ok
Jan 09 13:32:48 crc kubenswrapper[4919]: healthz check failed
Jan 09 13:32:48 crc kubenswrapper[4919]: I0109 13:32:48.600639 4919 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jx754" podUID="0ea35295-83c1-498b-b190-7dad56fe323b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 09 13:32:48 crc kubenswrapper[4919]: E0109 13:32:48.600801 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:49.100672241 +0000 UTC m=+148.648511691 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 09 13:32:48 crc kubenswrapper[4919]: I0109 13:32:48.600870 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps"
Jan 09 13:32:48 crc kubenswrapper[4919]: I0109 13:32:48.601085 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18b90207-0827-4db3-b0ca-e622b58ed504-catalog-content\") pod \"certified-operators-tf6wk\" (UID: \"18b90207-0827-4db3-b0ca-e622b58ed504\") " pod="openshift-marketplace/certified-operators-tf6wk"
Jan 09 13:32:48 crc kubenswrapper[4919]: I0109 13:32:48.601141 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18b90207-0827-4db3-b0ca-e622b58ed504-utilities\") pod \"certified-operators-tf6wk\" (UID: \"18b90207-0827-4db3-b0ca-e622b58ed504\") " pod="openshift-marketplace/certified-operators-tf6wk"
Jan 09 13:32:48 crc kubenswrapper[4919]: I0109 13:32:48.601173 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw6nc\" (UniqueName: \"kubernetes.io/projected/18b90207-0827-4db3-b0ca-e622b58ed504-kube-api-access-bw6nc\") pod \"certified-operators-tf6wk\" (UID: \"18b90207-0827-4db3-b0ca-e622b58ed504\") " pod="openshift-marketplace/certified-operators-tf6wk"
Jan 09 13:32:48 crc kubenswrapper[4919]: E0109 13:32:48.601279 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:49.101261965 +0000 UTC m=+148.649101475 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 09 13:32:48 crc kubenswrapper[4919]: I0109 13:32:48.607040 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 09 13:32:48 crc kubenswrapper[4919]: I0109 13:32:48.631286 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tf6wk"]
Jan 09 13:32:48 crc kubenswrapper[4919]: I0109 13:32:48.702578 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 09 13:32:48 crc kubenswrapper[4919]: I0109 13:32:48.702756 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18b90207-0827-4db3-b0ca-e622b58ed504-catalog-content\") pod \"certified-operators-tf6wk\" (UID: \"18b90207-0827-4db3-b0ca-e622b58ed504\") " pod="openshift-marketplace/certified-operators-tf6wk"
Jan 09 13:32:48 crc kubenswrapper[4919]: I0109 13:32:48.702782 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18b90207-0827-4db3-b0ca-e622b58ed504-utilities\") pod \"certified-operators-tf6wk\" (UID: \"18b90207-0827-4db3-b0ca-e622b58ed504\") " pod="openshift-marketplace/certified-operators-tf6wk"
Jan 09 13:32:48 crc kubenswrapper[4919]: I0109 13:32:48.702806 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bw6nc\" (UniqueName: \"kubernetes.io/projected/18b90207-0827-4db3-b0ca-e622b58ed504-kube-api-access-bw6nc\") pod \"certified-operators-tf6wk\" (UID: \"18b90207-0827-4db3-b0ca-e622b58ed504\") " pod="openshift-marketplace/certified-operators-tf6wk"
Jan 09 13:32:48 crc kubenswrapper[4919]: E0109 13:32:48.703162 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:49.203147228 +0000 UTC m=+148.750986668 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:48 crc kubenswrapper[4919]: I0109 13:32:48.703860 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18b90207-0827-4db3-b0ca-e622b58ed504-catalog-content\") pod \"certified-operators-tf6wk\" (UID: \"18b90207-0827-4db3-b0ca-e622b58ed504\") " pod="openshift-marketplace/certified-operators-tf6wk" Jan 09 13:32:48 crc kubenswrapper[4919]: I0109 13:32:48.704086 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18b90207-0827-4db3-b0ca-e622b58ed504-utilities\") pod \"certified-operators-tf6wk\" (UID: \"18b90207-0827-4db3-b0ca-e622b58ed504\") " pod="openshift-marketplace/certified-operators-tf6wk" Jan 09 13:32:48 crc kubenswrapper[4919]: I0109 13:32:48.739041 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bw6nc\" (UniqueName: \"kubernetes.io/projected/18b90207-0827-4db3-b0ca-e622b58ed504-kube-api-access-bw6nc\") pod \"certified-operators-tf6wk\" (UID: \"18b90207-0827-4db3-b0ca-e622b58ed504\") " pod="openshift-marketplace/certified-operators-tf6wk" Jan 09 13:32:48 crc kubenswrapper[4919]: I0109 13:32:48.766937 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xvr9v"] Jan 09 13:32:48 crc kubenswrapper[4919]: I0109 13:32:48.767856 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xvr9v" Jan 09 13:32:48 crc kubenswrapper[4919]: I0109 13:32:48.790641 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 09 13:32:48 crc kubenswrapper[4919]: I0109 13:32:48.794646 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xvr9v"] Jan 09 13:32:48 crc kubenswrapper[4919]: I0109 13:32:48.805969 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:48 crc kubenswrapper[4919]: I0109 13:32:48.806023 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/691c6d86-b150-4576-872d-004862dcbd22-utilities\") pod \"community-operators-xvr9v\" (UID: \"691c6d86-b150-4576-872d-004862dcbd22\") " pod="openshift-marketplace/community-operators-xvr9v" Jan 09 13:32:48 crc kubenswrapper[4919]: I0109 13:32:48.806043 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x6wj\" (UniqueName: \"kubernetes.io/projected/691c6d86-b150-4576-872d-004862dcbd22-kube-api-access-4x6wj\") pod \"community-operators-xvr9v\" (UID: \"691c6d86-b150-4576-872d-004862dcbd22\") " pod="openshift-marketplace/community-operators-xvr9v" Jan 09 13:32:48 crc kubenswrapper[4919]: I0109 13:32:48.806080 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/691c6d86-b150-4576-872d-004862dcbd22-catalog-content\") pod \"community-operators-xvr9v\" (UID: \"691c6d86-b150-4576-872d-004862dcbd22\") " pod="openshift-marketplace/community-operators-xvr9v" Jan 09 13:32:48 crc kubenswrapper[4919]: E0109 13:32:48.806409 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:49.306396363 +0000 UTC m=+148.854235803 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:48 crc kubenswrapper[4919]: I0109 13:32:48.909837 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 13:32:48 crc kubenswrapper[4919]: I0109 13:32:48.910449 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/691c6d86-b150-4576-872d-004862dcbd22-utilities\") pod \"community-operators-xvr9v\" (UID: \"691c6d86-b150-4576-872d-004862dcbd22\") " pod="openshift-marketplace/community-operators-xvr9v" Jan 09 13:32:48 crc kubenswrapper[4919]: I0109 13:32:48.910480 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4x6wj\" (UniqueName: \"kubernetes.io/projected/691c6d86-b150-4576-872d-004862dcbd22-kube-api-access-4x6wj\") pod \"community-operators-xvr9v\" (UID: \"691c6d86-b150-4576-872d-004862dcbd22\") " pod="openshift-marketplace/community-operators-xvr9v" Jan 09 13:32:48 crc kubenswrapper[4919]: I0109 13:32:48.910517 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/691c6d86-b150-4576-872d-004862dcbd22-catalog-content\") pod \"community-operators-xvr9v\" (UID: \"691c6d86-b150-4576-872d-004862dcbd22\") " pod="openshift-marketplace/community-operators-xvr9v" Jan 09 13:32:48 crc kubenswrapper[4919]: E0109 13:32:48.910793 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:49.410758126 +0000 UTC m=+148.958597606 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:48 crc kubenswrapper[4919]: I0109 13:32:48.910969 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/691c6d86-b150-4576-872d-004862dcbd22-catalog-content\") pod \"community-operators-xvr9v\" (UID: \"691c6d86-b150-4576-872d-004862dcbd22\") " pod="openshift-marketplace/community-operators-xvr9v" Jan 09 13:32:48 crc kubenswrapper[4919]: I0109 13:32:48.911052 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/691c6d86-b150-4576-872d-004862dcbd22-utilities\") pod \"community-operators-xvr9v\" (UID: \"691c6d86-b150-4576-872d-004862dcbd22\") " pod="openshift-marketplace/community-operators-xvr9v" Jan 09 13:32:48 crc kubenswrapper[4919]: I0109 13:32:48.914547 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tf6wk" Jan 09 13:32:48 crc kubenswrapper[4919]: I0109 13:32:48.937359 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4x6wj\" (UniqueName: \"kubernetes.io/projected/691c6d86-b150-4576-872d-004862dcbd22-kube-api-access-4x6wj\") pod \"community-operators-xvr9v\" (UID: \"691c6d86-b150-4576-872d-004862dcbd22\") " pod="openshift-marketplace/community-operators-xvr9v" Jan 09 13:32:48 crc kubenswrapper[4919]: I0109 13:32:48.958826 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xppnp"] Jan 09 13:32:48 crc kubenswrapper[4919]: I0109 13:32:48.983196 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xppnp" Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.007439 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xppnp"] Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.014365 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd7pl\" (UniqueName: \"kubernetes.io/projected/a7ddc148-0c1a-496f-b58b-c88f30af7344-kube-api-access-zd7pl\") pod \"certified-operators-xppnp\" (UID: \"a7ddc148-0c1a-496f-b58b-c88f30af7344\") " pod="openshift-marketplace/certified-operators-xppnp" Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.014422 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7ddc148-0c1a-496f-b58b-c88f30af7344-utilities\") pod \"certified-operators-xppnp\" (UID: \"a7ddc148-0c1a-496f-b58b-c88f30af7344\") " pod="openshift-marketplace/certified-operators-xppnp" Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.014476 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7ddc148-0c1a-496f-b58b-c88f30af7344-catalog-content\") pod \"certified-operators-xppnp\" (UID: \"a7ddc148-0c1a-496f-b58b-c88f30af7344\") " pod="openshift-marketplace/certified-operators-xppnp" Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.014502 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:49 crc kubenswrapper[4919]: E0109 13:32:49.014792 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:49.51477973 +0000 UTC m=+149.062619180 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.022464 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-ttlpm" event={"ID":"58d19460-8b4f-467d-9bc8-f591dd79992c","Type":"ContainerStarted","Data":"af8f17f23ffca62ddd4ae87c47923d00c64928f0d27f669f26e0ffbb80bce2e8"} Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.023315 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-ttlpm" Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.043397 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-vplw6" event={"ID":"82674f62-f752-4e34-85e4-fc0678f6aca9","Type":"ContainerStarted","Data":"e135f3f5c4e93d50ea30b6308a2eeb68260eec756b8ef10d791b9e59555a4b22"} Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.043441 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-vplw6" event={"ID":"82674f62-f752-4e34-85e4-fc0678f6aca9","Type":"ContainerStarted","Data":"0770eeb7ffca47f9fafdeed78ffeef8296febb75ca15e9c5a866ce31ce813321"} Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.044068 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-vplw6" Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.062402 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-hnfjx" event={"ID":"747dd3ff-596b-48dd-a419-43c73dad5bfb","Type":"ContainerStarted","Data":"f4ad9674ddc1aae6a32d5a4b92500ecaaf4ad827d90505b5117105342bd35d5f"} Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.072497 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dwxcs" event={"ID":"9f7cb04a-b39d-4777-b8d5-8c0741134433","Type":"ContainerStarted","Data":"fde2949010a8d0d23dcb32f915978bb6e6c2fcf35d0767b38bd7c75103356b02"} Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.089381 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mv7fj" event={"ID":"54c29b9f-4240-4edd-98aa-cd053a66000e","Type":"ContainerStarted","Data":"346668140e19606fe9c1465f475abdfe47fcfb915adc4c9fd567f42bfbf0b338"} Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.089423 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mv7fj" event={"ID":"54c29b9f-4240-4edd-98aa-cd053a66000e","Type":"ContainerStarted","Data":"5dfa3d3d31feb83952e93012d4dc4f3247e37a4e0ca18997f72660c1a2f015cb"} Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.116802 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 
13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.117096 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7ddc148-0c1a-496f-b58b-c88f30af7344-catalog-content\") pod \"certified-operators-xppnp\" (UID: \"a7ddc148-0c1a-496f-b58b-c88f30af7344\") " pod="openshift-marketplace/certified-operators-xppnp" Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.117225 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd7pl\" (UniqueName: \"kubernetes.io/projected/a7ddc148-0c1a-496f-b58b-c88f30af7344-kube-api-access-zd7pl\") pod \"certified-operators-xppnp\" (UID: \"a7ddc148-0c1a-496f-b58b-c88f30af7344\") " pod="openshift-marketplace/certified-operators-xppnp" Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.117292 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7ddc148-0c1a-496f-b58b-c88f30af7344-utilities\") pod \"certified-operators-xppnp\" (UID: \"a7ddc148-0c1a-496f-b58b-c88f30af7344\") " pod="openshift-marketplace/certified-operators-xppnp" Jan 09 13:32:49 crc kubenswrapper[4919]: E0109 13:32:49.118579 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:49.618562159 +0000 UTC m=+149.166401609 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.119472 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7ddc148-0c1a-496f-b58b-c88f30af7344-catalog-content\") pod \"certified-operators-xppnp\" (UID: \"a7ddc148-0c1a-496f-b58b-c88f30af7344\") " pod="openshift-marketplace/certified-operators-xppnp" Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.120157 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7ddc148-0c1a-496f-b58b-c88f30af7344-utilities\") pod \"certified-operators-xppnp\" (UID: \"a7ddc148-0c1a-496f-b58b-c88f30af7344\") " pod="openshift-marketplace/certified-operators-xppnp" Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.127610 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-6h2sh" event={"ID":"306b1c8e-d6e2-45e3-8444-5150e5a7d346","Type":"ContainerStarted","Data":"b6d5fc3bc0966bb72e4a120b5791083b309a835c6e5c164709c7486fc6818d5c"} Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.127873 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-ttlpm" podStartSLOduration=124.127852304 podStartE2EDuration="2m4.127852304s" podCreationTimestamp="2026-01-09 13:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-01-09 13:32:49.07163027 +0000 UTC m=+148.619469720" watchObservedRunningTime="2026-01-09 13:32:49.127852304 +0000 UTC m=+148.675691754" Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.129430 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-hnfjx" podStartSLOduration=9.129422512 podStartE2EDuration="9.129422512s" podCreationTimestamp="2026-01-09 13:32:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:32:49.128352856 +0000 UTC m=+148.676192306" watchObservedRunningTime="2026-01-09 13:32:49.129422512 +0000 UTC m=+148.677261962" Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.139403 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xvr9v" Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.145386 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5q4vb" event={"ID":"8c3f993d-59c9-444b-9882-cedb07c01c7a","Type":"ContainerStarted","Data":"9726e9eee7703ac50b2c6cc82874afa5de3794a3663471f10d996033d6231e2f"} Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.145468 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5q4vb" Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.154727 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-vplw6" podStartSLOduration=9.154713476 podStartE2EDuration="9.154713476s" podCreationTimestamp="2026-01-09 13:32:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:32:49.15408196 +0000 UTC m=+148.701921410" watchObservedRunningTime="2026-01-09 13:32:49.154713476 +0000 UTC m=+148.702552926" Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.167395 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9gc57" event={"ID":"ba7db551-cd6a-4d50-98a5-2d532f893e7a","Type":"ContainerStarted","Data":"677bc051f9215f4284cb9c27721f32f49c9397d74361ef0f812abfd979e4134c"} Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.168035 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9gc57" Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.201144 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-twpss" event={"ID":"5b27b30e-8a1e-4c12-ad5a-530c640bf23d","Type":"ContainerStarted","Data":"3bd59f12487cefe03d31668c3c9ef59943b0befc507441f895d664cc8f59de30"} Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.203248 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-69xx2"] Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.204339 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-69xx2" Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.206529 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-6h2sh" podStartSLOduration=124.206518803 podStartE2EDuration="2m4.206518803s" podCreationTimestamp="2026-01-09 13:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:32:49.199443101 +0000 UTC m=+148.747282551" watchObservedRunningTime="2026-01-09 13:32:49.206518803 +0000 UTC m=+148.754358253" Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.211047 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-69xx2"] Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.212042 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dsc8" event={"ID":"4ff1b886-d642-44c3-ba90-b1b4cb1379dd","Type":"ContainerStarted","Data":"4d581578a07dd5b81d03de9c96cdeb296c1519c45ed4130a24072bcb35d7361c"} Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.216837 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd7pl\" (UniqueName: \"kubernetes.io/projected/a7ddc148-0c1a-496f-b58b-c88f30af7344-kube-api-access-zd7pl\") pod \"certified-operators-xppnp\" (UID: \"a7ddc148-0c1a-496f-b58b-c88f30af7344\") " pod="openshift-marketplace/certified-operators-xppnp" Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.218789 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.218856 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11f13609-8588-44c4-b426-db71e94e93dd-utilities\") pod \"community-operators-69xx2\" (UID: \"11f13609-8588-44c4-b426-db71e94e93dd\") " pod="openshift-marketplace/community-operators-69xx2" Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.221063 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11f13609-8588-44c4-b426-db71e94e93dd-catalog-content\") pod \"community-operators-69xx2\" (UID: \"11f13609-8588-44c4-b426-db71e94e93dd\") " pod="openshift-marketplace/community-operators-69xx2" Jan 09 13:32:49 crc kubenswrapper[4919]: E0109 13:32:49.221115 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:49.721088986 +0000 UTC m=+149.268928436 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.221389 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqb7t\" (UniqueName: \"kubernetes.io/projected/11f13609-8588-44c4-b426-db71e94e93dd-kube-api-access-pqb7t\") pod \"community-operators-69xx2\" (UID: \"11f13609-8588-44c4-b426-db71e94e93dd\") " pod="openshift-marketplace/community-operators-69xx2" Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.268652 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-h5zhd" event={"ID":"b26729a1-f6f4-44c6-9d39-b5b5e64104bc","Type":"ContainerStarted","Data":"1459e6a4bd50b2a53455a0f3fe958197695fa4c21b2a53cd6af14571eed92e50"} Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.292680 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2ztc2" event={"ID":"b85eabc9-9f0c-45f7-941f-e329f3022b74","Type":"ContainerStarted","Data":"5fdcfc93b28d4d86de4ef2d197690fffa65d81d66ce727bf8b660f25194f0c0d"} Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.292729 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2ztc2" event={"ID":"b85eabc9-9f0c-45f7-941f-e329f3022b74","Type":"ContainerStarted","Data":"5b67287c32f7c96e3c3d9dde9c569983256f8cb7611e0eecdb921eadbe8dbc47"} Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.305937 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xppnp" Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.309312 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mv7fj" podStartSLOduration=124.309302357 podStartE2EDuration="2m4.309302357s" podCreationTimestamp="2026-01-09 13:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:32:49.254937828 +0000 UTC m=+148.802777278" watchObservedRunningTime="2026-01-09 13:32:49.309302357 +0000 UTC m=+148.857141807" Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.327895 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.328048 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-46262" event={"ID":"a42bf9ad-8478-4f7b-93aa-623be932ba47","Type":"ContainerStarted","Data":"575ddde90182c8d5688379dd3e09f708bb3ac9d1089e625afc775a85cea7e48a"} Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.328083 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-46262" event={"ID":"a42bf9ad-8478-4f7b-93aa-623be932ba47","Type":"ContainerStarted","Data":"080a8271a15cfe73b15f5724cfaf3904e09829200abf509c0a072dfeb85b98f2"} Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.328276 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11f13609-8588-44c4-b426-db71e94e93dd-utilities\") pod \"community-operators-69xx2\" (UID: \"11f13609-8588-44c4-b426-db71e94e93dd\") " pod="openshift-marketplace/community-operators-69xx2" Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.328355 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11f13609-8588-44c4-b426-db71e94e93dd-catalog-content\") pod \"community-operators-69xx2\" (UID: \"11f13609-8588-44c4-b426-db71e94e93dd\") " pod="openshift-marketplace/community-operators-69xx2" Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.328470 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqb7t\" (UniqueName: \"kubernetes.io/projected/11f13609-8588-44c4-b426-db71e94e93dd-kube-api-access-pqb7t\") pod \"community-operators-69xx2\" (UID: \"11f13609-8588-44c4-b426-db71e94e93dd\") " pod="openshift-marketplace/community-operators-69xx2" Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.329347 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11f13609-8588-44c4-b426-db71e94e93dd-catalog-content\") pod \"community-operators-69xx2\" (UID: \"11f13609-8588-44c4-b426-db71e94e93dd\") " pod="openshift-marketplace/community-operators-69xx2" Jan 09 13:32:49 crc kubenswrapper[4919]: E0109 13:32:49.329747 4919 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:49.829731803 +0000 UTC m=+149.377571253 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.330264 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11f13609-8588-44c4-b426-db71e94e93dd-utilities\") pod \"community-operators-69xx2\" (UID: \"11f13609-8588-44c4-b426-db71e94e93dd\") " pod="openshift-marketplace/community-operators-69xx2" Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.337526 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-r8h48" event={"ID":"e5975f85-ddfb-4c96-bdc8-da5b3541a769","Type":"ContainerStarted","Data":"99b5082ff4ac0ad5b22b98ca1a22b921b6d935758ddc103103fa118138ab43d7"} Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.347253 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-cb8tr" event={"ID":"8446e162-cf3d-4afd-8dfc-92b5b6d66d64","Type":"ContainerStarted","Data":"a6220adcf9bb60ac30e6902f62a7795bd3344588dcebeff5e377ab461738cea0"} Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.353679 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5q4vb" podStartSLOduration=124.353660194 podStartE2EDuration="2m4.353660194s" podCreationTimestamp="2026-01-09 13:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:32:49.312342461 +0000 UTC m=+148.860181911" watchObservedRunningTime="2026-01-09 13:32:49.353660194 +0000 UTC m=+148.901499644" Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.354302 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-h5zhd" podStartSLOduration=124.354297499 podStartE2EDuration="2m4.354297499s" podCreationTimestamp="2026-01-09 13:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:32:49.351568383 +0000 UTC m=+148.899407843" watchObservedRunningTime="2026-01-09 13:32:49.354297499 +0000 UTC m=+148.902136949" Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.354941 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hqds7" event={"ID":"8f61ba90-6fa6-4eb4-a496-d05c70940365","Type":"ContainerStarted","Data":"2e00332022c878797a96bd9638096f2bbedb79ed1a47744a17e9bbf514a00307"} Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.356504 4919 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-66425 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure 
output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.356538 4919 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-66425" podUID="73f4afd2-691f-4749-b361-d99c9482a35b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.382033 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqb7t\" (UniqueName: \"kubernetes.io/projected/11f13609-8588-44c4-b426-db71e94e93dd-kube-api-access-pqb7t\") pod \"community-operators-69xx2\" (UID: \"11f13609-8588-44c4-b426-db71e94e93dd\") " pod="openshift-marketplace/community-operators-69xx2" Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.401828 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-2ztc2" podStartSLOduration=124.401806082 podStartE2EDuration="2m4.401806082s" podCreationTimestamp="2026-01-09 13:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:32:49.399187858 +0000 UTC m=+148.947027308" watchObservedRunningTime="2026-01-09 13:32:49.401806082 +0000 UTC m=+148.949645532" Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.434706 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:49 crc kubenswrapper[4919]: E0109 13:32:49.474978 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:49.974960747 +0000 UTC m=+149.522800197 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.481118 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9gc57" podStartSLOduration=124.481091546 podStartE2EDuration="2m4.481091546s" podCreationTimestamp="2026-01-09 13:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:32:49.432421785 +0000 UTC m=+148.980261235" watchObservedRunningTime="2026-01-09 13:32:49.481091546 +0000 UTC m=+149.028930996" Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.495127 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-twpss" podStartSLOduration=124.495064935 podStartE2EDuration="2m4.495064935s" podCreationTimestamp="2026-01-09 13:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:32:49.482780627 +0000 UTC m=+149.030620077" watchObservedRunningTime="2026-01-09 13:32:49.495064935 +0000 UTC m=+149.042904375" Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.517134 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dsc8" podStartSLOduration=124.51710725 podStartE2EDuration="2m4.51710725s" podCreationTimestamp="2026-01-09 13:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:32:49.51092483 +0000 UTC m=+149.058764270" watchObservedRunningTime="2026-01-09 13:32:49.51710725 +0000 UTC m=+149.064946700" Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.538589 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 13:32:49 crc kubenswrapper[4919]: E0109 13:32:49.539021 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:50.038997591 +0000 UTC m=+149.586837041 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.565874 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-46262" podStartSLOduration=124.565855863 podStartE2EDuration="2m4.565855863s" podCreationTimestamp="2026-01-09 13:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:32:49.565007352 +0000 UTC m=+149.112846802" watchObservedRunningTime="2026-01-09 13:32:49.565855863 +0000 UTC m=+149.113695313" Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.569274 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-69xx2" Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.599908 4919 patch_prober.go:28] interesting pod/router-default-5444994796-jx754 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 09 13:32:49 crc kubenswrapper[4919]: [-]has-synced failed: reason withheld Jan 09 13:32:49 crc kubenswrapper[4919]: [+]process-running ok Jan 09 13:32:49 crc kubenswrapper[4919]: healthz check failed Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.600138 4919 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jx754" podUID="0ea35295-83c1-498b-b190-7dad56fe323b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.607269 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hqds7" podStartSLOduration=124.607254838 podStartE2EDuration="2m4.607254838s" podCreationTimestamp="2026-01-09 13:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:32:49.605885234 +0000 UTC m=+149.153724704" watchObservedRunningTime="2026-01-09 13:32:49.607254838 +0000 UTC m=+149.155094288" Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.641124 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:49 crc kubenswrapper[4919]: E0109 13:32:49.641634 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:50.141620012 +0000 UTC m=+149.689459462 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.647335 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-ttlpm" Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.671635 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5q4vb" Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.676332 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-r8h48" podStartSLOduration=124.676307223 podStartE2EDuration="2m4.676307223s" podCreationTimestamp="2026-01-09 13:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:32:49.666797953 +0000 UTC m=+149.214637403" watchObservedRunningTime="2026-01-09 13:32:49.676307223 +0000 UTC m=+149.224146673" Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.690734 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tf6wk"] Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.728476 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-cb8tr" podStartSLOduration=124.728416008 podStartE2EDuration="2m4.728416008s" podCreationTimestamp="2026-01-09 13:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:32:49.727633829 +0000 UTC m=+149.275473309" watchObservedRunningTime="2026-01-09 13:32:49.728416008 +0000 UTC m=+149.276255448" Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.743661 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 13:32:49 crc kubenswrapper[4919]: E0109 13:32:49.744274 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:50.244256742 +0000 UTC m=+149.792096192 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.823283 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xvr9v"] Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.847528 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:49 crc kubenswrapper[4919]: E0109 13:32:49.847943 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:50.347929028 +0000 UTC m=+149.895768468 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.949172 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 13:32:49 crc kubenswrapper[4919]: E0109 13:32:49.949886 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:50.449869592 +0000 UTC m=+149.997709042 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:49 crc kubenswrapper[4919]: I0109 13:32:49.963945 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xppnp"] Jan 09 13:32:50 crc kubenswrapper[4919]: I0109 13:32:50.003801 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dwxcs" Jan 09 13:32:50 crc kubenswrapper[4919]: I0109 13:32:50.064241 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:50 crc kubenswrapper[4919]: E0109 13:32:50.064541 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:50.564528264 +0000 UTC m=+150.112367714 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:50 crc kubenswrapper[4919]: I0109 13:32:50.167055 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 13:32:50 crc kubenswrapper[4919]: E0109 13:32:50.167263 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:50.667233227 +0000 UTC m=+150.215072677 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:50 crc kubenswrapper[4919]: I0109 13:32:50.167712 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:50 crc kubenswrapper[4919]: E0109 13:32:50.168022 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:50.668007736 +0000 UTC m=+150.215847186 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:50 crc kubenswrapper[4919]: I0109 13:32:50.218809 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-69xx2"] Jan 09 13:32:50 crc kubenswrapper[4919]: I0109 13:32:50.268392 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 13:32:50 crc kubenswrapper[4919]: E0109 13:32:50.269541 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:50.769511989 +0000 UTC m=+150.317351439 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:50 crc kubenswrapper[4919]: I0109 13:32:50.269768 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:50 crc kubenswrapper[4919]: E0109 13:32:50.270183 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:50.770174495 +0000 UTC m=+150.318013945 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:50 crc kubenswrapper[4919]: I0109 13:32:50.371817 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 13:32:50 crc kubenswrapper[4919]: E0109 13:32:50.372145 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:50.872128389 +0000 UTC m=+150.419967839 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:50 crc kubenswrapper[4919]: I0109 13:32:50.372236 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:50 crc kubenswrapper[4919]: E0109 13:32:50.372525 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:50.872518588 +0000 UTC m=+150.420358038 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:50 crc kubenswrapper[4919]: I0109 13:32:50.377504 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-69xx2" event={"ID":"11f13609-8588-44c4-b426-db71e94e93dd","Type":"ContainerStarted","Data":"a4d3d8fb93de41510657ba41e306ff1f3e9c2648e5bb666cb9a9720f586d39ec"} Jan 09 13:32:50 crc kubenswrapper[4919]: I0109 13:32:50.382314 4919 generic.go:334] "Generic (PLEG): container finished" podID="691c6d86-b150-4576-872d-004862dcbd22" containerID="44becb4d954ccf4f665c325cc948283db62c12647e6e12814d994579541fe866" exitCode=0 Jan 09 13:32:50 crc kubenswrapper[4919]: I0109 13:32:50.383898 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xvr9v" event={"ID":"691c6d86-b150-4576-872d-004862dcbd22","Type":"ContainerDied","Data":"44becb4d954ccf4f665c325cc948283db62c12647e6e12814d994579541fe866"} Jan 09 13:32:50 crc kubenswrapper[4919]: I0109 13:32:50.386157 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xvr9v" event={"ID":"691c6d86-b150-4576-872d-004862dcbd22","Type":"ContainerStarted","Data":"68df81d59326066e2f6879ebe673d832e7c6eb2834f4160867b87ecdc5973c27"} Jan 09 13:32:50 crc kubenswrapper[4919]: I0109 13:32:50.389007 4919 generic.go:334] "Generic (PLEG): container finished" podID="a7ddc148-0c1a-496f-b58b-c88f30af7344" containerID="3c8b486232c355c0cfbdaea48ccbacb9498cfb7baf0f733a13f25f85ecd1e6f3" exitCode=0 Jan 09 13:32:50 crc kubenswrapper[4919]: I0109 13:32:50.389097 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xppnp" event={"ID":"a7ddc148-0c1a-496f-b58b-c88f30af7344","Type":"ContainerDied","Data":"3c8b486232c355c0cfbdaea48ccbacb9498cfb7baf0f733a13f25f85ecd1e6f3"} Jan 09 13:32:50 crc 
kubenswrapper[4919]: I0109 13:32:50.389138 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xppnp" event={"ID":"a7ddc148-0c1a-496f-b58b-c88f30af7344","Type":"ContainerStarted","Data":"c6abf321087e9923404c9b0e1c0b27621a378ef1f35c204a159ed07579c5bc6c"} Jan 09 13:32:50 crc kubenswrapper[4919]: I0109 13:32:50.390316 4919 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 09 13:32:50 crc kubenswrapper[4919]: I0109 13:32:50.394069 4919 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 09 13:32:50 crc kubenswrapper[4919]: I0109 13:32:50.404334 4919 generic.go:334] "Generic (PLEG): container finished" podID="607f4472-6658-48ef-ba52-4b6b097eaa2e" containerID="24a4cc1664f94dac46c1fdff979b2d16a4d15968cd735a7bd07c70d5deac7ca4" exitCode=0 Jan 09 13:32:50 crc kubenswrapper[4919]: I0109 13:32:50.404457 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29466090-q6cvw" event={"ID":"607f4472-6658-48ef-ba52-4b6b097eaa2e","Type":"ContainerDied","Data":"24a4cc1664f94dac46c1fdff979b2d16a4d15968cd735a7bd07c70d5deac7ca4"} Jan 09 13:32:50 crc kubenswrapper[4919]: I0109 13:32:50.409175 4919 generic.go:334] "Generic (PLEG): container finished" podID="18b90207-0827-4db3-b0ca-e622b58ed504" containerID="cdd367965aaf5eaec588265b2955359992c1848f7c6d6daa152fe5101fbf3980" exitCode=0 Jan 09 13:32:50 crc kubenswrapper[4919]: I0109 13:32:50.409282 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tf6wk" event={"ID":"18b90207-0827-4db3-b0ca-e622b58ed504","Type":"ContainerDied","Data":"cdd367965aaf5eaec588265b2955359992c1848f7c6d6daa152fe5101fbf3980"} Jan 09 13:32:50 crc kubenswrapper[4919]: I0109 13:32:50.409309 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tf6wk" event={"ID":"18b90207-0827-4db3-b0ca-e622b58ed504","Type":"ContainerStarted","Data":"0081ef18fd339fc3467d7a8da728dd0d69676520a0646cb80dbf43293864d1dc"} Jan 09 13:32:50 crc kubenswrapper[4919]: I0109 13:32:50.413370 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xwk7t" event={"ID":"1ab58f80-d33a-4525-8c70-916d566b2521","Type":"ContainerStarted","Data":"5328ab3138be731b32fdb6ac63f57b253da6849c822b0b5c66837131d16d5b1e"} Jan 09 13:32:50 crc kubenswrapper[4919]: I0109 13:32:50.427604 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-66425" Jan 09 13:32:50 crc kubenswrapper[4919]: I0109 13:32:50.473783 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 13:32:50 crc kubenswrapper[4919]: E0109 13:32:50.473969 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 13:32:50.97394474 +0000 UTC m=+150.521784190 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:50 crc kubenswrapper[4919]: I0109 13:32:50.483112 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:50 crc kubenswrapper[4919]: I0109 13:32:50.493160 4919 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-09T13:32:50.394091512Z","Handler":null,"Name":""} Jan 09 13:32:50 crc kubenswrapper[4919]: E0109 13:32:50.500095 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 13:32:51.000076984 +0000 UTC m=+150.547916434 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttgps" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 13:32:50 crc kubenswrapper[4919]: I0109 13:32:50.511727 4919 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 09 13:32:50 crc kubenswrapper[4919]: I0109 13:32:50.511772 4919 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 09 13:32:50 crc kubenswrapper[4919]: I0109 13:32:50.585226 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 13:32:50 crc kubenswrapper[4919]: I0109 13:32:50.591580 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 09 13:32:50 crc kubenswrapper[4919]: I0109 13:32:50.607434 4919 patch_prober.go:28] interesting pod/router-default-5444994796-jx754 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 09 13:32:50 crc kubenswrapper[4919]: [-]has-synced failed: reason withheld Jan 09 13:32:50 crc kubenswrapper[4919]: [+]process-running ok Jan 09 13:32:50 crc kubenswrapper[4919]: healthz check failed Jan 09 13:32:50 crc kubenswrapper[4919]: I0109 13:32:50.607498 4919 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jx754" podUID="0ea35295-83c1-498b-b190-7dad56fe323b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 09 13:32:50 crc kubenswrapper[4919]: I0109 13:32:50.686354 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:50 crc kubenswrapper[4919]: I0109 13:32:50.689014 4919 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 09 13:32:50 crc kubenswrapper[4919]: I0109 13:32:50.689053 4919 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:50 crc kubenswrapper[4919]: I0109 13:32:50.724240 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttgps\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:50 crc kubenswrapper[4919]: I0109 13:32:50.766413 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 09 13:32:50 crc kubenswrapper[4919]: I0109 13:32:50.767231 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bj7bg"] Jan 09 13:32:50 crc kubenswrapper[4919]: I0109 13:32:50.771499 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bj7bg" Jan 09 13:32:50 crc kubenswrapper[4919]: I0109 13:32:50.780061 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 09 13:32:50 crc kubenswrapper[4919]: I0109 13:32:50.783310 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bj7bg"] Jan 09 13:32:50 crc kubenswrapper[4919]: I0109 13:32:50.889759 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bdb482c-0d44-43b3-b74f-d0ba01a861b0-utilities\") pod \"redhat-marketplace-bj7bg\" (UID: \"3bdb482c-0d44-43b3-b74f-d0ba01a861b0\") " pod="openshift-marketplace/redhat-marketplace-bj7bg" Jan 09 13:32:50 crc kubenswrapper[4919]: I0109 13:32:50.889802 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c827\" (UniqueName: \"kubernetes.io/projected/3bdb482c-0d44-43b3-b74f-d0ba01a861b0-kube-api-access-8c827\") pod \"redhat-marketplace-bj7bg\" (UID: \"3bdb482c-0d44-43b3-b74f-d0ba01a861b0\") " pod="openshift-marketplace/redhat-marketplace-bj7bg" Jan 09 13:32:50 crc kubenswrapper[4919]: I0109 13:32:50.889842 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bdb482c-0d44-43b3-b74f-d0ba01a861b0-catalog-content\") pod \"redhat-marketplace-bj7bg\" (UID: \"3bdb482c-0d44-43b3-b74f-d0ba01a861b0\") " pod="openshift-marketplace/redhat-marketplace-bj7bg" Jan 09 13:32:50 crc kubenswrapper[4919]: I0109 13:32:50.958596 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:50 crc kubenswrapper[4919]: I0109 13:32:50.991411 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bdb482c-0d44-43b3-b74f-d0ba01a861b0-catalog-content\") pod \"redhat-marketplace-bj7bg\" (UID: \"3bdb482c-0d44-43b3-b74f-d0ba01a861b0\") " pod="openshift-marketplace/redhat-marketplace-bj7bg" Jan 09 13:32:50 crc kubenswrapper[4919]: I0109 13:32:50.991588 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bdb482c-0d44-43b3-b74f-d0ba01a861b0-utilities\") pod \"redhat-marketplace-bj7bg\" (UID: \"3bdb482c-0d44-43b3-b74f-d0ba01a861b0\") " pod="openshift-marketplace/redhat-marketplace-bj7bg" Jan 09 13:32:50 crc kubenswrapper[4919]: I0109 13:32:50.991625 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8c827\" (UniqueName: \"kubernetes.io/projected/3bdb482c-0d44-43b3-b74f-d0ba01a861b0-kube-api-access-8c827\") pod \"redhat-marketplace-bj7bg\" (UID: \"3bdb482c-0d44-43b3-b74f-d0ba01a861b0\") " pod="openshift-marketplace/redhat-marketplace-bj7bg" Jan 09 13:32:50 crc kubenswrapper[4919]: I0109 13:32:50.993109 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bdb482c-0d44-43b3-b74f-d0ba01a861b0-catalog-content\") pod \"redhat-marketplace-bj7bg\" (UID: \"3bdb482c-0d44-43b3-b74f-d0ba01a861b0\") " pod="openshift-marketplace/redhat-marketplace-bj7bg" Jan 09 13:32:50 crc kubenswrapper[4919]: I0109 13:32:50.993433 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bdb482c-0d44-43b3-b74f-d0ba01a861b0-utilities\") pod \"redhat-marketplace-bj7bg\" (UID: \"3bdb482c-0d44-43b3-b74f-d0ba01a861b0\") " pod="openshift-marketplace/redhat-marketplace-bj7bg" Jan 09 13:32:51 crc kubenswrapper[4919]: I0109 13:32:51.024961 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8c827\" (UniqueName: \"kubernetes.io/projected/3bdb482c-0d44-43b3-b74f-d0ba01a861b0-kube-api-access-8c827\") pod \"redhat-marketplace-bj7bg\" (UID: \"3bdb482c-0d44-43b3-b74f-d0ba01a861b0\") " pod="openshift-marketplace/redhat-marketplace-bj7bg" Jan 09 13:32:51 crc kubenswrapper[4919]: I0109 13:32:51.141946 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bj7bg" Jan 09 13:32:51 crc kubenswrapper[4919]: I0109 13:32:51.161338 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dg7pw"] Jan 09 13:32:51 crc kubenswrapper[4919]: I0109 13:32:51.162788 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dg7pw" Jan 09 13:32:51 crc kubenswrapper[4919]: I0109 13:32:51.175257 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dg7pw"] Jan 09 13:32:51 crc kubenswrapper[4919]: I0109 13:32:51.218872 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ttgps"] Jan 09 13:32:51 crc kubenswrapper[4919]: W0109 13:32:51.228533 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd283d70b_0dbe_4059_aa3a_f05d029cb3ab.slice/crio-f42c2d8f41ebbc1931edb59d152eea7bc01950371a0654dc7544e54c403c3463 WatchSource:0}: Error finding container f42c2d8f41ebbc1931edb59d152eea7bc01950371a0654dc7544e54c403c3463: Status 404 returned error can't find the container with id f42c2d8f41ebbc1931edb59d152eea7bc01950371a0654dc7544e54c403c3463 Jan 09 13:32:51 crc kubenswrapper[4919]: I0109 13:32:51.246757 4919 patch_prober.go:28] interesting pod/machine-config-daemon-9m5lv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 13:32:51 crc kubenswrapper[4919]: I0109 13:32:51.246843 4919 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 13:32:51 crc kubenswrapper[4919]: I0109 13:32:51.302090 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef0b4efa-7cc4-48d3-be0e-7406620f6a84-utilities\") pod \"redhat-marketplace-dg7pw\" (UID: \"ef0b4efa-7cc4-48d3-be0e-7406620f6a84\") " pod="openshift-marketplace/redhat-marketplace-dg7pw" Jan 09 13:32:51 crc kubenswrapper[4919]: I0109 13:32:51.302173 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef0b4efa-7cc4-48d3-be0e-7406620f6a84-catalog-content\") pod \"redhat-marketplace-dg7pw\" (UID: \"ef0b4efa-7cc4-48d3-be0e-7406620f6a84\") " pod="openshift-marketplace/redhat-marketplace-dg7pw" Jan 09 13:32:51 crc kubenswrapper[4919]: I0109 13:32:51.302191 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsbdz\" (UniqueName: \"kubernetes.io/projected/ef0b4efa-7cc4-48d3-be0e-7406620f6a84-kube-api-access-xsbdz\") pod \"redhat-marketplace-dg7pw\" (UID: \"ef0b4efa-7cc4-48d3-be0e-7406620f6a84\") " pod="openshift-marketplace/redhat-marketplace-dg7pw" Jan 09 13:32:51 crc kubenswrapper[4919]: I0109 13:32:51.398006 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bj7bg"] Jan 09 13:32:51 crc kubenswrapper[4919]: I0109 13:32:51.403246 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef0b4efa-7cc4-48d3-be0e-7406620f6a84-utilities\") pod \"redhat-marketplace-dg7pw\" (UID: \"ef0b4efa-7cc4-48d3-be0e-7406620f6a84\") " pod="openshift-marketplace/redhat-marketplace-dg7pw" Jan 09 13:32:51 crc 
kubenswrapper[4919]: I0109 13:32:51.403321 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef0b4efa-7cc4-48d3-be0e-7406620f6a84-catalog-content\") pod \"redhat-marketplace-dg7pw\" (UID: \"ef0b4efa-7cc4-48d3-be0e-7406620f6a84\") " pod="openshift-marketplace/redhat-marketplace-dg7pw" Jan 09 13:32:51 crc kubenswrapper[4919]: I0109 13:32:51.403344 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsbdz\" (UniqueName: \"kubernetes.io/projected/ef0b4efa-7cc4-48d3-be0e-7406620f6a84-kube-api-access-xsbdz\") pod \"redhat-marketplace-dg7pw\" (UID: \"ef0b4efa-7cc4-48d3-be0e-7406620f6a84\") " pod="openshift-marketplace/redhat-marketplace-dg7pw" Jan 09 13:32:51 crc kubenswrapper[4919]: I0109 13:32:51.404363 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef0b4efa-7cc4-48d3-be0e-7406620f6a84-utilities\") pod \"redhat-marketplace-dg7pw\" (UID: \"ef0b4efa-7cc4-48d3-be0e-7406620f6a84\") " pod="openshift-marketplace/redhat-marketplace-dg7pw" Jan 09 13:32:51 crc kubenswrapper[4919]: I0109 13:32:51.404414 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef0b4efa-7cc4-48d3-be0e-7406620f6a84-catalog-content\") pod \"redhat-marketplace-dg7pw\" (UID: \"ef0b4efa-7cc4-48d3-be0e-7406620f6a84\") " pod="openshift-marketplace/redhat-marketplace-dg7pw" Jan 09 13:32:51 crc kubenswrapper[4919]: W0109 13:32:51.407407 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3bdb482c_0d44_43b3_b74f_d0ba01a861b0.slice/crio-8b5e9b16a497b0be5aabb5f4fb0285fa1d3db691dde692d4733152f7292fe2c9 WatchSource:0}: Error finding container 8b5e9b16a497b0be5aabb5f4fb0285fa1d3db691dde692d4733152f7292fe2c9: Status 404 returned error can't find the container with id 8b5e9b16a497b0be5aabb5f4fb0285fa1d3db691dde692d4733152f7292fe2c9 Jan 09 13:32:51 crc kubenswrapper[4919]: I0109 13:32:51.423040 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsbdz\" (UniqueName: \"kubernetes.io/projected/ef0b4efa-7cc4-48d3-be0e-7406620f6a84-kube-api-access-xsbdz\") pod \"redhat-marketplace-dg7pw\" (UID: \"ef0b4efa-7cc4-48d3-be0e-7406620f6a84\") " pod="openshift-marketplace/redhat-marketplace-dg7pw" Jan 09 13:32:51 crc kubenswrapper[4919]: I0109 13:32:51.434001 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xwk7t" event={"ID":"1ab58f80-d33a-4525-8c70-916d566b2521","Type":"ContainerStarted","Data":"bb87696965cb84b08d229cf75fcc7627424453f7177659f190d57d1b5f67cfee"} Jan 09 13:32:51 crc kubenswrapper[4919]: I0109 13:32:51.434064 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xwk7t" event={"ID":"1ab58f80-d33a-4525-8c70-916d566b2521","Type":"ContainerStarted","Data":"b9a84de1dd111fb07b50b950a171d502477d009d2b2353a06608603921452d59"} Jan 09 13:32:51 crc kubenswrapper[4919]: I0109 13:32:51.443024 4919 generic.go:334] "Generic (PLEG): container finished" podID="11f13609-8588-44c4-b426-db71e94e93dd" containerID="c15cf2ae5c5226bce4fe4ede16a8d0e8f89e512de6949bdc1e883ca4a3a02113" exitCode=0 Jan 09 13:32:51 crc kubenswrapper[4919]: I0109 13:32:51.443379 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-69xx2" 
event={"ID":"11f13609-8588-44c4-b426-db71e94e93dd","Type":"ContainerDied","Data":"c15cf2ae5c5226bce4fe4ede16a8d0e8f89e512de6949bdc1e883ca4a3a02113"} Jan 09 13:32:51 crc kubenswrapper[4919]: I0109 13:32:51.445967 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bj7bg" event={"ID":"3bdb482c-0d44-43b3-b74f-d0ba01a861b0","Type":"ContainerStarted","Data":"8b5e9b16a497b0be5aabb5f4fb0285fa1d3db691dde692d4733152f7292fe2c9"} Jan 09 13:32:51 crc kubenswrapper[4919]: I0109 13:32:51.455371 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" event={"ID":"d283d70b-0dbe-4059-aa3a-f05d029cb3ab","Type":"ContainerStarted","Data":"f2c73087f3dc12c4832c1da12fd7fe5274b5680013f5bebc30dd06b8c762cc9d"} Jan 09 13:32:51 crc kubenswrapper[4919]: I0109 13:32:51.455417 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" event={"ID":"d283d70b-0dbe-4059-aa3a-f05d029cb3ab","Type":"ContainerStarted","Data":"f42c2d8f41ebbc1931edb59d152eea7bc01950371a0654dc7544e54c403c3463"} Jan 09 13:32:51 crc kubenswrapper[4919]: I0109 13:32:51.455440 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:32:51 crc kubenswrapper[4919]: I0109 13:32:51.499149 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-xwk7t" podStartSLOduration=11.499129928 podStartE2EDuration="11.499129928s" podCreationTimestamp="2026-01-09 13:32:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:32:51.456302999 +0000 UTC m=+151.004142449" watchObservedRunningTime="2026-01-09 13:32:51.499129928 +0000 UTC m=+151.046969378" Jan 09 13:32:51 crc kubenswrapper[4919]: I0109 13:32:51.535567 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dg7pw" Jan 09 13:32:51 crc kubenswrapper[4919]: I0109 13:32:51.545727 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" podStartSLOduration=126.545681848 podStartE2EDuration="2m6.545681848s" podCreationTimestamp="2026-01-09 13:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:32:51.532338764 +0000 UTC m=+151.080178214" watchObservedRunningTime="2026-01-09 13:32:51.545681848 +0000 UTC m=+151.093521308" Jan 09 13:32:51 crc kubenswrapper[4919]: I0109 13:32:51.613309 4919 patch_prober.go:28] interesting pod/router-default-5444994796-jx754 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 09 13:32:51 crc kubenswrapper[4919]: [-]has-synced failed: reason withheld Jan 09 13:32:51 crc kubenswrapper[4919]: [+]process-running ok Jan 09 13:32:51 crc kubenswrapper[4919]: healthz check failed Jan 09 13:32:51 crc kubenswrapper[4919]: I0109 13:32:51.613378 4919 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jx754" podUID="0ea35295-83c1-498b-b190-7dad56fe323b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 09 13:32:51 crc kubenswrapper[4919]: I0109 13:32:51.697878 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29466090-q6cvw" Jan 09 13:32:51 crc kubenswrapper[4919]: I0109 13:32:51.750938 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qx45q"] Jan 09 13:32:51 crc kubenswrapper[4919]: E0109 13:32:51.751768 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="607f4472-6658-48ef-ba52-4b6b097eaa2e" containerName="collect-profiles" Jan 09 13:32:51 crc kubenswrapper[4919]: I0109 13:32:51.751813 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="607f4472-6658-48ef-ba52-4b6b097eaa2e" containerName="collect-profiles" Jan 09 13:32:51 crc kubenswrapper[4919]: I0109 13:32:51.751980 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="607f4472-6658-48ef-ba52-4b6b097eaa2e" containerName="collect-profiles" Jan 09 13:32:51 crc kubenswrapper[4919]: I0109 13:32:51.752870 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qx45q" Jan 09 13:32:51 crc kubenswrapper[4919]: I0109 13:32:51.757730 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 09 13:32:51 crc kubenswrapper[4919]: I0109 13:32:51.763716 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qx45q"] Jan 09 13:32:51 crc kubenswrapper[4919]: I0109 13:32:51.815750 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbknj\" (UniqueName: \"kubernetes.io/projected/607f4472-6658-48ef-ba52-4b6b097eaa2e-kube-api-access-xbknj\") pod \"607f4472-6658-48ef-ba52-4b6b097eaa2e\" (UID: \"607f4472-6658-48ef-ba52-4b6b097eaa2e\") " Jan 09 13:32:51 crc kubenswrapper[4919]: I0109 13:32:51.815856 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/607f4472-6658-48ef-ba52-4b6b097eaa2e-config-volume\") pod \"607f4472-6658-48ef-ba52-4b6b097eaa2e\" (UID: \"607f4472-6658-48ef-ba52-4b6b097eaa2e\") " Jan 09 13:32:51 crc kubenswrapper[4919]: I0109 13:32:51.816013 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/607f4472-6658-48ef-ba52-4b6b097eaa2e-secret-volume\") pod \"607f4472-6658-48ef-ba52-4b6b097eaa2e\" (UID: \"607f4472-6658-48ef-ba52-4b6b097eaa2e\") " Jan 09 13:32:51 crc kubenswrapper[4919]: I0109 13:32:51.816283 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ce56338-b322-46a4-b02c-2ae2b1bb5149-utilities\") pod \"redhat-operators-qx45q\" (UID: \"1ce56338-b322-46a4-b02c-2ae2b1bb5149\") " pod="openshift-marketplace/redhat-operators-qx45q" Jan 09 13:32:51 crc kubenswrapper[4919]: I0109 13:32:51.816308 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ce56338-b322-46a4-b02c-2ae2b1bb5149-catalog-content\") pod \"redhat-operators-qx45q\" (UID: \"1ce56338-b322-46a4-b02c-2ae2b1bb5149\") " pod="openshift-marketplace/redhat-operators-qx45q" Jan 09 13:32:51 crc kubenswrapper[4919]: I0109 13:32:51.816525 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfzzn\" (UniqueName: \"kubernetes.io/projected/1ce56338-b322-46a4-b02c-2ae2b1bb5149-kube-api-access-lfzzn\") pod \"redhat-operators-qx45q\" (UID: \"1ce56338-b322-46a4-b02c-2ae2b1bb5149\") " pod="openshift-marketplace/redhat-operators-qx45q" Jan 09 13:32:51 crc kubenswrapper[4919]: I0109 13:32:51.820195 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/607f4472-6658-48ef-ba52-4b6b097eaa2e-config-volume" (OuterVolumeSpecName: "config-volume") pod "607f4472-6658-48ef-ba52-4b6b097eaa2e" (UID: "607f4472-6658-48ef-ba52-4b6b097eaa2e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:32:51 crc kubenswrapper[4919]: I0109 13:32:51.821660 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/607f4472-6658-48ef-ba52-4b6b097eaa2e-kube-api-access-xbknj" (OuterVolumeSpecName: "kube-api-access-xbknj") pod "607f4472-6658-48ef-ba52-4b6b097eaa2e" (UID: "607f4472-6658-48ef-ba52-4b6b097eaa2e"). 
InnerVolumeSpecName "kube-api-access-xbknj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:32:51 crc kubenswrapper[4919]: I0109 13:32:51.828559 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/607f4472-6658-48ef-ba52-4b6b097eaa2e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "607f4472-6658-48ef-ba52-4b6b097eaa2e" (UID: "607f4472-6658-48ef-ba52-4b6b097eaa2e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:32:51 crc kubenswrapper[4919]: I0109 13:32:51.860000 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dg7pw"] Jan 09 13:32:51 crc kubenswrapper[4919]: W0109 13:32:51.896073 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef0b4efa_7cc4_48d3_be0e_7406620f6a84.slice/crio-d1d29207addb0e82cf9e707c93398885a05bc47f25b98de20dc836cfd42c34ab WatchSource:0}: Error finding container d1d29207addb0e82cf9e707c93398885a05bc47f25b98de20dc836cfd42c34ab: Status 404 returned error can't find the container with id d1d29207addb0e82cf9e707c93398885a05bc47f25b98de20dc836cfd42c34ab Jan 09 13:32:51 crc kubenswrapper[4919]: I0109 13:32:51.918599 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfzzn\" (UniqueName: \"kubernetes.io/projected/1ce56338-b322-46a4-b02c-2ae2b1bb5149-kube-api-access-lfzzn\") pod \"redhat-operators-qx45q\" (UID: \"1ce56338-b322-46a4-b02c-2ae2b1bb5149\") " pod="openshift-marketplace/redhat-operators-qx45q" Jan 09 13:32:51 crc kubenswrapper[4919]: I0109 13:32:51.919422 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ce56338-b322-46a4-b02c-2ae2b1bb5149-utilities\") pod \"redhat-operators-qx45q\" (UID: \"1ce56338-b322-46a4-b02c-2ae2b1bb5149\") " pod="openshift-marketplace/redhat-operators-qx45q" Jan 09 13:32:51 crc kubenswrapper[4919]: I0109 13:32:51.920370 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ce56338-b322-46a4-b02c-2ae2b1bb5149-catalog-content\") pod \"redhat-operators-qx45q\" (UID: \"1ce56338-b322-46a4-b02c-2ae2b1bb5149\") " pod="openshift-marketplace/redhat-operators-qx45q" Jan 09 13:32:51 crc kubenswrapper[4919]: I0109 13:32:51.920683 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ce56338-b322-46a4-b02c-2ae2b1bb5149-utilities\") pod \"redhat-operators-qx45q\" (UID: \"1ce56338-b322-46a4-b02c-2ae2b1bb5149\") " pod="openshift-marketplace/redhat-operators-qx45q" Jan 09 13:32:51 crc kubenswrapper[4919]: I0109 13:32:51.920733 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ce56338-b322-46a4-b02c-2ae2b1bb5149-catalog-content\") pod \"redhat-operators-qx45q\" (UID: \"1ce56338-b322-46a4-b02c-2ae2b1bb5149\") " pod="openshift-marketplace/redhat-operators-qx45q" Jan 09 13:32:51 crc kubenswrapper[4919]: I0109 13:32:51.920883 4919 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/607f4472-6658-48ef-ba52-4b6b097eaa2e-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 09 13:32:51 crc kubenswrapper[4919]: I0109 13:32:51.920919 4919 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-xbknj\" (UniqueName: \"kubernetes.io/projected/607f4472-6658-48ef-ba52-4b6b097eaa2e-kube-api-access-xbknj\") on node \"crc\" DevicePath \"\"" Jan 09 13:32:51 crc kubenswrapper[4919]: I0109 13:32:51.920930 4919 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/607f4472-6658-48ef-ba52-4b6b097eaa2e-config-volume\") on node \"crc\" DevicePath \"\"" Jan 09 13:32:51 crc kubenswrapper[4919]: I0109 13:32:51.940159 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfzzn\" (UniqueName: \"kubernetes.io/projected/1ce56338-b322-46a4-b02c-2ae2b1bb5149-kube-api-access-lfzzn\") pod \"redhat-operators-qx45q\" (UID: \"1ce56338-b322-46a4-b02c-2ae2b1bb5149\") " pod="openshift-marketplace/redhat-operators-qx45q" Jan 09 13:32:52 crc kubenswrapper[4919]: I0109 13:32:52.072683 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dsc8" Jan 09 13:32:52 crc kubenswrapper[4919]: I0109 13:32:52.073427 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dsc8" Jan 09 13:32:52 crc kubenswrapper[4919]: I0109 13:32:52.078239 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-r8h48" Jan 09 13:32:52 crc kubenswrapper[4919]: I0109 13:32:52.078285 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-r8h48" Jan 09 13:32:52 crc kubenswrapper[4919]: I0109 13:32:52.086316 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qx45q" Jan 09 13:32:52 crc kubenswrapper[4919]: I0109 13:32:52.086433 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dsc8" Jan 09 13:32:52 crc kubenswrapper[4919]: I0109 13:32:52.087125 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-r8h48" Jan 09 13:32:52 crc kubenswrapper[4919]: I0109 13:32:52.153919 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-c4dtn"] Jan 09 13:32:52 crc kubenswrapper[4919]: I0109 13:32:52.157672 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-c4dtn" Jan 09 13:32:52 crc kubenswrapper[4919]: I0109 13:32:52.164399 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c4dtn"] Jan 09 13:32:52 crc kubenswrapper[4919]: I0109 13:32:52.232717 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/870947c0-608c-48f9-a0c7-5f81a08255bf-catalog-content\") pod \"redhat-operators-c4dtn\" (UID: \"870947c0-608c-48f9-a0c7-5f81a08255bf\") " pod="openshift-marketplace/redhat-operators-c4dtn" Jan 09 13:32:52 crc kubenswrapper[4919]: I0109 13:32:52.232770 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/870947c0-608c-48f9-a0c7-5f81a08255bf-utilities\") pod \"redhat-operators-c4dtn\" (UID: \"870947c0-608c-48f9-a0c7-5f81a08255bf\") " pod="openshift-marketplace/redhat-operators-c4dtn" Jan 09 13:32:52 crc kubenswrapper[4919]: I0109 13:32:52.233095 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2469q\" (UniqueName: \"kubernetes.io/projected/870947c0-608c-48f9-a0c7-5f81a08255bf-kube-api-access-2469q\") pod \"redhat-operators-c4dtn\" (UID: \"870947c0-608c-48f9-a0c7-5f81a08255bf\") " pod="openshift-marketplace/redhat-operators-c4dtn" Jan 09 13:32:52 crc kubenswrapper[4919]: I0109 13:32:52.334775 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2469q\" (UniqueName: \"kubernetes.io/projected/870947c0-608c-48f9-a0c7-5f81a08255bf-kube-api-access-2469q\") pod \"redhat-operators-c4dtn\" (UID: \"870947c0-608c-48f9-a0c7-5f81a08255bf\") " pod="openshift-marketplace/redhat-operators-c4dtn" Jan 09 13:32:52 crc kubenswrapper[4919]: I0109 13:32:52.334933 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/870947c0-608c-48f9-a0c7-5f81a08255bf-catalog-content\") pod \"redhat-operators-c4dtn\" (UID: \"870947c0-608c-48f9-a0c7-5f81a08255bf\") " pod="openshift-marketplace/redhat-operators-c4dtn" Jan 09 13:32:52 crc kubenswrapper[4919]: I0109 13:32:52.334956 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/870947c0-608c-48f9-a0c7-5f81a08255bf-utilities\") pod \"redhat-operators-c4dtn\" (UID: \"870947c0-608c-48f9-a0c7-5f81a08255bf\") " pod="openshift-marketplace/redhat-operators-c4dtn" Jan 09 13:32:52 crc kubenswrapper[4919]: I0109 13:32:52.335663 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/870947c0-608c-48f9-a0c7-5f81a08255bf-catalog-content\") pod \"redhat-operators-c4dtn\" (UID: \"870947c0-608c-48f9-a0c7-5f81a08255bf\") " pod="openshift-marketplace/redhat-operators-c4dtn" Jan 09 13:32:52 crc kubenswrapper[4919]: I0109 13:32:52.335892 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/870947c0-608c-48f9-a0c7-5f81a08255bf-utilities\") pod \"redhat-operators-c4dtn\" (UID: \"870947c0-608c-48f9-a0c7-5f81a08255bf\") " pod="openshift-marketplace/redhat-operators-c4dtn" Jan 09 13:32:52 crc kubenswrapper[4919]: I0109 13:32:52.352114 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-2469q\" (UniqueName: \"kubernetes.io/projected/870947c0-608c-48f9-a0c7-5f81a08255bf-kube-api-access-2469q\") pod \"redhat-operators-c4dtn\" (UID: \"870947c0-608c-48f9-a0c7-5f81a08255bf\") " pod="openshift-marketplace/redhat-operators-c4dtn" Jan 09 13:32:52 crc kubenswrapper[4919]: I0109 13:32:52.383291 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 09 13:32:52 crc kubenswrapper[4919]: I0109 13:32:52.384379 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 09 13:32:52 crc kubenswrapper[4919]: I0109 13:32:52.386943 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 09 13:32:52 crc kubenswrapper[4919]: I0109 13:32:52.394959 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 09 13:32:52 crc kubenswrapper[4919]: I0109 13:32:52.407593 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 09 13:32:52 crc kubenswrapper[4919]: I0109 13:32:52.437539 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0fe87eed-316d-4e57-8514-e717fc98b771-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0fe87eed-316d-4e57-8514-e717fc98b771\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 09 13:32:52 crc kubenswrapper[4919]: I0109 13:32:52.437853 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0fe87eed-316d-4e57-8514-e717fc98b771-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0fe87eed-316d-4e57-8514-e717fc98b771\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 09 13:32:52 crc kubenswrapper[4919]: I0109 13:32:52.482017 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c4dtn" Jan 09 13:32:52 crc kubenswrapper[4919]: I0109 13:32:52.504560 4919 generic.go:334] "Generic (PLEG): container finished" podID="3bdb482c-0d44-43b3-b74f-d0ba01a861b0" containerID="2af8b4fc83afa54c3df14d7350fb5fc00803269ea6194b0b1d3e889612603c63" exitCode=0 Jan 09 13:32:52 crc kubenswrapper[4919]: I0109 13:32:52.504655 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bj7bg" event={"ID":"3bdb482c-0d44-43b3-b74f-d0ba01a861b0","Type":"ContainerDied","Data":"2af8b4fc83afa54c3df14d7350fb5fc00803269ea6194b0b1d3e889612603c63"} Jan 09 13:32:52 crc kubenswrapper[4919]: I0109 13:32:52.520712 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29466090-q6cvw" Jan 09 13:32:52 crc kubenswrapper[4919]: I0109 13:32:52.522605 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29466090-q6cvw" event={"ID":"607f4472-6658-48ef-ba52-4b6b097eaa2e","Type":"ContainerDied","Data":"73aa2e564e4bb0388ae9699406b7b1db6f76998b0f1af61db3198ef01ab7575d"} Jan 09 13:32:52 crc kubenswrapper[4919]: I0109 13:32:52.522936 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73aa2e564e4bb0388ae9699406b7b1db6f76998b0f1af61db3198ef01ab7575d" Jan 09 13:32:52 crc kubenswrapper[4919]: I0109 13:32:52.538407 4919 generic.go:334] "Generic (PLEG): container finished" podID="ef0b4efa-7cc4-48d3-be0e-7406620f6a84" containerID="3037e079b10043cc6ecc46197b74cb1b32c040d14d843da38c176a9380297049" exitCode=0 Jan 09 13:32:52 crc kubenswrapper[4919]: I0109 13:32:52.539720 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dg7pw" event={"ID":"ef0b4efa-7cc4-48d3-be0e-7406620f6a84","Type":"ContainerDied","Data":"3037e079b10043cc6ecc46197b74cb1b32c040d14d843da38c176a9380297049"} Jan 09 13:32:52 crc kubenswrapper[4919]: I0109 13:32:52.545920 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dg7pw" event={"ID":"ef0b4efa-7cc4-48d3-be0e-7406620f6a84","Type":"ContainerStarted","Data":"d1d29207addb0e82cf9e707c93398885a05bc47f25b98de20dc836cfd42c34ab"} Jan 09 13:32:52 crc kubenswrapper[4919]: I0109 13:32:52.540960 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0fe87eed-316d-4e57-8514-e717fc98b771-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0fe87eed-316d-4e57-8514-e717fc98b771\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 09 13:32:52 crc kubenswrapper[4919]: I0109 13:32:52.546262 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0fe87eed-316d-4e57-8514-e717fc98b771-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0fe87eed-316d-4e57-8514-e717fc98b771\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 09 13:32:52 crc kubenswrapper[4919]: I0109 13:32:52.546483 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0fe87eed-316d-4e57-8514-e717fc98b771-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0fe87eed-316d-4e57-8514-e717fc98b771\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 09 13:32:52 crc kubenswrapper[4919]: I0109 13:32:52.560351 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0fe87eed-316d-4e57-8514-e717fc98b771-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0fe87eed-316d-4e57-8514-e717fc98b771\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 09 13:32:52 crc kubenswrapper[4919]: I0109 13:32:52.560614 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dsc8" Jan 09 13:32:52 crc kubenswrapper[4919]: I0109 13:32:52.562562 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-r8h48" Jan 09 13:32:52 crc 
kubenswrapper[4919]: I0109 13:32:52.599009 4919 patch_prober.go:28] interesting pod/router-default-5444994796-jx754 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 09 13:32:52 crc kubenswrapper[4919]: [-]has-synced failed: reason withheld Jan 09 13:32:52 crc kubenswrapper[4919]: [+]process-running ok Jan 09 13:32:52 crc kubenswrapper[4919]: healthz check failed Jan 09 13:32:52 crc kubenswrapper[4919]: I0109 13:32:52.599060 4919 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jx754" podUID="0ea35295-83c1-498b-b190-7dad56fe323b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 09 13:32:52 crc kubenswrapper[4919]: I0109 13:32:52.611576 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qx45q"] Jan 09 13:32:52 crc kubenswrapper[4919]: I0109 13:32:52.648937 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 13:32:52 crc kubenswrapper[4919]: I0109 13:32:52.649153 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 13:32:52 crc kubenswrapper[4919]: I0109 13:32:52.649345 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 13:32:52 crc kubenswrapper[4919]: I0109 13:32:52.657420 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 13:32:52 crc kubenswrapper[4919]: I0109 13:32:52.704377 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 09 13:32:52 crc kubenswrapper[4919]: I0109 13:32:52.751324 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 13:32:52 crc kubenswrapper[4919]: I0109 13:32:52.751401 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 13:32:52 crc kubenswrapper[4919]: I0109 13:32:52.760002 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 13:32:52 crc kubenswrapper[4919]: I0109 13:32:52.762274 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 13:32:52 crc kubenswrapper[4919]: I0109 13:32:52.794766 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 13:32:52 crc kubenswrapper[4919]: I0109 13:32:52.810599 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 13:32:52 crc kubenswrapper[4919]: I0109 13:32:52.869668 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-sjvr2" Jan 09 13:32:52 crc kubenswrapper[4919]: I0109 13:32:52.974150 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 13:32:53 crc kubenswrapper[4919]: I0109 13:32:53.104290 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c4dtn"] Jan 09 13:32:53 crc kubenswrapper[4919]: I0109 13:32:53.227239 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 09 13:32:53 crc kubenswrapper[4919]: W0109 13:32:53.548364 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-51628f639d813e235dd50975dc37538690f5cd628cb4bb5ebfed462de04bf9f9 WatchSource:0}: Error finding container 51628f639d813e235dd50975dc37538690f5cd628cb4bb5ebfed462de04bf9f9: Status 404 returned error can't find the container with id 51628f639d813e235dd50975dc37538690f5cd628cb4bb5ebfed462de04bf9f9 Jan 09 13:32:53 crc kubenswrapper[4919]: W0109 13:32:53.552336 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-c46511d470b13c95855582346c475002838fcbb06b09c1c2a66f5bdb41e9b7dd WatchSource:0}: Error finding container c46511d470b13c95855582346c475002838fcbb06b09c1c2a66f5bdb41e9b7dd: Status 404 returned error can't find the container with id c46511d470b13c95855582346c475002838fcbb06b09c1c2a66f5bdb41e9b7dd Jan 09 13:32:53 crc kubenswrapper[4919]: I0109 13:32:53.583424 4919 generic.go:334] "Generic (PLEG): container finished" podID="870947c0-608c-48f9-a0c7-5f81a08255bf" containerID="202626ff2155701c3aba3c39e84396f7a4f2ccc9c54315dfef778ba0d2c3406b" exitCode=0 Jan 09 13:32:53 crc kubenswrapper[4919]: I0109 13:32:53.583758 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c4dtn" event={"ID":"870947c0-608c-48f9-a0c7-5f81a08255bf","Type":"ContainerDied","Data":"202626ff2155701c3aba3c39e84396f7a4f2ccc9c54315dfef778ba0d2c3406b"} Jan 09 13:32:53 crc kubenswrapper[4919]: I0109 13:32:53.584812 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c4dtn" event={"ID":"870947c0-608c-48f9-a0c7-5f81a08255bf","Type":"ContainerStarted","Data":"8ffe630e2813fdae6115154bd3c9cdb051cfdfb546c89d169fe27ea77d3bd579"} Jan 09 13:32:53 crc kubenswrapper[4919]: I0109 13:32:53.592057 4919 generic.go:334] "Generic (PLEG): container finished" podID="1ce56338-b322-46a4-b02c-2ae2b1bb5149" containerID="0a5dd65feb99b9dee93abebdf65fafaa2eac727ee8729c94986e69272873098f" exitCode=0 Jan 09 13:32:53 crc kubenswrapper[4919]: I0109 13:32:53.592187 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qx45q" event={"ID":"1ce56338-b322-46a4-b02c-2ae2b1bb5149","Type":"ContainerDied","Data":"0a5dd65feb99b9dee93abebdf65fafaa2eac727ee8729c94986e69272873098f"} Jan 09 13:32:53 crc kubenswrapper[4919]: I0109 13:32:53.592251 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qx45q" event={"ID":"1ce56338-b322-46a4-b02c-2ae2b1bb5149","Type":"ContainerStarted","Data":"1eb668111ef4606bc172dff2cc3a4ce919fa19b634efbbb11ded33e9036ab463"} Jan 09 13:32:53 crc kubenswrapper[4919]: I0109 13:32:53.593934 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-jx754" Jan 09 13:32:53 crc kubenswrapper[4919]: I0109 
13:32:53.606836 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-jx754" Jan 09 13:32:53 crc kubenswrapper[4919]: I0109 13:32:53.622955 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"0fe87eed-316d-4e57-8514-e717fc98b771","Type":"ContainerStarted","Data":"211cf79b949ae3846e7e7c9dfff5dd51df81f4bf8d59e82875b82954f929749d"} Jan 09 13:32:53 crc kubenswrapper[4919]: I0109 13:32:53.688930 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-bffts" Jan 09 13:32:53 crc kubenswrapper[4919]: I0109 13:32:53.688989 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-bffts" Jan 09 13:32:53 crc kubenswrapper[4919]: I0109 13:32:53.701908 4919 patch_prober.go:28] interesting pod/console-f9d7485db-bffts container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.39:8443/health\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Jan 09 13:32:53 crc kubenswrapper[4919]: I0109 13:32:53.701972 4919 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-bffts" podUID="58013dad-1347-4da5-8314-495388d1b5c2" containerName="console" probeResult="failure" output="Get \"https://10.217.0.39:8443/health\": dial tcp 10.217.0.39:8443: connect: connection refused" Jan 09 13:32:54 crc kubenswrapper[4919]: I0109 13:32:54.648023 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"4e154ebdb279a6b1b0b8f49a4438307813328aa931aa3949ca446ea77aa6045f"} Jan 09 13:32:54 crc kubenswrapper[4919]: I0109 13:32:54.648457 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"3de6f8916ede31957527c9c858b447c3703eacf53b90339092f45276de9dd69e"} Jan 09 13:32:54 crc kubenswrapper[4919]: I0109 13:32:54.649099 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 13:32:54 crc kubenswrapper[4919]: I0109 13:32:54.666518 4919 generic.go:334] "Generic (PLEG): container finished" podID="0fe87eed-316d-4e57-8514-e717fc98b771" containerID="e6a816529384a14b8dd4d233cc57f3fac31f7ae9195314df5e731c738c809895" exitCode=0 Jan 09 13:32:54 crc kubenswrapper[4919]: I0109 13:32:54.667034 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"0fe87eed-316d-4e57-8514-e717fc98b771","Type":"ContainerDied","Data":"e6a816529384a14b8dd4d233cc57f3fac31f7ae9195314df5e731c738c809895"} Jan 09 13:32:54 crc kubenswrapper[4919]: I0109 13:32:54.677312 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"a3b87fd92c324e0f2810f0394598a8cf10561f21f703b159b1e622423db8bd6f"} Jan 09 13:32:54 crc kubenswrapper[4919]: I0109 13:32:54.677364 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"51628f639d813e235dd50975dc37538690f5cd628cb4bb5ebfed462de04bf9f9"} Jan 09 13:32:54 crc kubenswrapper[4919]: I0109 13:32:54.687473 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"254e25c5185db0e77d9100399c6bc08f6da46aac8623038d6f98e49883a9b53d"} Jan 09 13:32:54 crc kubenswrapper[4919]: I0109 13:32:54.687514 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"c46511d470b13c95855582346c475002838fcbb06b09c1c2a66f5bdb41e9b7dd"} Jan 09 13:32:54 crc kubenswrapper[4919]: I0109 13:32:54.689255 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 09 13:32:54 crc kubenswrapper[4919]: I0109 13:32:54.690562 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 09 13:32:54 crc kubenswrapper[4919]: I0109 13:32:54.694460 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-jx754" Jan 09 13:32:54 crc kubenswrapper[4919]: I0109 13:32:54.698367 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 09 13:32:54 crc kubenswrapper[4919]: I0109 13:32:54.700299 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 09 13:32:54 crc kubenswrapper[4919]: I0109 13:32:54.712914 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 09 13:32:54 crc kubenswrapper[4919]: I0109 13:32:54.813784 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f20c754-53d5-477d-844f-2a62d1f52626-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1f20c754-53d5-477d-844f-2a62d1f52626\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 09 13:32:54 crc kubenswrapper[4919]: I0109 13:32:54.814110 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1f20c754-53d5-477d-844f-2a62d1f52626-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1f20c754-53d5-477d-844f-2a62d1f52626\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 09 13:32:54 crc kubenswrapper[4919]: I0109 13:32:54.916158 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f20c754-53d5-477d-844f-2a62d1f52626-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1f20c754-53d5-477d-844f-2a62d1f52626\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 09 13:32:54 crc kubenswrapper[4919]: I0109 13:32:54.916255 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1f20c754-53d5-477d-844f-2a62d1f52626-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1f20c754-53d5-477d-844f-2a62d1f52626\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 09 13:32:54 crc 
kubenswrapper[4919]: I0109 13:32:54.916339 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1f20c754-53d5-477d-844f-2a62d1f52626-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1f20c754-53d5-477d-844f-2a62d1f52626\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 09 13:32:54 crc kubenswrapper[4919]: I0109 13:32:54.937750 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f20c754-53d5-477d-844f-2a62d1f52626-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1f20c754-53d5-477d-844f-2a62d1f52626\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 09 13:32:55 crc kubenswrapper[4919]: I0109 13:32:55.031184 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 09 13:32:55 crc kubenswrapper[4919]: I0109 13:32:55.477031 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 09 13:32:55 crc kubenswrapper[4919]: I0109 13:32:55.715052 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1f20c754-53d5-477d-844f-2a62d1f52626","Type":"ContainerStarted","Data":"7f0ef5a6343a603af4221b3a2a24e77c85ae2cca3af844e27fdbea034de264f0"} Jan 09 13:32:56 crc kubenswrapper[4919]: I0109 13:32:56.036918 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 09 13:32:56 crc kubenswrapper[4919]: I0109 13:32:56.133898 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0fe87eed-316d-4e57-8514-e717fc98b771-kube-api-access\") pod \"0fe87eed-316d-4e57-8514-e717fc98b771\" (UID: \"0fe87eed-316d-4e57-8514-e717fc98b771\") " Jan 09 13:32:56 crc kubenswrapper[4919]: I0109 13:32:56.133963 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0fe87eed-316d-4e57-8514-e717fc98b771-kubelet-dir\") pod \"0fe87eed-316d-4e57-8514-e717fc98b771\" (UID: \"0fe87eed-316d-4e57-8514-e717fc98b771\") " Jan 09 13:32:56 crc kubenswrapper[4919]: I0109 13:32:56.134127 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fe87eed-316d-4e57-8514-e717fc98b771-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0fe87eed-316d-4e57-8514-e717fc98b771" (UID: "0fe87eed-316d-4e57-8514-e717fc98b771"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 13:32:56 crc kubenswrapper[4919]: I0109 13:32:56.134347 4919 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0fe87eed-316d-4e57-8514-e717fc98b771-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 09 13:32:56 crc kubenswrapper[4919]: I0109 13:32:56.141733 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fe87eed-316d-4e57-8514-e717fc98b771-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0fe87eed-316d-4e57-8514-e717fc98b771" (UID: "0fe87eed-316d-4e57-8514-e717fc98b771"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:32:56 crc kubenswrapper[4919]: I0109 13:32:56.235919 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0fe87eed-316d-4e57-8514-e717fc98b771-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 09 13:32:56 crc kubenswrapper[4919]: I0109 13:32:56.797221 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"0fe87eed-316d-4e57-8514-e717fc98b771","Type":"ContainerDied","Data":"211cf79b949ae3846e7e7c9dfff5dd51df81f4bf8d59e82875b82954f929749d"} Jan 09 13:32:56 crc kubenswrapper[4919]: I0109 13:32:56.797271 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="211cf79b949ae3846e7e7c9dfff5dd51df81f4bf8d59e82875b82954f929749d" Jan 09 13:32:56 crc kubenswrapper[4919]: I0109 13:32:56.797336 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 09 13:32:57 crc kubenswrapper[4919]: I0109 13:32:57.817879 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1f20c754-53d5-477d-844f-2a62d1f52626","Type":"ContainerStarted","Data":"394b84d841fedc59f8bf6319acf2148957f8980e662c40a98e24aef3bbd6b29d"} Jan 09 13:32:57 crc kubenswrapper[4919]: I0109 13:32:57.835623 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.8355812030000003 podStartE2EDuration="3.835581203s" podCreationTimestamp="2026-01-09 13:32:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:32:57.830983152 +0000 UTC m=+157.378822612" watchObservedRunningTime="2026-01-09 13:32:57.835581203 +0000 UTC m=+157.383420653" Jan 09 13:32:58 crc kubenswrapper[4919]: I0109 13:32:58.399502 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-vplw6" Jan 09 13:32:58 crc kubenswrapper[4919]: I0109 13:32:58.838817 4919 generic.go:334] "Generic (PLEG): container finished" podID="1f20c754-53d5-477d-844f-2a62d1f52626" containerID="394b84d841fedc59f8bf6319acf2148957f8980e662c40a98e24aef3bbd6b29d" exitCode=0 Jan 09 13:32:58 crc kubenswrapper[4919]: I0109 13:32:58.838870 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1f20c754-53d5-477d-844f-2a62d1f52626","Type":"ContainerDied","Data":"394b84d841fedc59f8bf6319acf2148957f8980e662c40a98e24aef3bbd6b29d"} Jan 09 13:33:03 crc kubenswrapper[4919]: I0109 13:33:03.737828 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-bffts" Jan 09 13:33:03 crc kubenswrapper[4919]: I0109 13:33:03.742506 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-bffts" Jan 09 13:33:06 crc kubenswrapper[4919]: I0109 13:33:06.044452 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ph5g6"] Jan 09 13:33:06 crc kubenswrapper[4919]: I0109 13:33:06.045102 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-ph5g6" podUID="48075d37-56ec-4015-a38a-94068ad47148" 
containerName="controller-manager" containerID="cri-o://2b5f9a0384810e48712eb27a6d7178a64c8a39901cb8674a7ea90dc51729cea8" gracePeriod=30 Jan 09 13:33:06 crc kubenswrapper[4919]: I0109 13:33:06.063508 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5q4vb"] Jan 09 13:33:06 crc kubenswrapper[4919]: I0109 13:33:06.063864 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5q4vb" podUID="8c3f993d-59c9-444b-9882-cedb07c01c7a" containerName="route-controller-manager" containerID="cri-o://9726e9eee7703ac50b2c6cc82874afa5de3794a3663471f10d996033d6231e2f" gracePeriod=30 Jan 09 13:33:06 crc kubenswrapper[4919]: I0109 13:33:06.897278 4919 generic.go:334] "Generic (PLEG): container finished" podID="48075d37-56ec-4015-a38a-94068ad47148" containerID="2b5f9a0384810e48712eb27a6d7178a64c8a39901cb8674a7ea90dc51729cea8" exitCode=0 Jan 09 13:33:06 crc kubenswrapper[4919]: I0109 13:33:06.897353 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-ph5g6" event={"ID":"48075d37-56ec-4015-a38a-94068ad47148","Type":"ContainerDied","Data":"2b5f9a0384810e48712eb27a6d7178a64c8a39901cb8674a7ea90dc51729cea8"} Jan 09 13:33:06 crc kubenswrapper[4919]: I0109 13:33:06.900077 4919 generic.go:334] "Generic (PLEG): container finished" podID="8c3f993d-59c9-444b-9882-cedb07c01c7a" containerID="9726e9eee7703ac50b2c6cc82874afa5de3794a3663471f10d996033d6231e2f" exitCode=0 Jan 09 13:33:06 crc kubenswrapper[4919]: I0109 13:33:06.900141 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5q4vb" event={"ID":"8c3f993d-59c9-444b-9882-cedb07c01c7a","Type":"ContainerDied","Data":"9726e9eee7703ac50b2c6cc82874afa5de3794a3663471f10d996033d6231e2f"} Jan 09 13:33:08 crc kubenswrapper[4919]: I0109 13:33:08.060921 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7a2e9878-6b0e-4328-a3ca-9f828fb105c9-metrics-certs\") pod \"network-metrics-daemon-xkhdz\" (UID: \"7a2e9878-6b0e-4328-a3ca-9f828fb105c9\") " pod="openshift-multus/network-metrics-daemon-xkhdz" Jan 09 13:33:08 crc kubenswrapper[4919]: I0109 13:33:08.073573 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7a2e9878-6b0e-4328-a3ca-9f828fb105c9-metrics-certs\") pod \"network-metrics-daemon-xkhdz\" (UID: \"7a2e9878-6b0e-4328-a3ca-9f828fb105c9\") " pod="openshift-multus/network-metrics-daemon-xkhdz" Jan 09 13:33:08 crc kubenswrapper[4919]: I0109 13:33:08.077881 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xkhdz" Jan 09 13:33:10 crc kubenswrapper[4919]: I0109 13:33:10.965582 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" Jan 09 13:33:13 crc kubenswrapper[4919]: I0109 13:33:13.060742 4919 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-ph5g6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 09 13:33:13 crc kubenswrapper[4919]: I0109 13:33:13.061254 4919 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-ph5g6" podUID="48075d37-56ec-4015-a38a-94068ad47148" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 09 13:33:14 crc kubenswrapper[4919]: I0109 13:33:14.660297 4919 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-5q4vb container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 09 13:33:14 crc kubenswrapper[4919]: I0109 13:33:14.660936 4919 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5q4vb" podUID="8c3f993d-59c9-444b-9882-cedb07c01c7a" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 09 13:33:17 crc kubenswrapper[4919]: I0109 13:33:17.980534 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-ph5g6" event={"ID":"48075d37-56ec-4015-a38a-94068ad47148","Type":"ContainerDied","Data":"8c396010645ed73e4010529c1327eefcff5bb1ba5b5fab6ce0a66cd20cd3d61b"} Jan 09 13:33:17 crc kubenswrapper[4919]: I0109 13:33:17.980617 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c396010645ed73e4010529c1327eefcff5bb1ba5b5fab6ce0a66cd20cd3d61b" Jan 09 13:33:17 crc kubenswrapper[4919]: I0109 13:33:17.982534 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5q4vb" event={"ID":"8c3f993d-59c9-444b-9882-cedb07c01c7a","Type":"ContainerDied","Data":"06bf366d91e8558ad567607e05452a46b72378fd4b0428723093351eebe73fae"} Jan 09 13:33:17 crc kubenswrapper[4919]: I0109 13:33:17.982559 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06bf366d91e8558ad567607e05452a46b72378fd4b0428723093351eebe73fae" Jan 09 13:33:17 crc kubenswrapper[4919]: I0109 13:33:17.984461 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1f20c754-53d5-477d-844f-2a62d1f52626","Type":"ContainerDied","Data":"7f0ef5a6343a603af4221b3a2a24e77c85ae2cca3af844e27fdbea034de264f0"} Jan 09 13:33:17 crc kubenswrapper[4919]: I0109 13:33:17.984491 4919 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f0ef5a6343a603af4221b3a2a24e77c85ae2cca3af844e27fdbea034de264f0" Jan 09 13:33:18 crc kubenswrapper[4919]: I0109 13:33:18.027355 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 09 13:33:18 crc kubenswrapper[4919]: I0109 13:33:18.038186 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5q4vb" Jan 09 13:33:18 crc kubenswrapper[4919]: I0109 13:33:18.063234 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-ph5g6" Jan 09 13:33:18 crc kubenswrapper[4919]: I0109 13:33:18.115471 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c3f993d-59c9-444b-9882-cedb07c01c7a-serving-cert\") pod \"8c3f993d-59c9-444b-9882-cedb07c01c7a\" (UID: \"8c3f993d-59c9-444b-9882-cedb07c01c7a\") " Jan 09 13:33:18 crc kubenswrapper[4919]: I0109 13:33:18.115536 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpxgb\" (UniqueName: \"kubernetes.io/projected/8c3f993d-59c9-444b-9882-cedb07c01c7a-kube-api-access-vpxgb\") pod \"8c3f993d-59c9-444b-9882-cedb07c01c7a\" (UID: \"8c3f993d-59c9-444b-9882-cedb07c01c7a\") " Jan 09 13:33:18 crc kubenswrapper[4919]: I0109 13:33:18.115576 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8c3f993d-59c9-444b-9882-cedb07c01c7a-client-ca\") pod \"8c3f993d-59c9-444b-9882-cedb07c01c7a\" (UID: \"8c3f993d-59c9-444b-9882-cedb07c01c7a\") " Jan 09 13:33:18 crc kubenswrapper[4919]: I0109 13:33:18.115606 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48075d37-56ec-4015-a38a-94068ad47148-serving-cert\") pod \"48075d37-56ec-4015-a38a-94068ad47148\" (UID: \"48075d37-56ec-4015-a38a-94068ad47148\") " Jan 09 13:33:18 crc kubenswrapper[4919]: I0109 13:33:18.115623 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f20c754-53d5-477d-844f-2a62d1f52626-kube-api-access\") pod \"1f20c754-53d5-477d-844f-2a62d1f52626\" (UID: \"1f20c754-53d5-477d-844f-2a62d1f52626\") " Jan 09 13:33:18 crc kubenswrapper[4919]: I0109 13:33:18.115646 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c3f993d-59c9-444b-9882-cedb07c01c7a-config\") pod \"8c3f993d-59c9-444b-9882-cedb07c01c7a\" (UID: \"8c3f993d-59c9-444b-9882-cedb07c01c7a\") " Jan 09 13:33:18 crc kubenswrapper[4919]: I0109 13:33:18.115666 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/48075d37-56ec-4015-a38a-94068ad47148-client-ca\") pod \"48075d37-56ec-4015-a38a-94068ad47148\" (UID: \"48075d37-56ec-4015-a38a-94068ad47148\") " Jan 09 13:33:18 crc kubenswrapper[4919]: I0109 13:33:18.115694 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48075d37-56ec-4015-a38a-94068ad47148-config\") pod \"48075d37-56ec-4015-a38a-94068ad47148\" (UID: 
\"48075d37-56ec-4015-a38a-94068ad47148\") " Jan 09 13:33:18 crc kubenswrapper[4919]: I0109 13:33:18.115717 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/48075d37-56ec-4015-a38a-94068ad47148-proxy-ca-bundles\") pod \"48075d37-56ec-4015-a38a-94068ad47148\" (UID: \"48075d37-56ec-4015-a38a-94068ad47148\") " Jan 09 13:33:18 crc kubenswrapper[4919]: I0109 13:33:18.115735 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1f20c754-53d5-477d-844f-2a62d1f52626-kubelet-dir\") pod \"1f20c754-53d5-477d-844f-2a62d1f52626\" (UID: \"1f20c754-53d5-477d-844f-2a62d1f52626\") " Jan 09 13:33:18 crc kubenswrapper[4919]: I0109 13:33:18.115750 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mx8x6\" (UniqueName: \"kubernetes.io/projected/48075d37-56ec-4015-a38a-94068ad47148-kube-api-access-mx8x6\") pod \"48075d37-56ec-4015-a38a-94068ad47148\" (UID: \"48075d37-56ec-4015-a38a-94068ad47148\") " Jan 09 13:33:18 crc kubenswrapper[4919]: I0109 13:33:18.125638 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48075d37-56ec-4015-a38a-94068ad47148-config" (OuterVolumeSpecName: "config") pod "48075d37-56ec-4015-a38a-94068ad47148" (UID: "48075d37-56ec-4015-a38a-94068ad47148"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:33:18 crc kubenswrapper[4919]: I0109 13:33:18.126069 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f20c754-53d5-477d-844f-2a62d1f52626-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1f20c754-53d5-477d-844f-2a62d1f52626" (UID: "1f20c754-53d5-477d-844f-2a62d1f52626"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 13:33:18 crc kubenswrapper[4919]: I0109 13:33:18.126897 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c3f993d-59c9-444b-9882-cedb07c01c7a-client-ca" (OuterVolumeSpecName: "client-ca") pod "8c3f993d-59c9-444b-9882-cedb07c01c7a" (UID: "8c3f993d-59c9-444b-9882-cedb07c01c7a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:33:18 crc kubenswrapper[4919]: I0109 13:33:18.126896 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c3f993d-59c9-444b-9882-cedb07c01c7a-config" (OuterVolumeSpecName: "config") pod "8c3f993d-59c9-444b-9882-cedb07c01c7a" (UID: "8c3f993d-59c9-444b-9882-cedb07c01c7a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:33:18 crc kubenswrapper[4919]: I0109 13:33:18.127830 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48075d37-56ec-4015-a38a-94068ad47148-client-ca" (OuterVolumeSpecName: "client-ca") pod "48075d37-56ec-4015-a38a-94068ad47148" (UID: "48075d37-56ec-4015-a38a-94068ad47148"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:33:18 crc kubenswrapper[4919]: I0109 13:33:18.130122 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48075d37-56ec-4015-a38a-94068ad47148-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "48075d37-56ec-4015-a38a-94068ad47148" (UID: "48075d37-56ec-4015-a38a-94068ad47148"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:33:18 crc kubenswrapper[4919]: I0109 13:33:18.143930 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48075d37-56ec-4015-a38a-94068ad47148-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "48075d37-56ec-4015-a38a-94068ad47148" (UID: "48075d37-56ec-4015-a38a-94068ad47148"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:33:18 crc kubenswrapper[4919]: I0109 13:33:18.145112 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f20c754-53d5-477d-844f-2a62d1f52626-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1f20c754-53d5-477d-844f-2a62d1f52626" (UID: "1f20c754-53d5-477d-844f-2a62d1f52626"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:33:18 crc kubenswrapper[4919]: I0109 13:33:18.145650 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48075d37-56ec-4015-a38a-94068ad47148-kube-api-access-mx8x6" (OuterVolumeSpecName: "kube-api-access-mx8x6") pod "48075d37-56ec-4015-a38a-94068ad47148" (UID: "48075d37-56ec-4015-a38a-94068ad47148"). InnerVolumeSpecName "kube-api-access-mx8x6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:33:18 crc kubenswrapper[4919]: I0109 13:33:18.159448 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c3f993d-59c9-444b-9882-cedb07c01c7a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8c3f993d-59c9-444b-9882-cedb07c01c7a" (UID: "8c3f993d-59c9-444b-9882-cedb07c01c7a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:33:18 crc kubenswrapper[4919]: I0109 13:33:18.159700 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c3f993d-59c9-444b-9882-cedb07c01c7a-kube-api-access-vpxgb" (OuterVolumeSpecName: "kube-api-access-vpxgb") pod "8c3f993d-59c9-444b-9882-cedb07c01c7a" (UID: "8c3f993d-59c9-444b-9882-cedb07c01c7a"). InnerVolumeSpecName "kube-api-access-vpxgb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:33:18 crc kubenswrapper[4919]: I0109 13:33:18.217823 4919 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/48075d37-56ec-4015-a38a-94068ad47148-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 09 13:33:18 crc kubenswrapper[4919]: I0109 13:33:18.217881 4919 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1f20c754-53d5-477d-844f-2a62d1f52626-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 09 13:33:18 crc kubenswrapper[4919]: I0109 13:33:18.217894 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mx8x6\" (UniqueName: \"kubernetes.io/projected/48075d37-56ec-4015-a38a-94068ad47148-kube-api-access-mx8x6\") on node \"crc\" DevicePath \"\"" Jan 09 13:33:18 crc kubenswrapper[4919]: I0109 13:33:18.217906 4919 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c3f993d-59c9-444b-9882-cedb07c01c7a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 13:33:18 crc kubenswrapper[4919]: I0109 13:33:18.217915 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpxgb\" (UniqueName: \"kubernetes.io/projected/8c3f993d-59c9-444b-9882-cedb07c01c7a-kube-api-access-vpxgb\") on node \"crc\" DevicePath \"\"" Jan 09 13:33:18 crc kubenswrapper[4919]: I0109 13:33:18.217954 4919 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8c3f993d-59c9-444b-9882-cedb07c01c7a-client-ca\") on node \"crc\" DevicePath \"\"" Jan 09 13:33:18 crc kubenswrapper[4919]: I0109 13:33:18.217963 4919 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48075d37-56ec-4015-a38a-94068ad47148-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 13:33:18 crc kubenswrapper[4919]: I0109 13:33:18.217972 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f20c754-53d5-477d-844f-2a62d1f52626-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 09 13:33:18 crc kubenswrapper[4919]: I0109 13:33:18.217981 4919 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c3f993d-59c9-444b-9882-cedb07c01c7a-config\") on node \"crc\" DevicePath \"\"" Jan 09 13:33:18 crc kubenswrapper[4919]: I0109 13:33:18.217991 4919 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/48075d37-56ec-4015-a38a-94068ad47148-client-ca\") on node \"crc\" DevicePath \"\"" Jan 09 13:33:18 crc kubenswrapper[4919]: I0109 13:33:18.218000 4919 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48075d37-56ec-4015-a38a-94068ad47148-config\") on node \"crc\" DevicePath \"\"" Jan 09 13:33:18 crc kubenswrapper[4919]: I0109 13:33:18.989520 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5q4vb" Jan 09 13:33:18 crc kubenswrapper[4919]: I0109 13:33:18.989527 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-ph5g6" Jan 09 13:33:18 crc kubenswrapper[4919]: I0109 13:33:18.989520 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 09 13:33:19 crc kubenswrapper[4919]: I0109 13:33:19.022443 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ph5g6"] Jan 09 13:33:19 crc kubenswrapper[4919]: I0109 13:33:19.025151 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ph5g6"] Jan 09 13:33:19 crc kubenswrapper[4919]: I0109 13:33:19.040358 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5q4vb"] Jan 09 13:33:19 crc kubenswrapper[4919]: I0109 13:33:19.044292 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5q4vb"] Jan 09 13:33:20 crc kubenswrapper[4919]: I0109 13:33:20.662249 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8c9588886-h5vnn"] Jan 09 13:33:20 crc kubenswrapper[4919]: E0109 13:33:20.662838 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48075d37-56ec-4015-a38a-94068ad47148" containerName="controller-manager" Jan 09 13:33:20 crc kubenswrapper[4919]: I0109 13:33:20.662871 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="48075d37-56ec-4015-a38a-94068ad47148" containerName="controller-manager" Jan 09 13:33:20 crc kubenswrapper[4919]: E0109 13:33:20.662920 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c3f993d-59c9-444b-9882-cedb07c01c7a" containerName="route-controller-manager" Jan 09 13:33:20 crc kubenswrapper[4919]: I0109 13:33:20.662939 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c3f993d-59c9-444b-9882-cedb07c01c7a" containerName="route-controller-manager" Jan 09 13:33:20 crc kubenswrapper[4919]: E0109 13:33:20.662965 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fe87eed-316d-4e57-8514-e717fc98b771" containerName="pruner" Jan 09 13:33:20 crc kubenswrapper[4919]: I0109 13:33:20.662983 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fe87eed-316d-4e57-8514-e717fc98b771" containerName="pruner" Jan 09 13:33:20 crc kubenswrapper[4919]: E0109 13:33:20.663022 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f20c754-53d5-477d-844f-2a62d1f52626" containerName="pruner" Jan 09 13:33:20 crc kubenswrapper[4919]: I0109 13:33:20.663041 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f20c754-53d5-477d-844f-2a62d1f52626" containerName="pruner" Jan 09 13:33:20 crc kubenswrapper[4919]: I0109 13:33:20.663358 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f20c754-53d5-477d-844f-2a62d1f52626" containerName="pruner" Jan 09 13:33:20 crc kubenswrapper[4919]: I0109 13:33:20.663395 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c3f993d-59c9-444b-9882-cedb07c01c7a" containerName="route-controller-manager" Jan 09 13:33:20 crc kubenswrapper[4919]: I0109 13:33:20.663420 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fe87eed-316d-4e57-8514-e717fc98b771" containerName="pruner" Jan 09 13:33:20 crc kubenswrapper[4919]: I0109 13:33:20.663452 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="48075d37-56ec-4015-a38a-94068ad47148" containerName="controller-manager" Jan 09 13:33:20 crc kubenswrapper[4919]: I0109 13:33:20.664457 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8c9588886-h5vnn" Jan 09 13:33:20 crc kubenswrapper[4919]: I0109 13:33:20.670939 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5f787bb6b6-fx9n4"] Jan 09 13:33:20 crc kubenswrapper[4919]: I0109 13:33:20.672433 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5f787bb6b6-fx9n4" Jan 09 13:33:20 crc kubenswrapper[4919]: I0109 13:33:20.679625 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8c9588886-h5vnn"] Jan 09 13:33:20 crc kubenswrapper[4919]: I0109 13:33:20.708656 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 09 13:33:20 crc kubenswrapper[4919]: I0109 13:33:20.709425 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 09 13:33:20 crc kubenswrapper[4919]: I0109 13:33:20.709516 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 09 13:33:20 crc kubenswrapper[4919]: I0109 13:33:20.709537 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 09 13:33:20 crc kubenswrapper[4919]: I0109 13:33:20.709453 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 09 13:33:20 crc kubenswrapper[4919]: I0109 13:33:20.709442 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 09 13:33:20 crc kubenswrapper[4919]: I0109 13:33:20.709905 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 09 13:33:20 crc kubenswrapper[4919]: I0109 13:33:20.709934 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 09 13:33:20 crc kubenswrapper[4919]: I0109 13:33:20.710380 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 09 13:33:20 crc kubenswrapper[4919]: I0109 13:33:20.710743 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 09 13:33:20 crc kubenswrapper[4919]: I0109 13:33:20.711126 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 09 13:33:20 crc kubenswrapper[4919]: I0109 13:33:20.711850 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 09 13:33:20 crc kubenswrapper[4919]: I0109 13:33:20.719274 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5f787bb6b6-fx9n4"] Jan 09 13:33:20 crc kubenswrapper[4919]: I0109 13:33:20.722088 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 09 13:33:20 crc kubenswrapper[4919]: I0109 13:33:20.760919 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/c242f3a0-b995-4b4d-a863-4bf6fd19c0a4-client-ca\") pod \"controller-manager-5f787bb6b6-fx9n4\" (UID: \"c242f3a0-b995-4b4d-a863-4bf6fd19c0a4\") " pod="openshift-controller-manager/controller-manager-5f787bb6b6-fx9n4" Jan 09 13:33:20 crc kubenswrapper[4919]: I0109 13:33:20.761037 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45f52b0c-e774-4041-a2f8-7e17214d9c54-client-ca\") pod \"route-controller-manager-8c9588886-h5vnn\" (UID: \"45f52b0c-e774-4041-a2f8-7e17214d9c54\") " pod="openshift-route-controller-manager/route-controller-manager-8c9588886-h5vnn" Jan 09 13:33:20 crc kubenswrapper[4919]: I0109 13:33:20.761079 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c242f3a0-b995-4b4d-a863-4bf6fd19c0a4-serving-cert\") pod \"controller-manager-5f787bb6b6-fx9n4\" (UID: \"c242f3a0-b995-4b4d-a863-4bf6fd19c0a4\") " pod="openshift-controller-manager/controller-manager-5f787bb6b6-fx9n4" Jan 09 13:33:20 crc kubenswrapper[4919]: I0109 13:33:20.761124 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c242f3a0-b995-4b4d-a863-4bf6fd19c0a4-proxy-ca-bundles\") pod \"controller-manager-5f787bb6b6-fx9n4\" (UID: \"c242f3a0-b995-4b4d-a863-4bf6fd19c0a4\") " pod="openshift-controller-manager/controller-manager-5f787bb6b6-fx9n4" Jan 09 13:33:20 crc kubenswrapper[4919]: I0109 13:33:20.761238 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45f52b0c-e774-4041-a2f8-7e17214d9c54-config\") pod \"route-controller-manager-8c9588886-h5vnn\" (UID: \"45f52b0c-e774-4041-a2f8-7e17214d9c54\") " pod="openshift-route-controller-manager/route-controller-manager-8c9588886-h5vnn" Jan 09 13:33:20 crc kubenswrapper[4919]: I0109 13:33:20.761403 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c242f3a0-b995-4b4d-a863-4bf6fd19c0a4-config\") pod \"controller-manager-5f787bb6b6-fx9n4\" (UID: \"c242f3a0-b995-4b4d-a863-4bf6fd19c0a4\") " pod="openshift-controller-manager/controller-manager-5f787bb6b6-fx9n4" Jan 09 13:33:20 crc kubenswrapper[4919]: I0109 13:33:20.761761 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45f52b0c-e774-4041-a2f8-7e17214d9c54-serving-cert\") pod \"route-controller-manager-8c9588886-h5vnn\" (UID: \"45f52b0c-e774-4041-a2f8-7e17214d9c54\") " pod="openshift-route-controller-manager/route-controller-manager-8c9588886-h5vnn" Jan 09 13:33:20 crc kubenswrapper[4919]: I0109 13:33:20.761844 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2c4d\" (UniqueName: \"kubernetes.io/projected/45f52b0c-e774-4041-a2f8-7e17214d9c54-kube-api-access-g2c4d\") pod \"route-controller-manager-8c9588886-h5vnn\" (UID: \"45f52b0c-e774-4041-a2f8-7e17214d9c54\") " pod="openshift-route-controller-manager/route-controller-manager-8c9588886-h5vnn" Jan 09 13:33:20 crc kubenswrapper[4919]: I0109 13:33:20.761927 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84kxm\" 
(UniqueName: \"kubernetes.io/projected/c242f3a0-b995-4b4d-a863-4bf6fd19c0a4-kube-api-access-84kxm\") pod \"controller-manager-5f787bb6b6-fx9n4\" (UID: \"c242f3a0-b995-4b4d-a863-4bf6fd19c0a4\") " pod="openshift-controller-manager/controller-manager-5f787bb6b6-fx9n4" Jan 09 13:33:20 crc kubenswrapper[4919]: I0109 13:33:20.783839 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48075d37-56ec-4015-a38a-94068ad47148" path="/var/lib/kubelet/pods/48075d37-56ec-4015-a38a-94068ad47148/volumes" Jan 09 13:33:20 crc kubenswrapper[4919]: I0109 13:33:20.785717 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c3f993d-59c9-444b-9882-cedb07c01c7a" path="/var/lib/kubelet/pods/8c3f993d-59c9-444b-9882-cedb07c01c7a/volumes" Jan 09 13:33:20 crc kubenswrapper[4919]: I0109 13:33:20.863842 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45f52b0c-e774-4041-a2f8-7e17214d9c54-serving-cert\") pod \"route-controller-manager-8c9588886-h5vnn\" (UID: \"45f52b0c-e774-4041-a2f8-7e17214d9c54\") " pod="openshift-route-controller-manager/route-controller-manager-8c9588886-h5vnn" Jan 09 13:33:20 crc kubenswrapper[4919]: I0109 13:33:20.864325 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2c4d\" (UniqueName: \"kubernetes.io/projected/45f52b0c-e774-4041-a2f8-7e17214d9c54-kube-api-access-g2c4d\") pod \"route-controller-manager-8c9588886-h5vnn\" (UID: \"45f52b0c-e774-4041-a2f8-7e17214d9c54\") " pod="openshift-route-controller-manager/route-controller-manager-8c9588886-h5vnn" Jan 09 13:33:20 crc kubenswrapper[4919]: I0109 13:33:20.864380 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84kxm\" (UniqueName: \"kubernetes.io/projected/c242f3a0-b995-4b4d-a863-4bf6fd19c0a4-kube-api-access-84kxm\") pod \"controller-manager-5f787bb6b6-fx9n4\" (UID: \"c242f3a0-b995-4b4d-a863-4bf6fd19c0a4\") " pod="openshift-controller-manager/controller-manager-5f787bb6b6-fx9n4" Jan 09 13:33:20 crc kubenswrapper[4919]: I0109 13:33:20.864442 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c242f3a0-b995-4b4d-a863-4bf6fd19c0a4-client-ca\") pod \"controller-manager-5f787bb6b6-fx9n4\" (UID: \"c242f3a0-b995-4b4d-a863-4bf6fd19c0a4\") " pod="openshift-controller-manager/controller-manager-5f787bb6b6-fx9n4" Jan 09 13:33:20 crc kubenswrapper[4919]: I0109 13:33:20.864478 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45f52b0c-e774-4041-a2f8-7e17214d9c54-client-ca\") pod \"route-controller-manager-8c9588886-h5vnn\" (UID: \"45f52b0c-e774-4041-a2f8-7e17214d9c54\") " pod="openshift-route-controller-manager/route-controller-manager-8c9588886-h5vnn" Jan 09 13:33:20 crc kubenswrapper[4919]: I0109 13:33:20.864497 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c242f3a0-b995-4b4d-a863-4bf6fd19c0a4-serving-cert\") pod \"controller-manager-5f787bb6b6-fx9n4\" (UID: \"c242f3a0-b995-4b4d-a863-4bf6fd19c0a4\") " pod="openshift-controller-manager/controller-manager-5f787bb6b6-fx9n4" Jan 09 13:33:20 crc kubenswrapper[4919]: I0109 13:33:20.864523 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/c242f3a0-b995-4b4d-a863-4bf6fd19c0a4-proxy-ca-bundles\") pod \"controller-manager-5f787bb6b6-fx9n4\" (UID: \"c242f3a0-b995-4b4d-a863-4bf6fd19c0a4\") " pod="openshift-controller-manager/controller-manager-5f787bb6b6-fx9n4" Jan 09 13:33:20 crc kubenswrapper[4919]: I0109 13:33:20.864551 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45f52b0c-e774-4041-a2f8-7e17214d9c54-config\") pod \"route-controller-manager-8c9588886-h5vnn\" (UID: \"45f52b0c-e774-4041-a2f8-7e17214d9c54\") " pod="openshift-route-controller-manager/route-controller-manager-8c9588886-h5vnn" Jan 09 13:33:20 crc kubenswrapper[4919]: I0109 13:33:20.864579 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c242f3a0-b995-4b4d-a863-4bf6fd19c0a4-config\") pod \"controller-manager-5f787bb6b6-fx9n4\" (UID: \"c242f3a0-b995-4b4d-a863-4bf6fd19c0a4\") " pod="openshift-controller-manager/controller-manager-5f787bb6b6-fx9n4" Jan 09 13:33:20 crc kubenswrapper[4919]: I0109 13:33:20.866398 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c242f3a0-b995-4b4d-a863-4bf6fd19c0a4-config\") pod \"controller-manager-5f787bb6b6-fx9n4\" (UID: \"c242f3a0-b995-4b4d-a863-4bf6fd19c0a4\") " pod="openshift-controller-manager/controller-manager-5f787bb6b6-fx9n4" Jan 09 13:33:20 crc kubenswrapper[4919]: I0109 13:33:20.868056 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c242f3a0-b995-4b4d-a863-4bf6fd19c0a4-client-ca\") pod \"controller-manager-5f787bb6b6-fx9n4\" (UID: \"c242f3a0-b995-4b4d-a863-4bf6fd19c0a4\") " pod="openshift-controller-manager/controller-manager-5f787bb6b6-fx9n4" Jan 09 13:33:20 crc kubenswrapper[4919]: I0109 13:33:20.869020 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45f52b0c-e774-4041-a2f8-7e17214d9c54-client-ca\") pod \"route-controller-manager-8c9588886-h5vnn\" (UID: \"45f52b0c-e774-4041-a2f8-7e17214d9c54\") " pod="openshift-route-controller-manager/route-controller-manager-8c9588886-h5vnn" Jan 09 13:33:20 crc kubenswrapper[4919]: I0109 13:33:20.869553 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45f52b0c-e774-4041-a2f8-7e17214d9c54-config\") pod \"route-controller-manager-8c9588886-h5vnn\" (UID: \"45f52b0c-e774-4041-a2f8-7e17214d9c54\") " pod="openshift-route-controller-manager/route-controller-manager-8c9588886-h5vnn" Jan 09 13:33:20 crc kubenswrapper[4919]: I0109 13:33:20.869741 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c242f3a0-b995-4b4d-a863-4bf6fd19c0a4-proxy-ca-bundles\") pod \"controller-manager-5f787bb6b6-fx9n4\" (UID: \"c242f3a0-b995-4b4d-a863-4bf6fd19c0a4\") " pod="openshift-controller-manager/controller-manager-5f787bb6b6-fx9n4" Jan 09 13:33:20 crc kubenswrapper[4919]: I0109 13:33:20.876206 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c242f3a0-b995-4b4d-a863-4bf6fd19c0a4-serving-cert\") pod \"controller-manager-5f787bb6b6-fx9n4\" (UID: \"c242f3a0-b995-4b4d-a863-4bf6fd19c0a4\") " pod="openshift-controller-manager/controller-manager-5f787bb6b6-fx9n4" Jan 09 13:33:20 
crc kubenswrapper[4919]: I0109 13:33:20.883164 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45f52b0c-e774-4041-a2f8-7e17214d9c54-serving-cert\") pod \"route-controller-manager-8c9588886-h5vnn\" (UID: \"45f52b0c-e774-4041-a2f8-7e17214d9c54\") " pod="openshift-route-controller-manager/route-controller-manager-8c9588886-h5vnn" Jan 09 13:33:20 crc kubenswrapper[4919]: I0109 13:33:20.893615 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84kxm\" (UniqueName: \"kubernetes.io/projected/c242f3a0-b995-4b4d-a863-4bf6fd19c0a4-kube-api-access-84kxm\") pod \"controller-manager-5f787bb6b6-fx9n4\" (UID: \"c242f3a0-b995-4b4d-a863-4bf6fd19c0a4\") " pod="openshift-controller-manager/controller-manager-5f787bb6b6-fx9n4" Jan 09 13:33:21 crc kubenswrapper[4919]: I0109 13:33:21.054043 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5f787bb6b6-fx9n4" Jan 09 13:33:21 crc kubenswrapper[4919]: I0109 13:33:21.247617 4919 patch_prober.go:28] interesting pod/machine-config-daemon-9m5lv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 13:33:21 crc kubenswrapper[4919]: I0109 13:33:21.248195 4919 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 13:33:23 crc kubenswrapper[4919]: I0109 13:33:23.339122 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9gc57" Jan 09 13:33:23 crc kubenswrapper[4919]: I0109 13:33:23.403137 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2c4d\" (UniqueName: \"kubernetes.io/projected/45f52b0c-e774-4041-a2f8-7e17214d9c54-kube-api-access-g2c4d\") pod \"route-controller-manager-8c9588886-h5vnn\" (UID: \"45f52b0c-e774-4041-a2f8-7e17214d9c54\") " pod="openshift-route-controller-manager/route-controller-manager-8c9588886-h5vnn" Jan 09 13:33:23 crc kubenswrapper[4919]: I0109 13:33:23.441901 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8c9588886-h5vnn" Jan 09 13:33:23 crc kubenswrapper[4919]: E0109 13:33:23.685750 4919 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 09 13:33:23 crc kubenswrapper[4919]: E0109 13:33:23.686098 4919 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pqb7t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-69xx2_openshift-marketplace(11f13609-8588-44c4-b426-db71e94e93dd): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 09 13:33:23 crc kubenswrapper[4919]: E0109 13:33:23.687338 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-69xx2" podUID="11f13609-8588-44c4-b426-db71e94e93dd" Jan 09 13:33:26 crc kubenswrapper[4919]: I0109 13:33:26.018363 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5f787bb6b6-fx9n4"] Jan 09 13:33:26 crc kubenswrapper[4919]: I0109 13:33:26.107169 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8c9588886-h5vnn"] Jan 09 13:33:29 crc kubenswrapper[4919]: E0109 13:33:29.590305 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-69xx2" podUID="11f13609-8588-44c4-b426-db71e94e93dd" Jan 09 13:33:29 crc 
kubenswrapper[4919]: E0109 13:33:29.786635 4919 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 09 13:33:29 crc kubenswrapper[4919]: E0109 13:33:29.786836 4919 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2469q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-c4dtn_openshift-marketplace(870947c0-608c-48f9-a0c7-5f81a08255bf): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 09 13:33:29 crc kubenswrapper[4919]: E0109 13:33:29.788057 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-c4dtn" podUID="870947c0-608c-48f9-a0c7-5f81a08255bf" Jan 09 13:33:32 crc kubenswrapper[4919]: I0109 13:33:32.278629 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 09 13:33:32 crc kubenswrapper[4919]: I0109 13:33:32.280067 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 09 13:33:32 crc kubenswrapper[4919]: I0109 13:33:32.287677 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 09 13:33:32 crc kubenswrapper[4919]: I0109 13:33:32.287867 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 09 13:33:32 crc kubenswrapper[4919]: I0109 13:33:32.294410 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 09 13:33:32 crc kubenswrapper[4919]: I0109 13:33:32.454707 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89af3420-c053-4740-af93-3c64ebe30d82-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"89af3420-c053-4740-af93-3c64ebe30d82\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 09 13:33:32 crc kubenswrapper[4919]: I0109 13:33:32.455195 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/89af3420-c053-4740-af93-3c64ebe30d82-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"89af3420-c053-4740-af93-3c64ebe30d82\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 09 13:33:32 crc kubenswrapper[4919]: I0109 13:33:32.556765 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/89af3420-c053-4740-af93-3c64ebe30d82-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"89af3420-c053-4740-af93-3c64ebe30d82\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 09 13:33:32 crc kubenswrapper[4919]: I0109 13:33:32.556902 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89af3420-c053-4740-af93-3c64ebe30d82-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"89af3420-c053-4740-af93-3c64ebe30d82\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 09 13:33:32 crc kubenswrapper[4919]: I0109 13:33:32.556972 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/89af3420-c053-4740-af93-3c64ebe30d82-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"89af3420-c053-4740-af93-3c64ebe30d82\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 09 13:33:32 crc kubenswrapper[4919]: I0109 13:33:32.584442 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89af3420-c053-4740-af93-3c64ebe30d82-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"89af3420-c053-4740-af93-3c64ebe30d82\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 09 13:33:32 crc kubenswrapper[4919]: I0109 13:33:32.623823 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 09 13:33:34 crc kubenswrapper[4919]: I0109 13:33:34.070452 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 13:33:35 crc kubenswrapper[4919]: E0109 13:33:35.267747 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-c4dtn" podUID="870947c0-608c-48f9-a0c7-5f81a08255bf" Jan 09 13:33:35 crc kubenswrapper[4919]: E0109 13:33:35.378249 4919 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 09 13:33:35 crc kubenswrapper[4919]: E0109 13:33:35.378493 4919 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xsbdz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-dg7pw_openshift-marketplace(ef0b4efa-7cc4-48d3-be0e-7406620f6a84): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 09 13:33:35 crc kubenswrapper[4919]: E0109 13:33:35.379634 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-dg7pw" podUID="ef0b4efa-7cc4-48d3-be0e-7406620f6a84" Jan 09 13:33:35 crc kubenswrapper[4919]: E0109 13:33:35.382300 4919 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 09 13:33:35 crc kubenswrapper[4919]: E0109 13:33:35.382436 4919 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4x6wj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-xvr9v_openshift-marketplace(691c6d86-b150-4576-872d-004862dcbd22): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 09 13:33:35 crc kubenswrapper[4919]: E0109 13:33:35.383886 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-xvr9v" podUID="691c6d86-b150-4576-872d-004862dcbd22" Jan 09 13:33:35 crc kubenswrapper[4919]: E0109 13:33:35.407292 4919 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 09 13:33:35 crc kubenswrapper[4919]: E0109 13:33:35.407547 4919 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8c827,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-bj7bg_openshift-marketplace(3bdb482c-0d44-43b3-b74f-d0ba01a861b0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 09 13:33:35 crc kubenswrapper[4919]: E0109 13:33:35.408740 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-bj7bg" podUID="3bdb482c-0d44-43b3-b74f-d0ba01a861b0" Jan 09 13:33:35 crc kubenswrapper[4919]: E0109 13:33:35.412609 4919 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 09 13:33:35 crc kubenswrapper[4919]: E0109 13:33:35.412765 4919 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lfzzn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-qx45q_openshift-marketplace(1ce56338-b322-46a4-b02c-2ae2b1bb5149): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 09 13:33:35 crc kubenswrapper[4919]: E0109 13:33:35.413937 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-qx45q" podUID="1ce56338-b322-46a4-b02c-2ae2b1bb5149" Jan 09 13:33:36 crc kubenswrapper[4919]: I0109 13:33:36.670501 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 09 13:33:36 crc kubenswrapper[4919]: I0109 13:33:36.675152 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 09 13:33:36 crc kubenswrapper[4919]: I0109 13:33:36.682190 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 09 13:33:36 crc kubenswrapper[4919]: I0109 13:33:36.738581 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3d7b247c-486d-49ca-b26c-d25bca0471bc-var-lock\") pod \"installer-9-crc\" (UID: \"3d7b247c-486d-49ca-b26c-d25bca0471bc\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 09 13:33:36 crc kubenswrapper[4919]: I0109 13:33:36.738744 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d7b247c-486d-49ca-b26c-d25bca0471bc-kube-api-access\") pod \"installer-9-crc\" (UID: \"3d7b247c-486d-49ca-b26c-d25bca0471bc\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 09 13:33:36 crc kubenswrapper[4919]: I0109 13:33:36.738789 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3d7b247c-486d-49ca-b26c-d25bca0471bc-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3d7b247c-486d-49ca-b26c-d25bca0471bc\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 09 13:33:36 crc kubenswrapper[4919]: I0109 13:33:36.839950 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d7b247c-486d-49ca-b26c-d25bca0471bc-kube-api-access\") pod \"installer-9-crc\" (UID: \"3d7b247c-486d-49ca-b26c-d25bca0471bc\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 09 13:33:36 crc kubenswrapper[4919]: I0109 13:33:36.840001 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3d7b247c-486d-49ca-b26c-d25bca0471bc-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3d7b247c-486d-49ca-b26c-d25bca0471bc\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 09 13:33:36 crc kubenswrapper[4919]: I0109 13:33:36.840067 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3d7b247c-486d-49ca-b26c-d25bca0471bc-var-lock\") pod \"installer-9-crc\" (UID: \"3d7b247c-486d-49ca-b26c-d25bca0471bc\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 09 13:33:36 crc kubenswrapper[4919]: I0109 13:33:36.840145 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3d7b247c-486d-49ca-b26c-d25bca0471bc-var-lock\") pod \"installer-9-crc\" (UID: \"3d7b247c-486d-49ca-b26c-d25bca0471bc\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 09 13:33:36 crc kubenswrapper[4919]: I0109 13:33:36.840611 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3d7b247c-486d-49ca-b26c-d25bca0471bc-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3d7b247c-486d-49ca-b26c-d25bca0471bc\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 09 13:33:36 crc kubenswrapper[4919]: I0109 13:33:36.866166 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d7b247c-486d-49ca-b26c-d25bca0471bc-kube-api-access\") pod \"installer-9-crc\" (UID: 
\"3d7b247c-486d-49ca-b26c-d25bca0471bc\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 09 13:33:36 crc kubenswrapper[4919]: I0109 13:33:36.997004 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 09 13:33:37 crc kubenswrapper[4919]: E0109 13:33:37.074564 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-xvr9v" podUID="691c6d86-b150-4576-872d-004862dcbd22" Jan 09 13:33:37 crc kubenswrapper[4919]: E0109 13:33:37.074597 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-qx45q" podUID="1ce56338-b322-46a4-b02c-2ae2b1bb5149" Jan 09 13:33:37 crc kubenswrapper[4919]: E0109 13:33:37.074692 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-dg7pw" podUID="ef0b4efa-7cc4-48d3-be0e-7406620f6a84" Jan 09 13:33:37 crc kubenswrapper[4919]: E0109 13:33:37.074808 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bj7bg" podUID="3bdb482c-0d44-43b3-b74f-d0ba01a861b0" Jan 09 13:33:37 crc kubenswrapper[4919]: E0109 13:33:37.148160 4919 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 09 13:33:37 crc kubenswrapper[4919]: E0109 13:33:37.148467 4919 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zd7pl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-xppnp_openshift-marketplace(a7ddc148-0c1a-496f-b58b-c88f30af7344): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 09 13:33:37 crc kubenswrapper[4919]: E0109 13:33:37.149582 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-xppnp" podUID="a7ddc148-0c1a-496f-b58b-c88f30af7344" Jan 09 13:33:37 crc kubenswrapper[4919]: E0109 13:33:37.182840 4919 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 09 13:33:37 crc kubenswrapper[4919]: E0109 13:33:37.183076 4919 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bw6nc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-tf6wk_openshift-marketplace(18b90207-0827-4db3-b0ca-e622b58ed504): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 09 13:33:37 crc kubenswrapper[4919]: E0109 13:33:37.184396 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-tf6wk" podUID="18b90207-0827-4db3-b0ca-e622b58ed504" Jan 09 13:33:37 crc kubenswrapper[4919]: I0109 13:33:37.525098 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-xkhdz"] Jan 09 13:33:37 crc kubenswrapper[4919]: I0109 13:33:37.574848 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 09 13:33:37 crc kubenswrapper[4919]: I0109 13:33:37.579552 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 09 13:33:37 crc kubenswrapper[4919]: W0109 13:33:37.591355 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod89af3420_c053_4740_af93_3c64ebe30d82.slice/crio-fd2e2ca20e32447bd6ed2d492afd88b61c2a9efcb993d1a2d6be856873064711 WatchSource:0}: Error finding container fd2e2ca20e32447bd6ed2d492afd88b61c2a9efcb993d1a2d6be856873064711: Status 404 returned error can't find the container with id fd2e2ca20e32447bd6ed2d492afd88b61c2a9efcb993d1a2d6be856873064711 Jan 09 13:33:37 crc kubenswrapper[4919]: W0109 13:33:37.594399 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod3d7b247c_486d_49ca_b26c_d25bca0471bc.slice/crio-26594057a8c2a29b268d6169e35be14ffa3c80af15ec7afb78023e6ee589d0d0 WatchSource:0}: Error finding container 26594057a8c2a29b268d6169e35be14ffa3c80af15ec7afb78023e6ee589d0d0: Status 404 returned error can't find the container with id 26594057a8c2a29b268d6169e35be14ffa3c80af15ec7afb78023e6ee589d0d0 Jan 09 13:33:37 crc kubenswrapper[4919]: I0109 13:33:37.597104 
4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5f787bb6b6-fx9n4"] Jan 09 13:33:37 crc kubenswrapper[4919]: W0109 13:33:37.619995 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc242f3a0_b995_4b4d_a863_4bf6fd19c0a4.slice/crio-c101c78b7a3a6c9e462f8d8cee1eeca3ce5051c6c6c8a2367e087abc7d40df97 WatchSource:0}: Error finding container c101c78b7a3a6c9e462f8d8cee1eeca3ce5051c6c6c8a2367e087abc7d40df97: Status 404 returned error can't find the container with id c101c78b7a3a6c9e462f8d8cee1eeca3ce5051c6c6c8a2367e087abc7d40df97 Jan 09 13:33:37 crc kubenswrapper[4919]: I0109 13:33:37.630795 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8c9588886-h5vnn"] Jan 09 13:33:37 crc kubenswrapper[4919]: W0109 13:33:37.649899 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45f52b0c_e774_4041_a2f8_7e17214d9c54.slice/crio-f5d5b68c2491d55bdd12d3ba8a0d0e9deca760a347183b47b5fb46986d2b7a25 WatchSource:0}: Error finding container f5d5b68c2491d55bdd12d3ba8a0d0e9deca760a347183b47b5fb46986d2b7a25: Status 404 returned error can't find the container with id f5d5b68c2491d55bdd12d3ba8a0d0e9deca760a347183b47b5fb46986d2b7a25 Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.120519 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8c9588886-h5vnn" event={"ID":"45f52b0c-e774-4041-a2f8-7e17214d9c54","Type":"ContainerStarted","Data":"6b63be533f357b5e7a74ba18676b8c44b57c808c1060a2ba4a97c7f7db73cfd4"} Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.120934 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-8c9588886-h5vnn" Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.120946 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8c9588886-h5vnn" event={"ID":"45f52b0c-e774-4041-a2f8-7e17214d9c54","Type":"ContainerStarted","Data":"f5d5b68c2491d55bdd12d3ba8a0d0e9deca760a347183b47b5fb46986d2b7a25"} Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.121148 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-8c9588886-h5vnn" podUID="45f52b0c-e774-4041-a2f8-7e17214d9c54" containerName="route-controller-manager" containerID="cri-o://6b63be533f357b5e7a74ba18676b8c44b57c808c1060a2ba4a97c7f7db73cfd4" gracePeriod=30 Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.123557 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f787bb6b6-fx9n4" event={"ID":"c242f3a0-b995-4b4d-a863-4bf6fd19c0a4","Type":"ContainerStarted","Data":"1dad76f9496335a571c7b0d87b5571e3d9cb0f24f9675cafbc253f7d7554de81"} Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.123610 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f787bb6b6-fx9n4" event={"ID":"c242f3a0-b995-4b4d-a863-4bf6fd19c0a4","Type":"ContainerStarted","Data":"c101c78b7a3a6c9e462f8d8cee1eeca3ce5051c6c6c8a2367e087abc7d40df97"} Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.123648 4919 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-controller-manager/controller-manager-5f787bb6b6-fx9n4" podUID="c242f3a0-b995-4b4d-a863-4bf6fd19c0a4" containerName="controller-manager" containerID="cri-o://1dad76f9496335a571c7b0d87b5571e3d9cb0f24f9675cafbc253f7d7554de81" gracePeriod=30 Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.123797 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5f787bb6b6-fx9n4" Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.126467 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"89af3420-c053-4740-af93-3c64ebe30d82","Type":"ContainerStarted","Data":"20582c7d02b723a16a3ab1fa6fcc429ed6bb2c8f5155d9c792f6cd909ebd46cd"} Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.126495 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"89af3420-c053-4740-af93-3c64ebe30d82","Type":"ContainerStarted","Data":"fd2e2ca20e32447bd6ed2d492afd88b61c2a9efcb993d1a2d6be856873064711"} Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.130603 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3d7b247c-486d-49ca-b26c-d25bca0471bc","Type":"ContainerStarted","Data":"bb59e6036117b013224cd1fff3680355697ce2fe509a1e11f5e1b3d6a4a4e165"} Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.130651 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3d7b247c-486d-49ca-b26c-d25bca0471bc","Type":"ContainerStarted","Data":"26594057a8c2a29b268d6169e35be14ffa3c80af15ec7afb78023e6ee589d0d0"} Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.133648 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5f787bb6b6-fx9n4" Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.134902 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xkhdz" event={"ID":"7a2e9878-6b0e-4328-a3ca-9f828fb105c9","Type":"ContainerStarted","Data":"f3b76a689a8ed26a4c6dd3404c417a6974e5b7690982ff34fa6c0d4d96c6b4a2"} Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.134927 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xkhdz" event={"ID":"7a2e9878-6b0e-4328-a3ca-9f828fb105c9","Type":"ContainerStarted","Data":"d31d031022a0ca2f37cf3fbe6aa911ed8726e389174a51e0a662f3e8030153dc"} Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.134939 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xkhdz" event={"ID":"7a2e9878-6b0e-4328-a3ca-9f828fb105c9","Type":"ContainerStarted","Data":"4d5790c331c8845b71e49a56e39e4d3fbfb12e8ad4f3b0ea6504cd79352b08c4"} Jan 09 13:33:38 crc kubenswrapper[4919]: E0109 13:33:38.135377 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-tf6wk" podUID="18b90207-0827-4db3-b0ca-e622b58ed504" Jan 09 13:33:38 crc kubenswrapper[4919]: E0109 13:33:38.135991 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-xppnp" podUID="a7ddc148-0c1a-496f-b58b-c88f30af7344" Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.148070 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-8c9588886-h5vnn" podStartSLOduration=32.148040818 podStartE2EDuration="32.148040818s" podCreationTimestamp="2026-01-09 13:33:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:33:38.139662925 +0000 UTC m=+197.687502375" watchObservedRunningTime="2026-01-09 13:33:38.148040818 +0000 UTC m=+197.695880268" Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.197313 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-xkhdz" podStartSLOduration=173.197292153 podStartE2EDuration="2m53.197292153s" podCreationTimestamp="2026-01-09 13:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:33:38.193996893 +0000 UTC m=+197.741836343" watchObservedRunningTime="2026-01-09 13:33:38.197292153 +0000 UTC m=+197.745131603" Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.223068 4919 patch_prober.go:28] interesting pod/route-controller-manager-8c9588886-h5vnn container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": read tcp 10.217.0.2:53356->10.217.0.54:8443: read: connection reset by peer" start-of-body= Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.223275 4919 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-8c9588886-h5vnn" podUID="45f52b0c-e774-4041-a2f8-7e17214d9c54" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": read tcp 10.217.0.2:53356->10.217.0.54:8443: read: connection reset by peer" Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.241789 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=6.241770383 podStartE2EDuration="6.241770383s" podCreationTimestamp="2026-01-09 13:33:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:33:38.239324013 +0000 UTC m=+197.787163473" watchObservedRunningTime="2026-01-09 13:33:38.241770383 +0000 UTC m=+197.789609833" Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.264212 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5f787bb6b6-fx9n4" podStartSLOduration=32.264179796 podStartE2EDuration="32.264179796s" podCreationTimestamp="2026-01-09 13:33:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:33:38.257194487 +0000 UTC m=+197.805033937" watchObservedRunningTime="2026-01-09 13:33:38.264179796 +0000 UTC m=+197.812019256" Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.284908 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" 
podStartSLOduration=2.284881059 podStartE2EDuration="2.284881059s" podCreationTimestamp="2026-01-09 13:33:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:33:38.275403539 +0000 UTC m=+197.823242989" watchObservedRunningTime="2026-01-09 13:33:38.284881059 +0000 UTC m=+197.832720509" Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.560614 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-8c9588886-h5vnn_45f52b0c-e774-4041-a2f8-7e17214d9c54/route-controller-manager/0.log" Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.560729 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8c9588886-h5vnn" Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.567744 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5f787bb6b6-fx9n4" Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.598023 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b4b75f447-brtrg"] Jan 09 13:33:38 crc kubenswrapper[4919]: E0109 13:33:38.598518 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c242f3a0-b995-4b4d-a863-4bf6fd19c0a4" containerName="controller-manager" Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.598547 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="c242f3a0-b995-4b4d-a863-4bf6fd19c0a4" containerName="controller-manager" Jan 09 13:33:38 crc kubenswrapper[4919]: E0109 13:33:38.598576 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45f52b0c-e774-4041-a2f8-7e17214d9c54" containerName="route-controller-manager" Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.598587 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="45f52b0c-e774-4041-a2f8-7e17214d9c54" containerName="route-controller-manager" Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.598765 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="c242f3a0-b995-4b4d-a863-4bf6fd19c0a4" containerName="controller-manager" Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.598797 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="45f52b0c-e774-4041-a2f8-7e17214d9c54" containerName="route-controller-manager" Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.599584 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b4b75f447-brtrg" Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.600763 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b4b75f447-brtrg"] Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.668447 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c242f3a0-b995-4b4d-a863-4bf6fd19c0a4-config\") pod \"c242f3a0-b995-4b4d-a863-4bf6fd19c0a4\" (UID: \"c242f3a0-b995-4b4d-a863-4bf6fd19c0a4\") " Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.668589 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c242f3a0-b995-4b4d-a863-4bf6fd19c0a4-serving-cert\") pod \"c242f3a0-b995-4b4d-a863-4bf6fd19c0a4\" (UID: \"c242f3a0-b995-4b4d-a863-4bf6fd19c0a4\") " Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.668663 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c242f3a0-b995-4b4d-a863-4bf6fd19c0a4-proxy-ca-bundles\") pod \"c242f3a0-b995-4b4d-a863-4bf6fd19c0a4\" (UID: \"c242f3a0-b995-4b4d-a863-4bf6fd19c0a4\") " Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.668718 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45f52b0c-e774-4041-a2f8-7e17214d9c54-client-ca\") pod \"45f52b0c-e774-4041-a2f8-7e17214d9c54\" (UID: \"45f52b0c-e774-4041-a2f8-7e17214d9c54\") " Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.668765 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45f52b0c-e774-4041-a2f8-7e17214d9c54-serving-cert\") pod \"45f52b0c-e774-4041-a2f8-7e17214d9c54\" (UID: \"45f52b0c-e774-4041-a2f8-7e17214d9c54\") " Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.668866 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c242f3a0-b995-4b4d-a863-4bf6fd19c0a4-client-ca\") pod \"c242f3a0-b995-4b4d-a863-4bf6fd19c0a4\" (UID: \"c242f3a0-b995-4b4d-a863-4bf6fd19c0a4\") " Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.668961 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84kxm\" (UniqueName: \"kubernetes.io/projected/c242f3a0-b995-4b4d-a863-4bf6fd19c0a4-kube-api-access-84kxm\") pod \"c242f3a0-b995-4b4d-a863-4bf6fd19c0a4\" (UID: \"c242f3a0-b995-4b4d-a863-4bf6fd19c0a4\") " Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.669007 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2c4d\" (UniqueName: \"kubernetes.io/projected/45f52b0c-e774-4041-a2f8-7e17214d9c54-kube-api-access-g2c4d\") pod \"45f52b0c-e774-4041-a2f8-7e17214d9c54\" (UID: \"45f52b0c-e774-4041-a2f8-7e17214d9c54\") " Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.669088 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45f52b0c-e774-4041-a2f8-7e17214d9c54-config\") pod \"45f52b0c-e774-4041-a2f8-7e17214d9c54\" (UID: \"45f52b0c-e774-4041-a2f8-7e17214d9c54\") " Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.669705 4919 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c242f3a0-b995-4b4d-a863-4bf6fd19c0a4-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c242f3a0-b995-4b4d-a863-4bf6fd19c0a4" (UID: "c242f3a0-b995-4b4d-a863-4bf6fd19c0a4"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.669954 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45f52b0c-e774-4041-a2f8-7e17214d9c54-config" (OuterVolumeSpecName: "config") pod "45f52b0c-e774-4041-a2f8-7e17214d9c54" (UID: "45f52b0c-e774-4041-a2f8-7e17214d9c54"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.670112 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45f52b0c-e774-4041-a2f8-7e17214d9c54-client-ca" (OuterVolumeSpecName: "client-ca") pod "45f52b0c-e774-4041-a2f8-7e17214d9c54" (UID: "45f52b0c-e774-4041-a2f8-7e17214d9c54"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.670425 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/228725da-7a75-4f10-9857-dc893be79fc8-client-ca\") pod \"route-controller-manager-6b4b75f447-brtrg\" (UID: \"228725da-7a75-4f10-9857-dc893be79fc8\") " pod="openshift-route-controller-manager/route-controller-manager-6b4b75f447-brtrg" Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.670532 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvp4b\" (UniqueName: \"kubernetes.io/projected/228725da-7a75-4f10-9857-dc893be79fc8-kube-api-access-bvp4b\") pod \"route-controller-manager-6b4b75f447-brtrg\" (UID: \"228725da-7a75-4f10-9857-dc893be79fc8\") " pod="openshift-route-controller-manager/route-controller-manager-6b4b75f447-brtrg" Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.670644 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c242f3a0-b995-4b4d-a863-4bf6fd19c0a4-config" (OuterVolumeSpecName: "config") pod "c242f3a0-b995-4b4d-a863-4bf6fd19c0a4" (UID: "c242f3a0-b995-4b4d-a863-4bf6fd19c0a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.670651 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c242f3a0-b995-4b4d-a863-4bf6fd19c0a4-client-ca" (OuterVolumeSpecName: "client-ca") pod "c242f3a0-b995-4b4d-a863-4bf6fd19c0a4" (UID: "c242f3a0-b995-4b4d-a863-4bf6fd19c0a4"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.670694 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/228725da-7a75-4f10-9857-dc893be79fc8-serving-cert\") pod \"route-controller-manager-6b4b75f447-brtrg\" (UID: \"228725da-7a75-4f10-9857-dc893be79fc8\") " pod="openshift-route-controller-manager/route-controller-manager-6b4b75f447-brtrg" Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.670991 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/228725da-7a75-4f10-9857-dc893be79fc8-config\") pod \"route-controller-manager-6b4b75f447-brtrg\" (UID: \"228725da-7a75-4f10-9857-dc893be79fc8\") " pod="openshift-route-controller-manager/route-controller-manager-6b4b75f447-brtrg" Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.671141 4919 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c242f3a0-b995-4b4d-a863-4bf6fd19c0a4-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.671161 4919 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45f52b0c-e774-4041-a2f8-7e17214d9c54-client-ca\") on node \"crc\" DevicePath \"\"" Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.671176 4919 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c242f3a0-b995-4b4d-a863-4bf6fd19c0a4-client-ca\") on node \"crc\" DevicePath \"\"" Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.671189 4919 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45f52b0c-e774-4041-a2f8-7e17214d9c54-config\") on node \"crc\" DevicePath \"\"" Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.671203 4919 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c242f3a0-b995-4b4d-a863-4bf6fd19c0a4-config\") on node \"crc\" DevicePath \"\"" Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.674887 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c242f3a0-b995-4b4d-a863-4bf6fd19c0a4-kube-api-access-84kxm" (OuterVolumeSpecName: "kube-api-access-84kxm") pod "c242f3a0-b995-4b4d-a863-4bf6fd19c0a4" (UID: "c242f3a0-b995-4b4d-a863-4bf6fd19c0a4"). InnerVolumeSpecName "kube-api-access-84kxm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.674891 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45f52b0c-e774-4041-a2f8-7e17214d9c54-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "45f52b0c-e774-4041-a2f8-7e17214d9c54" (UID: "45f52b0c-e774-4041-a2f8-7e17214d9c54"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.674956 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45f52b0c-e774-4041-a2f8-7e17214d9c54-kube-api-access-g2c4d" (OuterVolumeSpecName: "kube-api-access-g2c4d") pod "45f52b0c-e774-4041-a2f8-7e17214d9c54" (UID: "45f52b0c-e774-4041-a2f8-7e17214d9c54"). InnerVolumeSpecName "kube-api-access-g2c4d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.675222 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c242f3a0-b995-4b4d-a863-4bf6fd19c0a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c242f3a0-b995-4b4d-a863-4bf6fd19c0a4" (UID: "c242f3a0-b995-4b4d-a863-4bf6fd19c0a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.772272 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/228725da-7a75-4f10-9857-dc893be79fc8-config\") pod \"route-controller-manager-6b4b75f447-brtrg\" (UID: \"228725da-7a75-4f10-9857-dc893be79fc8\") " pod="openshift-route-controller-manager/route-controller-manager-6b4b75f447-brtrg" Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.772336 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/228725da-7a75-4f10-9857-dc893be79fc8-client-ca\") pod \"route-controller-manager-6b4b75f447-brtrg\" (UID: \"228725da-7a75-4f10-9857-dc893be79fc8\") " pod="openshift-route-controller-manager/route-controller-manager-6b4b75f447-brtrg" Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.772387 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvp4b\" (UniqueName: \"kubernetes.io/projected/228725da-7a75-4f10-9857-dc893be79fc8-kube-api-access-bvp4b\") pod \"route-controller-manager-6b4b75f447-brtrg\" (UID: \"228725da-7a75-4f10-9857-dc893be79fc8\") " pod="openshift-route-controller-manager/route-controller-manager-6b4b75f447-brtrg" Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.772422 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/228725da-7a75-4f10-9857-dc893be79fc8-serving-cert\") pod \"route-controller-manager-6b4b75f447-brtrg\" (UID: \"228725da-7a75-4f10-9857-dc893be79fc8\") " pod="openshift-route-controller-manager/route-controller-manager-6b4b75f447-brtrg" Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.772473 4919 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45f52b0c-e774-4041-a2f8-7e17214d9c54-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.772486 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84kxm\" (UniqueName: \"kubernetes.io/projected/c242f3a0-b995-4b4d-a863-4bf6fd19c0a4-kube-api-access-84kxm\") on node \"crc\" DevicePath \"\"" Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.772500 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2c4d\" (UniqueName: \"kubernetes.io/projected/45f52b0c-e774-4041-a2f8-7e17214d9c54-kube-api-access-g2c4d\") on node \"crc\" DevicePath \"\"" Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.772510 4919 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c242f3a0-b995-4b4d-a863-4bf6fd19c0a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.773770 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/228725da-7a75-4f10-9857-dc893be79fc8-client-ca\") 
pod \"route-controller-manager-6b4b75f447-brtrg\" (UID: \"228725da-7a75-4f10-9857-dc893be79fc8\") " pod="openshift-route-controller-manager/route-controller-manager-6b4b75f447-brtrg" Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.773822 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/228725da-7a75-4f10-9857-dc893be79fc8-config\") pod \"route-controller-manager-6b4b75f447-brtrg\" (UID: \"228725da-7a75-4f10-9857-dc893be79fc8\") " pod="openshift-route-controller-manager/route-controller-manager-6b4b75f447-brtrg" Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.780083 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/228725da-7a75-4f10-9857-dc893be79fc8-serving-cert\") pod \"route-controller-manager-6b4b75f447-brtrg\" (UID: \"228725da-7a75-4f10-9857-dc893be79fc8\") " pod="openshift-route-controller-manager/route-controller-manager-6b4b75f447-brtrg" Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.791760 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvp4b\" (UniqueName: \"kubernetes.io/projected/228725da-7a75-4f10-9857-dc893be79fc8-kube-api-access-bvp4b\") pod \"route-controller-manager-6b4b75f447-brtrg\" (UID: \"228725da-7a75-4f10-9857-dc893be79fc8\") " pod="openshift-route-controller-manager/route-controller-manager-6b4b75f447-brtrg" Jan 09 13:33:38 crc kubenswrapper[4919]: I0109 13:33:38.962770 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b4b75f447-brtrg" Jan 09 13:33:39 crc kubenswrapper[4919]: I0109 13:33:39.148464 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b4b75f447-brtrg"] Jan 09 13:33:39 crc kubenswrapper[4919]: I0109 13:33:39.149349 4919 generic.go:334] "Generic (PLEG): container finished" podID="89af3420-c053-4740-af93-3c64ebe30d82" containerID="20582c7d02b723a16a3ab1fa6fcc429ed6bb2c8f5155d9c792f6cd909ebd46cd" exitCode=0 Jan 09 13:33:39 crc kubenswrapper[4919]: I0109 13:33:39.149443 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"89af3420-c053-4740-af93-3c64ebe30d82","Type":"ContainerDied","Data":"20582c7d02b723a16a3ab1fa6fcc429ed6bb2c8f5155d9c792f6cd909ebd46cd"} Jan 09 13:33:39 crc kubenswrapper[4919]: I0109 13:33:39.151734 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-8c9588886-h5vnn_45f52b0c-e774-4041-a2f8-7e17214d9c54/route-controller-manager/0.log" Jan 09 13:33:39 crc kubenswrapper[4919]: I0109 13:33:39.151817 4919 generic.go:334] "Generic (PLEG): container finished" podID="45f52b0c-e774-4041-a2f8-7e17214d9c54" containerID="6b63be533f357b5e7a74ba18676b8c44b57c808c1060a2ba4a97c7f7db73cfd4" exitCode=255 Jan 09 13:33:39 crc kubenswrapper[4919]: I0109 13:33:39.151873 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8c9588886-h5vnn" event={"ID":"45f52b0c-e774-4041-a2f8-7e17214d9c54","Type":"ContainerDied","Data":"6b63be533f357b5e7a74ba18676b8c44b57c808c1060a2ba4a97c7f7db73cfd4"} Jan 09 13:33:39 crc kubenswrapper[4919]: I0109 13:33:39.151899 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8c9588886-h5vnn" 
event={"ID":"45f52b0c-e774-4041-a2f8-7e17214d9c54","Type":"ContainerDied","Data":"f5d5b68c2491d55bdd12d3ba8a0d0e9deca760a347183b47b5fb46986d2b7a25"} Jan 09 13:33:39 crc kubenswrapper[4919]: I0109 13:33:39.151922 4919 scope.go:117] "RemoveContainer" containerID="6b63be533f357b5e7a74ba18676b8c44b57c808c1060a2ba4a97c7f7db73cfd4" Jan 09 13:33:39 crc kubenswrapper[4919]: I0109 13:33:39.151921 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8c9588886-h5vnn" Jan 09 13:33:39 crc kubenswrapper[4919]: I0109 13:33:39.155103 4919 generic.go:334] "Generic (PLEG): container finished" podID="c242f3a0-b995-4b4d-a863-4bf6fd19c0a4" containerID="1dad76f9496335a571c7b0d87b5571e3d9cb0f24f9675cafbc253f7d7554de81" exitCode=0 Jan 09 13:33:39 crc kubenswrapper[4919]: I0109 13:33:39.155190 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5f787bb6b6-fx9n4" Jan 09 13:33:39 crc kubenswrapper[4919]: I0109 13:33:39.155270 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f787bb6b6-fx9n4" event={"ID":"c242f3a0-b995-4b4d-a863-4bf6fd19c0a4","Type":"ContainerDied","Data":"1dad76f9496335a571c7b0d87b5571e3d9cb0f24f9675cafbc253f7d7554de81"} Jan 09 13:33:39 crc kubenswrapper[4919]: I0109 13:33:39.155304 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f787bb6b6-fx9n4" event={"ID":"c242f3a0-b995-4b4d-a863-4bf6fd19c0a4","Type":"ContainerDied","Data":"c101c78b7a3a6c9e462f8d8cee1eeca3ce5051c6c6c8a2367e087abc7d40df97"} Jan 09 13:33:39 crc kubenswrapper[4919]: W0109 13:33:39.160886 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod228725da_7a75_4f10_9857_dc893be79fc8.slice/crio-cf7372cea6a4801bbef0f1f0a4fd74959f86c721b0a1d2a8144a78be52fb4a11 WatchSource:0}: Error finding container cf7372cea6a4801bbef0f1f0a4fd74959f86c721b0a1d2a8144a78be52fb4a11: Status 404 returned error can't find the container with id cf7372cea6a4801bbef0f1f0a4fd74959f86c721b0a1d2a8144a78be52fb4a11 Jan 09 13:33:39 crc kubenswrapper[4919]: I0109 13:33:39.198759 4919 scope.go:117] "RemoveContainer" containerID="6b63be533f357b5e7a74ba18676b8c44b57c808c1060a2ba4a97c7f7db73cfd4" Jan 09 13:33:39 crc kubenswrapper[4919]: E0109 13:33:39.200676 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b63be533f357b5e7a74ba18676b8c44b57c808c1060a2ba4a97c7f7db73cfd4\": container with ID starting with 6b63be533f357b5e7a74ba18676b8c44b57c808c1060a2ba4a97c7f7db73cfd4 not found: ID does not exist" containerID="6b63be533f357b5e7a74ba18676b8c44b57c808c1060a2ba4a97c7f7db73cfd4" Jan 09 13:33:39 crc kubenswrapper[4919]: I0109 13:33:39.200707 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b63be533f357b5e7a74ba18676b8c44b57c808c1060a2ba4a97c7f7db73cfd4"} err="failed to get container status \"6b63be533f357b5e7a74ba18676b8c44b57c808c1060a2ba4a97c7f7db73cfd4\": rpc error: code = NotFound desc = could not find container \"6b63be533f357b5e7a74ba18676b8c44b57c808c1060a2ba4a97c7f7db73cfd4\": container with ID starting with 6b63be533f357b5e7a74ba18676b8c44b57c808c1060a2ba4a97c7f7db73cfd4 not found: ID does not exist" Jan 09 13:33:39 crc kubenswrapper[4919]: I0109 13:33:39.200744 4919 
scope.go:117] "RemoveContainer" containerID="1dad76f9496335a571c7b0d87b5571e3d9cb0f24f9675cafbc253f7d7554de81" Jan 09 13:33:39 crc kubenswrapper[4919]: I0109 13:33:39.210350 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8c9588886-h5vnn"] Jan 09 13:33:39 crc kubenswrapper[4919]: I0109 13:33:39.213187 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8c9588886-h5vnn"] Jan 09 13:33:39 crc kubenswrapper[4919]: I0109 13:33:39.215665 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5f787bb6b6-fx9n4"] Jan 09 13:33:39 crc kubenswrapper[4919]: I0109 13:33:39.217935 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5f787bb6b6-fx9n4"] Jan 09 13:33:39 crc kubenswrapper[4919]: I0109 13:33:39.243756 4919 scope.go:117] "RemoveContainer" containerID="1dad76f9496335a571c7b0d87b5571e3d9cb0f24f9675cafbc253f7d7554de81" Jan 09 13:33:39 crc kubenswrapper[4919]: E0109 13:33:39.244703 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1dad76f9496335a571c7b0d87b5571e3d9cb0f24f9675cafbc253f7d7554de81\": container with ID starting with 1dad76f9496335a571c7b0d87b5571e3d9cb0f24f9675cafbc253f7d7554de81 not found: ID does not exist" containerID="1dad76f9496335a571c7b0d87b5571e3d9cb0f24f9675cafbc253f7d7554de81" Jan 09 13:33:39 crc kubenswrapper[4919]: I0109 13:33:39.244746 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1dad76f9496335a571c7b0d87b5571e3d9cb0f24f9675cafbc253f7d7554de81"} err="failed to get container status \"1dad76f9496335a571c7b0d87b5571e3d9cb0f24f9675cafbc253f7d7554de81\": rpc error: code = NotFound desc = could not find container \"1dad76f9496335a571c7b0d87b5571e3d9cb0f24f9675cafbc253f7d7554de81\": container with ID starting with 1dad76f9496335a571c7b0d87b5571e3d9cb0f24f9675cafbc253f7d7554de81 not found: ID does not exist" Jan 09 13:33:40 crc kubenswrapper[4919]: I0109 13:33:40.164506 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b4b75f447-brtrg" event={"ID":"228725da-7a75-4f10-9857-dc893be79fc8","Type":"ContainerStarted","Data":"50d5fda6df8ed1f714b828e9705d6045c15c2db1c0888429ec8634c214cac21c"} Jan 09 13:33:40 crc kubenswrapper[4919]: I0109 13:33:40.164851 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6b4b75f447-brtrg" Jan 09 13:33:40 crc kubenswrapper[4919]: I0109 13:33:40.164866 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b4b75f447-brtrg" event={"ID":"228725da-7a75-4f10-9857-dc893be79fc8","Type":"ContainerStarted","Data":"cf7372cea6a4801bbef0f1f0a4fd74959f86c721b0a1d2a8144a78be52fb4a11"} Jan 09 13:33:40 crc kubenswrapper[4919]: I0109 13:33:40.172207 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6b4b75f447-brtrg" Jan 09 13:33:40 crc kubenswrapper[4919]: I0109 13:33:40.187251 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6b4b75f447-brtrg" podStartSLOduration=14.187193401 
podStartE2EDuration="14.187193401s" podCreationTimestamp="2026-01-09 13:33:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:33:40.186189937 +0000 UTC m=+199.734029467" watchObservedRunningTime="2026-01-09 13:33:40.187193401 +0000 UTC m=+199.735032881" Jan 09 13:33:40 crc kubenswrapper[4919]: I0109 13:33:40.444794 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 09 13:33:40 crc kubenswrapper[4919]: I0109 13:33:40.502056 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/89af3420-c053-4740-af93-3c64ebe30d82-kubelet-dir\") pod \"89af3420-c053-4740-af93-3c64ebe30d82\" (UID: \"89af3420-c053-4740-af93-3c64ebe30d82\") " Jan 09 13:33:40 crc kubenswrapper[4919]: I0109 13:33:40.502171 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89af3420-c053-4740-af93-3c64ebe30d82-kube-api-access\") pod \"89af3420-c053-4740-af93-3c64ebe30d82\" (UID: \"89af3420-c053-4740-af93-3c64ebe30d82\") " Jan 09 13:33:40 crc kubenswrapper[4919]: I0109 13:33:40.502377 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89af3420-c053-4740-af93-3c64ebe30d82-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "89af3420-c053-4740-af93-3c64ebe30d82" (UID: "89af3420-c053-4740-af93-3c64ebe30d82"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 13:33:40 crc kubenswrapper[4919]: I0109 13:33:40.508468 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89af3420-c053-4740-af93-3c64ebe30d82-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "89af3420-c053-4740-af93-3c64ebe30d82" (UID: "89af3420-c053-4740-af93-3c64ebe30d82"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:33:40 crc kubenswrapper[4919]: I0109 13:33:40.603461 4919 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/89af3420-c053-4740-af93-3c64ebe30d82-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 09 13:33:40 crc kubenswrapper[4919]: I0109 13:33:40.603934 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89af3420-c053-4740-af93-3c64ebe30d82-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 09 13:33:40 crc kubenswrapper[4919]: I0109 13:33:40.765878 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45f52b0c-e774-4041-a2f8-7e17214d9c54" path="/var/lib/kubelet/pods/45f52b0c-e774-4041-a2f8-7e17214d9c54/volumes" Jan 09 13:33:40 crc kubenswrapper[4919]: I0109 13:33:40.767306 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c242f3a0-b995-4b4d-a863-4bf6fd19c0a4" path="/var/lib/kubelet/pods/c242f3a0-b995-4b4d-a863-4bf6fd19c0a4/volumes" Jan 09 13:33:40 crc kubenswrapper[4919]: I0109 13:33:40.815026 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6b4fcf649d-jt4m9"] Jan 09 13:33:40 crc kubenswrapper[4919]: E0109 13:33:40.815341 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89af3420-c053-4740-af93-3c64ebe30d82" containerName="pruner" Jan 09 13:33:40 crc kubenswrapper[4919]: I0109 13:33:40.815358 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="89af3420-c053-4740-af93-3c64ebe30d82" containerName="pruner" Jan 09 13:33:40 crc kubenswrapper[4919]: I0109 13:33:40.815489 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="89af3420-c053-4740-af93-3c64ebe30d82" containerName="pruner" Jan 09 13:33:40 crc kubenswrapper[4919]: I0109 13:33:40.815981 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6b4fcf649d-jt4m9" Jan 09 13:33:40 crc kubenswrapper[4919]: I0109 13:33:40.820961 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 09 13:33:40 crc kubenswrapper[4919]: I0109 13:33:40.821410 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 09 13:33:40 crc kubenswrapper[4919]: I0109 13:33:40.821470 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 09 13:33:40 crc kubenswrapper[4919]: I0109 13:33:40.823265 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 09 13:33:40 crc kubenswrapper[4919]: I0109 13:33:40.823294 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 09 13:33:40 crc kubenswrapper[4919]: I0109 13:33:40.824191 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 09 13:33:40 crc kubenswrapper[4919]: I0109 13:33:40.894074 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 09 13:33:40 crc kubenswrapper[4919]: I0109 13:33:40.896415 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6b4fcf649d-jt4m9"] Jan 09 13:33:40 crc kubenswrapper[4919]: I0109 13:33:40.907077 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a-serving-cert\") pod \"controller-manager-6b4fcf649d-jt4m9\" (UID: \"a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a\") " pod="openshift-controller-manager/controller-manager-6b4fcf649d-jt4m9" Jan 09 13:33:40 crc kubenswrapper[4919]: I0109 13:33:40.907137 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxcf5\" (UniqueName: \"kubernetes.io/projected/a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a-kube-api-access-wxcf5\") pod \"controller-manager-6b4fcf649d-jt4m9\" (UID: \"a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a\") " pod="openshift-controller-manager/controller-manager-6b4fcf649d-jt4m9" Jan 09 13:33:40 crc kubenswrapper[4919]: I0109 13:33:40.907195 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a-proxy-ca-bundles\") pod \"controller-manager-6b4fcf649d-jt4m9\" (UID: \"a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a\") " pod="openshift-controller-manager/controller-manager-6b4fcf649d-jt4m9" Jan 09 13:33:40 crc kubenswrapper[4919]: I0109 13:33:40.907269 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a-client-ca\") pod \"controller-manager-6b4fcf649d-jt4m9\" (UID: \"a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a\") " pod="openshift-controller-manager/controller-manager-6b4fcf649d-jt4m9" Jan 09 13:33:40 crc kubenswrapper[4919]: I0109 13:33:40.907301 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a-config\") pod \"controller-manager-6b4fcf649d-jt4m9\" (UID: \"a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a\") " pod="openshift-controller-manager/controller-manager-6b4fcf649d-jt4m9" Jan 09 13:33:41 crc kubenswrapper[4919]: I0109 13:33:41.008295 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a-proxy-ca-bundles\") pod \"controller-manager-6b4fcf649d-jt4m9\" (UID: \"a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a\") " pod="openshift-controller-manager/controller-manager-6b4fcf649d-jt4m9" Jan 09 13:33:41 crc kubenswrapper[4919]: I0109 13:33:41.008372 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a-client-ca\") pod \"controller-manager-6b4fcf649d-jt4m9\" (UID: \"a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a\") " pod="openshift-controller-manager/controller-manager-6b4fcf649d-jt4m9" Jan 09 13:33:41 crc kubenswrapper[4919]: I0109 13:33:41.008403 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a-config\") pod \"controller-manager-6b4fcf649d-jt4m9\" (UID: \"a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a\") " pod="openshift-controller-manager/controller-manager-6b4fcf649d-jt4m9" Jan 09 13:33:41 crc kubenswrapper[4919]: I0109 13:33:41.008450 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a-serving-cert\") pod \"controller-manager-6b4fcf649d-jt4m9\" (UID: \"a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a\") " pod="openshift-controller-manager/controller-manager-6b4fcf649d-jt4m9" Jan 09 13:33:41 crc kubenswrapper[4919]: I0109 13:33:41.008484 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxcf5\" (UniqueName: \"kubernetes.io/projected/a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a-kube-api-access-wxcf5\") pod \"controller-manager-6b4fcf649d-jt4m9\" (UID: \"a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a\") " pod="openshift-controller-manager/controller-manager-6b4fcf649d-jt4m9" Jan 09 13:33:41 crc kubenswrapper[4919]: I0109 13:33:41.009456 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a-client-ca\") pod \"controller-manager-6b4fcf649d-jt4m9\" (UID: \"a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a\") " pod="openshift-controller-manager/controller-manager-6b4fcf649d-jt4m9" Jan 09 13:33:41 crc kubenswrapper[4919]: I0109 13:33:41.009695 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a-proxy-ca-bundles\") pod \"controller-manager-6b4fcf649d-jt4m9\" (UID: \"a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a\") " pod="openshift-controller-manager/controller-manager-6b4fcf649d-jt4m9" Jan 09 13:33:41 crc kubenswrapper[4919]: I0109 13:33:41.010572 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a-config\") pod \"controller-manager-6b4fcf649d-jt4m9\" (UID: \"a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a\") " pod="openshift-controller-manager/controller-manager-6b4fcf649d-jt4m9" 
Jan 09 13:33:41 crc kubenswrapper[4919]: I0109 13:33:41.013528 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a-serving-cert\") pod \"controller-manager-6b4fcf649d-jt4m9\" (UID: \"a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a\") " pod="openshift-controller-manager/controller-manager-6b4fcf649d-jt4m9"
Jan 09 13:33:41 crc kubenswrapper[4919]: I0109 13:33:41.029979 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxcf5\" (UniqueName: \"kubernetes.io/projected/a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a-kube-api-access-wxcf5\") pod \"controller-manager-6b4fcf649d-jt4m9\" (UID: \"a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a\") " pod="openshift-controller-manager/controller-manager-6b4fcf649d-jt4m9"
Jan 09 13:33:41 crc kubenswrapper[4919]: I0109 13:33:41.177400 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"89af3420-c053-4740-af93-3c64ebe30d82","Type":"ContainerDied","Data":"fd2e2ca20e32447bd6ed2d492afd88b61c2a9efcb993d1a2d6be856873064711"}
Jan 09 13:33:41 crc kubenswrapper[4919]: I0109 13:33:41.177473 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd2e2ca20e32447bd6ed2d492afd88b61c2a9efcb993d1a2d6be856873064711"
Jan 09 13:33:41 crc kubenswrapper[4919]: I0109 13:33:41.177546 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 09 13:33:41 crc kubenswrapper[4919]: I0109 13:33:41.211251 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b4fcf649d-jt4m9"
Jan 09 13:33:41 crc kubenswrapper[4919]: I0109 13:33:41.442597 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6b4fcf649d-jt4m9"]
Jan 09 13:33:42 crc kubenswrapper[4919]: I0109 13:33:42.187863 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b4fcf649d-jt4m9" event={"ID":"a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a","Type":"ContainerStarted","Data":"341697f9d87f4cb00589f29fe6e619bc41ad943b812d59475110c522fe727dee"}
Jan 09 13:33:42 crc kubenswrapper[4919]: I0109 13:33:42.188256 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b4fcf649d-jt4m9" event={"ID":"a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a","Type":"ContainerStarted","Data":"4fb1ac1055ede55a224b3155ead09b0a9f53fe2958cc51a78735196859b735f6"}
Jan 09 13:33:42 crc kubenswrapper[4919]: I0109 13:33:42.188356 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6b4fcf649d-jt4m9"
Jan 09 13:33:42 crc kubenswrapper[4919]: I0109 13:33:42.195294 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6b4fcf649d-jt4m9"
Jan 09 13:33:42 crc kubenswrapper[4919]: I0109 13:33:42.214529 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6b4fcf649d-jt4m9" podStartSLOduration=16.214504187 podStartE2EDuration="16.214504187s" podCreationTimestamp="2026-01-09 13:33:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:33:42.212835894 +0000 UTC m=+201.760675384" watchObservedRunningTime="2026-01-09 13:33:42.214504187 +0000 UTC m=+201.762343657"
UTC m=+201.760675384" watchObservedRunningTime="2026-01-09 13:33:42.214504187 +0000 UTC m=+201.762343657" Jan 09 13:33:46 crc kubenswrapper[4919]: I0109 13:33:46.248991 4919 generic.go:334] "Generic (PLEG): container finished" podID="11f13609-8588-44c4-b426-db71e94e93dd" containerID="422d2367d00584eaff7db0ea4060d12910f96a51fe167a71259f19639059623d" exitCode=0 Jan 09 13:33:46 crc kubenswrapper[4919]: I0109 13:33:46.249055 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-69xx2" event={"ID":"11f13609-8588-44c4-b426-db71e94e93dd","Type":"ContainerDied","Data":"422d2367d00584eaff7db0ea4060d12910f96a51fe167a71259f19639059623d"} Jan 09 13:33:48 crc kubenswrapper[4919]: I0109 13:33:48.262124 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c4dtn" event={"ID":"870947c0-608c-48f9-a0c7-5f81a08255bf","Type":"ContainerStarted","Data":"aa8872eec28800c7f47d431d85efe3a26c5c09c3259b927e854c5a0c9d7bcb07"} Jan 09 13:33:48 crc kubenswrapper[4919]: I0109 13:33:48.266004 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-69xx2" event={"ID":"11f13609-8588-44c4-b426-db71e94e93dd","Type":"ContainerStarted","Data":"09312b0b662f12d2007060760ee3c2a4f2c6a3111d53b9f5df41b6b631f4b201"} Jan 09 13:33:48 crc kubenswrapper[4919]: I0109 13:33:48.300991 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-69xx2" podStartSLOduration=2.862742601 podStartE2EDuration="59.300972549s" podCreationTimestamp="2026-01-09 13:32:49 +0000 UTC" firstStartedPulling="2026-01-09 13:32:51.447389522 +0000 UTC m=+150.995228972" lastFinishedPulling="2026-01-09 13:33:47.88561947 +0000 UTC m=+207.433458920" observedRunningTime="2026-01-09 13:33:48.299721257 +0000 UTC m=+207.847560697" watchObservedRunningTime="2026-01-09 13:33:48.300972549 +0000 UTC m=+207.848811999" Jan 09 13:33:49 crc kubenswrapper[4919]: I0109 13:33:49.274506 4919 generic.go:334] "Generic (PLEG): container finished" podID="870947c0-608c-48f9-a0c7-5f81a08255bf" containerID="aa8872eec28800c7f47d431d85efe3a26c5c09c3259b927e854c5a0c9d7bcb07" exitCode=0 Jan 09 13:33:49 crc kubenswrapper[4919]: I0109 13:33:49.274580 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c4dtn" event={"ID":"870947c0-608c-48f9-a0c7-5f81a08255bf","Type":"ContainerDied","Data":"aa8872eec28800c7f47d431d85efe3a26c5c09c3259b927e854c5a0c9d7bcb07"} Jan 09 13:33:49 crc kubenswrapper[4919]: I0109 13:33:49.569911 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-69xx2" Jan 09 13:33:49 crc kubenswrapper[4919]: I0109 13:33:49.570469 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-69xx2" Jan 09 13:33:50 crc kubenswrapper[4919]: I0109 13:33:50.283878 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c4dtn" event={"ID":"870947c0-608c-48f9-a0c7-5f81a08255bf","Type":"ContainerStarted","Data":"2c193d34a2b3ff3a626b06d2705d139ede2dc521122924e7a5892646345b19cb"} Jan 09 13:33:50 crc kubenswrapper[4919]: I0109 13:33:50.286688 4919 generic.go:334] "Generic (PLEG): container finished" podID="ef0b4efa-7cc4-48d3-be0e-7406620f6a84" containerID="2d567bc54177af0ce81c71cb038e8723df384b9c99a31a1f13b4bd4ca90af943" exitCode=0 Jan 09 13:33:50 crc kubenswrapper[4919]: I0109 13:33:50.287144 
4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dg7pw" event={"ID":"ef0b4efa-7cc4-48d3-be0e-7406620f6a84","Type":"ContainerDied","Data":"2d567bc54177af0ce81c71cb038e8723df384b9c99a31a1f13b4bd4ca90af943"} Jan 09 13:33:50 crc kubenswrapper[4919]: I0109 13:33:50.326579 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-c4dtn" podStartSLOduration=2.206217063 podStartE2EDuration="58.326551975s" podCreationTimestamp="2026-01-09 13:32:52 +0000 UTC" firstStartedPulling="2026-01-09 13:32:53.587316481 +0000 UTC m=+153.135155931" lastFinishedPulling="2026-01-09 13:33:49.707651393 +0000 UTC m=+209.255490843" observedRunningTime="2026-01-09 13:33:50.315128111 +0000 UTC m=+209.862967561" watchObservedRunningTime="2026-01-09 13:33:50.326551975 +0000 UTC m=+209.874391425" Jan 09 13:33:50 crc kubenswrapper[4919]: I0109 13:33:50.626583 4919 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-69xx2" podUID="11f13609-8588-44c4-b426-db71e94e93dd" containerName="registry-server" probeResult="failure" output=< Jan 09 13:33:50 crc kubenswrapper[4919]: timeout: failed to connect service ":50051" within 1s Jan 09 13:33:50 crc kubenswrapper[4919]: > Jan 09 13:33:51 crc kubenswrapper[4919]: I0109 13:33:51.247386 4919 patch_prober.go:28] interesting pod/machine-config-daemon-9m5lv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 13:33:51 crc kubenswrapper[4919]: I0109 13:33:51.247817 4919 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 13:33:51 crc kubenswrapper[4919]: I0109 13:33:51.247893 4919 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" Jan 09 13:33:51 crc kubenswrapper[4919]: I0109 13:33:51.248819 4919 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5b529d70339efbd46e30e992ceb5afe2db97bdde02f50798c9eb61dfa23d7f7e"} pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 09 13:33:51 crc kubenswrapper[4919]: I0109 13:33:51.248903 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" containerID="cri-o://5b529d70339efbd46e30e992ceb5afe2db97bdde02f50798c9eb61dfa23d7f7e" gracePeriod=600 Jan 09 13:33:51 crc kubenswrapper[4919]: I0109 13:33:51.297191 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dg7pw" event={"ID":"ef0b4efa-7cc4-48d3-be0e-7406620f6a84","Type":"ContainerStarted","Data":"3b654d149a5db355e0134087bf5729140791c355e05095d96c65598aa8740795"} Jan 09 13:33:51 crc kubenswrapper[4919]: I0109 13:33:51.530657 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-dg7pw" Jan 09 13:33:51 crc kubenswrapper[4919]: I0109 13:33:51.535520 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dg7pw" Jan 09 13:33:51 crc kubenswrapper[4919]: I0109 13:33:51.783906 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dg7pw" podStartSLOduration=2.507612536 podStartE2EDuration="1m0.783882673s" podCreationTimestamp="2026-01-09 13:32:51 +0000 UTC" firstStartedPulling="2026-01-09 13:32:52.544658029 +0000 UTC m=+152.092497479" lastFinishedPulling="2026-01-09 13:33:50.820928166 +0000 UTC m=+210.368767616" observedRunningTime="2026-01-09 13:33:51.32984099 +0000 UTC m=+210.877680440" watchObservedRunningTime="2026-01-09 13:33:51.783882673 +0000 UTC m=+211.331722133" Jan 09 13:33:52 crc kubenswrapper[4919]: I0109 13:33:52.306928 4919 generic.go:334] "Generic (PLEG): container finished" podID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerID="5b529d70339efbd46e30e992ceb5afe2db97bdde02f50798c9eb61dfa23d7f7e" exitCode=0 Jan 09 13:33:52 crc kubenswrapper[4919]: I0109 13:33:52.307024 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" event={"ID":"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c","Type":"ContainerDied","Data":"5b529d70339efbd46e30e992ceb5afe2db97bdde02f50798c9eb61dfa23d7f7e"} Jan 09 13:33:52 crc kubenswrapper[4919]: I0109 13:33:52.483195 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-c4dtn" Jan 09 13:33:52 crc kubenswrapper[4919]: I0109 13:33:52.483535 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-c4dtn" Jan 09 13:33:52 crc kubenswrapper[4919]: I0109 13:33:52.573262 4919 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-dg7pw" podUID="ef0b4efa-7cc4-48d3-be0e-7406620f6a84" containerName="registry-server" probeResult="failure" output=< Jan 09 13:33:52 crc kubenswrapper[4919]: timeout: failed to connect service ":50051" within 1s Jan 09 13:33:52 crc kubenswrapper[4919]: > Jan 09 13:33:53 crc kubenswrapper[4919]: I0109 13:33:53.316056 4919 generic.go:334] "Generic (PLEG): container finished" podID="691c6d86-b150-4576-872d-004862dcbd22" containerID="8126e5f1479e8f4d893c01ae9917d30a35e74befe2a19c7444efa00f29783554" exitCode=0 Jan 09 13:33:53 crc kubenswrapper[4919]: I0109 13:33:53.316130 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xvr9v" event={"ID":"691c6d86-b150-4576-872d-004862dcbd22","Type":"ContainerDied","Data":"8126e5f1479e8f4d893c01ae9917d30a35e74befe2a19c7444efa00f29783554"} Jan 09 13:33:53 crc kubenswrapper[4919]: I0109 13:33:53.323852 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" event={"ID":"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c","Type":"ContainerStarted","Data":"37d9c7803cd79faa7ac0a37f20abf614a5efbd31913cca12e52b150e758b14ec"} Jan 09 13:33:53 crc kubenswrapper[4919]: I0109 13:33:53.532408 4919 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-c4dtn" podUID="870947c0-608c-48f9-a0c7-5f81a08255bf" containerName="registry-server" probeResult="failure" output=< Jan 09 13:33:53 crc kubenswrapper[4919]: timeout: failed to connect service ":50051" within 1s Jan 09 13:33:53 
Jan 09 13:33:54 crc kubenswrapper[4919]: I0109 13:33:54.330925 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tf6wk" event={"ID":"18b90207-0827-4db3-b0ca-e622b58ed504","Type":"ContainerStarted","Data":"ba66c717efef0f74a13774a8cd8d5f615dd5caf50e19da6c10c5c98de9faa3f2"}
Jan 09 13:33:54 crc kubenswrapper[4919]: I0109 13:33:54.332836 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xvr9v" event={"ID":"691c6d86-b150-4576-872d-004862dcbd22","Type":"ContainerStarted","Data":"5aa0e405a0e9a962dc34bb62238a982e277577db22fe26419780420e7db19630"}
Jan 09 13:33:54 crc kubenswrapper[4919]: I0109 13:33:54.337484 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qx45q" event={"ID":"1ce56338-b322-46a4-b02c-2ae2b1bb5149","Type":"ContainerStarted","Data":"70c4800f54b9cea09c32e486f293d325207195eaee1db1117bfac6ad89c5a551"}
Jan 09 13:33:54 crc kubenswrapper[4919]: I0109 13:33:54.339868 4919 generic.go:334] "Generic (PLEG): container finished" podID="3bdb482c-0d44-43b3-b74f-d0ba01a861b0" containerID="adaea68e367263aa61cc347ceaede1e277478c4cba4fb116fe255889cbd9dd49" exitCode=0
Jan 09 13:33:54 crc kubenswrapper[4919]: I0109 13:33:54.339915 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bj7bg" event={"ID":"3bdb482c-0d44-43b3-b74f-d0ba01a861b0","Type":"ContainerDied","Data":"adaea68e367263aa61cc347ceaede1e277478c4cba4fb116fe255889cbd9dd49"}
Jan 09 13:33:54 crc kubenswrapper[4919]: I0109 13:33:54.342198 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xppnp" event={"ID":"a7ddc148-0c1a-496f-b58b-c88f30af7344","Type":"ContainerStarted","Data":"f6da9fc2fcaaa62b53f662d1ee96c34b89419a98540daf970139e8373082b94c"}
Jan 09 13:33:54 crc kubenswrapper[4919]: I0109 13:33:54.403404 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xvr9v" podStartSLOduration=2.709861686 podStartE2EDuration="1m6.403376969s" podCreationTimestamp="2026-01-09 13:32:48 +0000 UTC" firstStartedPulling="2026-01-09 13:32:50.390015513 +0000 UTC m=+149.937854963" lastFinishedPulling="2026-01-09 13:33:54.083530796 +0000 UTC m=+213.631370246" observedRunningTime="2026-01-09 13:33:54.399482079 +0000 UTC m=+213.947321529" watchObservedRunningTime="2026-01-09 13:33:54.403376969 +0000 UTC m=+213.951216419"
Jan 09 13:33:55 crc kubenswrapper[4919]: I0109 13:33:55.356384 4919 generic.go:334] "Generic (PLEG): container finished" podID="1ce56338-b322-46a4-b02c-2ae2b1bb5149" containerID="70c4800f54b9cea09c32e486f293d325207195eaee1db1117bfac6ad89c5a551" exitCode=0
Jan 09 13:33:55 crc kubenswrapper[4919]: I0109 13:33:55.356485 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qx45q" event={"ID":"1ce56338-b322-46a4-b02c-2ae2b1bb5149","Type":"ContainerDied","Data":"70c4800f54b9cea09c32e486f293d325207195eaee1db1117bfac6ad89c5a551"}
Jan 09 13:33:55 crc kubenswrapper[4919]: I0109 13:33:55.365236 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bj7bg" event={"ID":"3bdb482c-0d44-43b3-b74f-d0ba01a861b0","Type":"ContainerStarted","Data":"ce894f8334796fdbd85d158a44a057da3822ed76d9d4f803b57cd61d80aa3072"}
Jan 09 13:33:55 crc kubenswrapper[4919]: I0109 13:33:55.367943 4919 generic.go:334] "Generic (PLEG): container finished" podID="a7ddc148-0c1a-496f-b58b-c88f30af7344" containerID="f6da9fc2fcaaa62b53f662d1ee96c34b89419a98540daf970139e8373082b94c" exitCode=0
podID="a7ddc148-0c1a-496f-b58b-c88f30af7344" containerID="f6da9fc2fcaaa62b53f662d1ee96c34b89419a98540daf970139e8373082b94c" exitCode=0 Jan 09 13:33:55 crc kubenswrapper[4919]: I0109 13:33:55.367999 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xppnp" event={"ID":"a7ddc148-0c1a-496f-b58b-c88f30af7344","Type":"ContainerDied","Data":"f6da9fc2fcaaa62b53f662d1ee96c34b89419a98540daf970139e8373082b94c"} Jan 09 13:33:55 crc kubenswrapper[4919]: I0109 13:33:55.370493 4919 generic.go:334] "Generic (PLEG): container finished" podID="18b90207-0827-4db3-b0ca-e622b58ed504" containerID="ba66c717efef0f74a13774a8cd8d5f615dd5caf50e19da6c10c5c98de9faa3f2" exitCode=0 Jan 09 13:33:55 crc kubenswrapper[4919]: I0109 13:33:55.370552 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tf6wk" event={"ID":"18b90207-0827-4db3-b0ca-e622b58ed504","Type":"ContainerDied","Data":"ba66c717efef0f74a13774a8cd8d5f615dd5caf50e19da6c10c5c98de9faa3f2"} Jan 09 13:33:55 crc kubenswrapper[4919]: I0109 13:33:55.401244 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bj7bg" podStartSLOduration=3.135117868 podStartE2EDuration="1m5.401203013s" podCreationTimestamp="2026-01-09 13:32:50 +0000 UTC" firstStartedPulling="2026-01-09 13:32:52.539371221 +0000 UTC m=+152.087210671" lastFinishedPulling="2026-01-09 13:33:54.805456366 +0000 UTC m=+214.353295816" observedRunningTime="2026-01-09 13:33:55.400476784 +0000 UTC m=+214.948316234" watchObservedRunningTime="2026-01-09 13:33:55.401203013 +0000 UTC m=+214.949042463" Jan 09 13:33:56 crc kubenswrapper[4919]: I0109 13:33:56.382415 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xppnp" event={"ID":"a7ddc148-0c1a-496f-b58b-c88f30af7344","Type":"ContainerStarted","Data":"92aaeeeb943f031a08e55438ddef3839d2fd685d7eb52d91352133c17f84d9bd"} Jan 09 13:33:56 crc kubenswrapper[4919]: I0109 13:33:56.384734 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tf6wk" event={"ID":"18b90207-0827-4db3-b0ca-e622b58ed504","Type":"ContainerStarted","Data":"bca9b19484feb458da710cb66e1f5719f17ae62b3f275870e52e3a5a465fbde7"} Jan 09 13:33:56 crc kubenswrapper[4919]: I0109 13:33:56.386248 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qx45q" event={"ID":"1ce56338-b322-46a4-b02c-2ae2b1bb5149","Type":"ContainerStarted","Data":"20569914b88b8dffb208f8d743645f26ec49cb7f5ad5daf956087ce43e69dc76"} Jan 09 13:33:56 crc kubenswrapper[4919]: I0109 13:33:56.411122 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xppnp" podStartSLOduration=2.949671811 podStartE2EDuration="1m8.411104538s" podCreationTimestamp="2026-01-09 13:32:48 +0000 UTC" firstStartedPulling="2026-01-09 13:32:50.390904265 +0000 UTC m=+149.938743715" lastFinishedPulling="2026-01-09 13:33:55.852336992 +0000 UTC m=+215.400176442" observedRunningTime="2026-01-09 13:33:56.407799723 +0000 UTC m=+215.955639173" watchObservedRunningTime="2026-01-09 13:33:56.411104538 +0000 UTC m=+215.958943988" Jan 09 13:33:56 crc kubenswrapper[4919]: I0109 13:33:56.430834 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qx45q" podStartSLOduration=3.283656799 podStartE2EDuration="1m5.430813574s" 
podCreationTimestamp="2026-01-09 13:32:51 +0000 UTC" firstStartedPulling="2026-01-09 13:32:53.604456687 +0000 UTC m=+153.152296137" lastFinishedPulling="2026-01-09 13:33:55.751613462 +0000 UTC m=+215.299452912" observedRunningTime="2026-01-09 13:33:56.4287022 +0000 UTC m=+215.976541650" watchObservedRunningTime="2026-01-09 13:33:56.430813574 +0000 UTC m=+215.978653024" Jan 09 13:33:56 crc kubenswrapper[4919]: I0109 13:33:56.453844 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tf6wk" podStartSLOduration=3.093136339 podStartE2EDuration="1m8.453820886s" podCreationTimestamp="2026-01-09 13:32:48 +0000 UTC" firstStartedPulling="2026-01-09 13:32:50.411988266 +0000 UTC m=+149.959827716" lastFinishedPulling="2026-01-09 13:33:55.772672813 +0000 UTC m=+215.320512263" observedRunningTime="2026-01-09 13:33:56.450135951 +0000 UTC m=+215.997975401" watchObservedRunningTime="2026-01-09 13:33:56.453820886 +0000 UTC m=+216.001660336" Jan 09 13:33:58 crc kubenswrapper[4919]: I0109 13:33:58.915658 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tf6wk" Jan 09 13:33:58 crc kubenswrapper[4919]: I0109 13:33:58.916200 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tf6wk" Jan 09 13:33:58 crc kubenswrapper[4919]: I0109 13:33:58.980584 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tf6wk" Jan 09 13:33:59 crc kubenswrapper[4919]: I0109 13:33:59.140231 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xvr9v" Jan 09 13:33:59 crc kubenswrapper[4919]: I0109 13:33:59.140289 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xvr9v" Jan 09 13:33:59 crc kubenswrapper[4919]: I0109 13:33:59.193155 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xvr9v" Jan 09 13:33:59 crc kubenswrapper[4919]: I0109 13:33:59.306802 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xppnp" Jan 09 13:33:59 crc kubenswrapper[4919]: I0109 13:33:59.306857 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xppnp" Jan 09 13:33:59 crc kubenswrapper[4919]: I0109 13:33:59.373610 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xppnp" Jan 09 13:33:59 crc kubenswrapper[4919]: I0109 13:33:59.454980 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xvr9v" Jan 09 13:33:59 crc kubenswrapper[4919]: I0109 13:33:59.608756 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-69xx2" Jan 09 13:33:59 crc kubenswrapper[4919]: I0109 13:33:59.648678 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-69xx2" Jan 09 13:34:01 crc kubenswrapper[4919]: I0109 13:34:01.142861 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bj7bg" Jan 09 13:34:01 crc kubenswrapper[4919]: I0109 13:34:01.142941 4919 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bj7bg" Jan 09 13:34:01 crc kubenswrapper[4919]: I0109 13:34:01.198609 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bj7bg" Jan 09 13:34:01 crc kubenswrapper[4919]: I0109 13:34:01.460515 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bj7bg" Jan 09 13:34:01 crc kubenswrapper[4919]: I0109 13:34:01.568374 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dg7pw" Jan 09 13:34:01 crc kubenswrapper[4919]: I0109 13:34:01.636602 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dg7pw" Jan 09 13:34:01 crc kubenswrapper[4919]: I0109 13:34:01.901754 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-69xx2"] Jan 09 13:34:01 crc kubenswrapper[4919]: I0109 13:34:01.902004 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-69xx2" podUID="11f13609-8588-44c4-b426-db71e94e93dd" containerName="registry-server" containerID="cri-o://09312b0b662f12d2007060760ee3c2a4f2c6a3111d53b9f5df41b6b631f4b201" gracePeriod=2 Jan 09 13:34:02 crc kubenswrapper[4919]: I0109 13:34:02.087023 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qx45q" Jan 09 13:34:02 crc kubenswrapper[4919]: I0109 13:34:02.087082 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qx45q" Jan 09 13:34:02 crc kubenswrapper[4919]: I0109 13:34:02.132634 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qx45q" Jan 09 13:34:02 crc kubenswrapper[4919]: I0109 13:34:02.474284 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qx45q" Jan 09 13:34:02 crc kubenswrapper[4919]: I0109 13:34:02.550987 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-c4dtn" Jan 09 13:34:02 crc kubenswrapper[4919]: I0109 13:34:02.617993 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-c4dtn" Jan 09 13:34:03 crc kubenswrapper[4919]: I0109 13:34:03.438634 4919 generic.go:334] "Generic (PLEG): container finished" podID="11f13609-8588-44c4-b426-db71e94e93dd" containerID="09312b0b662f12d2007060760ee3c2a4f2c6a3111d53b9f5df41b6b631f4b201" exitCode=0 Jan 09 13:34:03 crc kubenswrapper[4919]: I0109 13:34:03.439070 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-69xx2" event={"ID":"11f13609-8588-44c4-b426-db71e94e93dd","Type":"ContainerDied","Data":"09312b0b662f12d2007060760ee3c2a4f2c6a3111d53b9f5df41b6b631f4b201"} Jan 09 13:34:04 crc kubenswrapper[4919]: I0109 13:34:04.304406 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dg7pw"] Jan 09 13:34:04 crc kubenswrapper[4919]: I0109 13:34:04.305358 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dg7pw" podUID="ef0b4efa-7cc4-48d3-be0e-7406620f6a84" 
containerName="registry-server" containerID="cri-o://3b654d149a5db355e0134087bf5729140791c355e05095d96c65598aa8740795" gracePeriod=2 Jan 09 13:34:04 crc kubenswrapper[4919]: I0109 13:34:04.403540 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-69xx2" Jan 09 13:34:04 crc kubenswrapper[4919]: I0109 13:34:04.449091 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-69xx2" event={"ID":"11f13609-8588-44c4-b426-db71e94e93dd","Type":"ContainerDied","Data":"a4d3d8fb93de41510657ba41e306ff1f3e9c2648e5bb666cb9a9720f586d39ec"} Jan 09 13:34:04 crc kubenswrapper[4919]: I0109 13:34:04.449148 4919 scope.go:117] "RemoveContainer" containerID="09312b0b662f12d2007060760ee3c2a4f2c6a3111d53b9f5df41b6b631f4b201" Jan 09 13:34:04 crc kubenswrapper[4919]: I0109 13:34:04.449156 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-69xx2" Jan 09 13:34:04 crc kubenswrapper[4919]: I0109 13:34:04.467009 4919 scope.go:117] "RemoveContainer" containerID="422d2367d00584eaff7db0ea4060d12910f96a51fe167a71259f19639059623d" Jan 09 13:34:04 crc kubenswrapper[4919]: I0109 13:34:04.490264 4919 scope.go:117] "RemoveContainer" containerID="c15cf2ae5c5226bce4fe4ede16a8d0e8f89e512de6949bdc1e883ca4a3a02113" Jan 09 13:34:04 crc kubenswrapper[4919]: I0109 13:34:04.494334 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqb7t\" (UniqueName: \"kubernetes.io/projected/11f13609-8588-44c4-b426-db71e94e93dd-kube-api-access-pqb7t\") pod \"11f13609-8588-44c4-b426-db71e94e93dd\" (UID: \"11f13609-8588-44c4-b426-db71e94e93dd\") " Jan 09 13:34:04 crc kubenswrapper[4919]: I0109 13:34:04.494507 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11f13609-8588-44c4-b426-db71e94e93dd-utilities\") pod \"11f13609-8588-44c4-b426-db71e94e93dd\" (UID: \"11f13609-8588-44c4-b426-db71e94e93dd\") " Jan 09 13:34:04 crc kubenswrapper[4919]: I0109 13:34:04.494555 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11f13609-8588-44c4-b426-db71e94e93dd-catalog-content\") pod \"11f13609-8588-44c4-b426-db71e94e93dd\" (UID: \"11f13609-8588-44c4-b426-db71e94e93dd\") " Jan 09 13:34:04 crc kubenswrapper[4919]: I0109 13:34:04.495541 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11f13609-8588-44c4-b426-db71e94e93dd-utilities" (OuterVolumeSpecName: "utilities") pod "11f13609-8588-44c4-b426-db71e94e93dd" (UID: "11f13609-8588-44c4-b426-db71e94e93dd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:34:04 crc kubenswrapper[4919]: I0109 13:34:04.504429 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11f13609-8588-44c4-b426-db71e94e93dd-kube-api-access-pqb7t" (OuterVolumeSpecName: "kube-api-access-pqb7t") pod "11f13609-8588-44c4-b426-db71e94e93dd" (UID: "11f13609-8588-44c4-b426-db71e94e93dd"). InnerVolumeSpecName "kube-api-access-pqb7t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:34:04 crc kubenswrapper[4919]: I0109 13:34:04.544552 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11f13609-8588-44c4-b426-db71e94e93dd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "11f13609-8588-44c4-b426-db71e94e93dd" (UID: "11f13609-8588-44c4-b426-db71e94e93dd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:34:04 crc kubenswrapper[4919]: I0109 13:34:04.596592 4919 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11f13609-8588-44c4-b426-db71e94e93dd-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 13:34:04 crc kubenswrapper[4919]: I0109 13:34:04.596635 4919 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11f13609-8588-44c4-b426-db71e94e93dd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 13:34:04 crc kubenswrapper[4919]: I0109 13:34:04.596651 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqb7t\" (UniqueName: \"kubernetes.io/projected/11f13609-8588-44c4-b426-db71e94e93dd-kube-api-access-pqb7t\") on node \"crc\" DevicePath \"\"" Jan 09 13:34:04 crc kubenswrapper[4919]: I0109 13:34:04.790261 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-69xx2"] Jan 09 13:34:04 crc kubenswrapper[4919]: I0109 13:34:04.796377 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-69xx2"] Jan 09 13:34:05 crc kubenswrapper[4919]: I0109 13:34:05.457012 4919 generic.go:334] "Generic (PLEG): container finished" podID="ef0b4efa-7cc4-48d3-be0e-7406620f6a84" containerID="3b654d149a5db355e0134087bf5729140791c355e05095d96c65598aa8740795" exitCode=0 Jan 09 13:34:05 crc kubenswrapper[4919]: I0109 13:34:05.457044 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dg7pw" event={"ID":"ef0b4efa-7cc4-48d3-be0e-7406620f6a84","Type":"ContainerDied","Data":"3b654d149a5db355e0134087bf5729140791c355e05095d96c65598aa8740795"} Jan 09 13:34:05 crc kubenswrapper[4919]: I0109 13:34:05.530871 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dg7pw" Jan 09 13:34:05 crc kubenswrapper[4919]: I0109 13:34:05.611773 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef0b4efa-7cc4-48d3-be0e-7406620f6a84-utilities\") pod \"ef0b4efa-7cc4-48d3-be0e-7406620f6a84\" (UID: \"ef0b4efa-7cc4-48d3-be0e-7406620f6a84\") " Jan 09 13:34:05 crc kubenswrapper[4919]: I0109 13:34:05.612292 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsbdz\" (UniqueName: \"kubernetes.io/projected/ef0b4efa-7cc4-48d3-be0e-7406620f6a84-kube-api-access-xsbdz\") pod \"ef0b4efa-7cc4-48d3-be0e-7406620f6a84\" (UID: \"ef0b4efa-7cc4-48d3-be0e-7406620f6a84\") " Jan 09 13:34:05 crc kubenswrapper[4919]: I0109 13:34:05.612332 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef0b4efa-7cc4-48d3-be0e-7406620f6a84-catalog-content\") pod \"ef0b4efa-7cc4-48d3-be0e-7406620f6a84\" (UID: \"ef0b4efa-7cc4-48d3-be0e-7406620f6a84\") " Jan 09 13:34:05 crc kubenswrapper[4919]: I0109 13:34:05.612747 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef0b4efa-7cc4-48d3-be0e-7406620f6a84-utilities" (OuterVolumeSpecName: "utilities") pod "ef0b4efa-7cc4-48d3-be0e-7406620f6a84" (UID: "ef0b4efa-7cc4-48d3-be0e-7406620f6a84"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:34:05 crc kubenswrapper[4919]: I0109 13:34:05.617793 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef0b4efa-7cc4-48d3-be0e-7406620f6a84-kube-api-access-xsbdz" (OuterVolumeSpecName: "kube-api-access-xsbdz") pod "ef0b4efa-7cc4-48d3-be0e-7406620f6a84" (UID: "ef0b4efa-7cc4-48d3-be0e-7406620f6a84"). InnerVolumeSpecName "kube-api-access-xsbdz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:34:05 crc kubenswrapper[4919]: I0109 13:34:05.634927 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef0b4efa-7cc4-48d3-be0e-7406620f6a84-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ef0b4efa-7cc4-48d3-be0e-7406620f6a84" (UID: "ef0b4efa-7cc4-48d3-be0e-7406620f6a84"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:34:05 crc kubenswrapper[4919]: I0109 13:34:05.714426 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsbdz\" (UniqueName: \"kubernetes.io/projected/ef0b4efa-7cc4-48d3-be0e-7406620f6a84-kube-api-access-xsbdz\") on node \"crc\" DevicePath \"\"" Jan 09 13:34:05 crc kubenswrapper[4919]: I0109 13:34:05.714460 4919 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef0b4efa-7cc4-48d3-be0e-7406620f6a84-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 13:34:05 crc kubenswrapper[4919]: I0109 13:34:05.714475 4919 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef0b4efa-7cc4-48d3-be0e-7406620f6a84-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 13:34:06 crc kubenswrapper[4919]: I0109 13:34:06.035474 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6b4fcf649d-jt4m9"] Jan 09 13:34:06 crc kubenswrapper[4919]: I0109 13:34:06.035684 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6b4fcf649d-jt4m9" podUID="a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a" containerName="controller-manager" containerID="cri-o://341697f9d87f4cb00589f29fe6e619bc41ad943b812d59475110c522fe727dee" gracePeriod=30 Jan 09 13:34:06 crc kubenswrapper[4919]: I0109 13:34:06.130718 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b4b75f447-brtrg"] Jan 09 13:34:06 crc kubenswrapper[4919]: I0109 13:34:06.130924 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6b4b75f447-brtrg" podUID="228725da-7a75-4f10-9857-dc893be79fc8" containerName="route-controller-manager" containerID="cri-o://50d5fda6df8ed1f714b828e9705d6045c15c2db1c0888429ec8634c214cac21c" gracePeriod=30 Jan 09 13:34:06 crc kubenswrapper[4919]: I0109 13:34:06.465939 4919 generic.go:334] "Generic (PLEG): container finished" podID="228725da-7a75-4f10-9857-dc893be79fc8" containerID="50d5fda6df8ed1f714b828e9705d6045c15c2db1c0888429ec8634c214cac21c" exitCode=0 Jan 09 13:34:06 crc kubenswrapper[4919]: I0109 13:34:06.466016 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b4b75f447-brtrg" event={"ID":"228725da-7a75-4f10-9857-dc893be79fc8","Type":"ContainerDied","Data":"50d5fda6df8ed1f714b828e9705d6045c15c2db1c0888429ec8634c214cac21c"} Jan 09 13:34:06 crc kubenswrapper[4919]: I0109 13:34:06.468451 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dg7pw" event={"ID":"ef0b4efa-7cc4-48d3-be0e-7406620f6a84","Type":"ContainerDied","Data":"d1d29207addb0e82cf9e707c93398885a05bc47f25b98de20dc836cfd42c34ab"} Jan 09 13:34:06 crc kubenswrapper[4919]: I0109 13:34:06.468490 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dg7pw" Jan 09 13:34:06 crc kubenswrapper[4919]: I0109 13:34:06.468512 4919 scope.go:117] "RemoveContainer" containerID="3b654d149a5db355e0134087bf5729140791c355e05095d96c65598aa8740795" Jan 09 13:34:06 crc kubenswrapper[4919]: I0109 13:34:06.470876 4919 generic.go:334] "Generic (PLEG): container finished" podID="a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a" containerID="341697f9d87f4cb00589f29fe6e619bc41ad943b812d59475110c522fe727dee" exitCode=0 Jan 09 13:34:06 crc kubenswrapper[4919]: I0109 13:34:06.470914 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b4fcf649d-jt4m9" event={"ID":"a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a","Type":"ContainerDied","Data":"341697f9d87f4cb00589f29fe6e619bc41ad943b812d59475110c522fe727dee"} Jan 09 13:34:06 crc kubenswrapper[4919]: I0109 13:34:06.509163 4919 scope.go:117] "RemoveContainer" containerID="2d567bc54177af0ce81c71cb038e8723df384b9c99a31a1f13b4bd4ca90af943" Jan 09 13:34:06 crc kubenswrapper[4919]: I0109 13:34:06.520102 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dg7pw"] Jan 09 13:34:06 crc kubenswrapper[4919]: I0109 13:34:06.525124 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dg7pw"] Jan 09 13:34:06 crc kubenswrapper[4919]: I0109 13:34:06.547367 4919 scope.go:117] "RemoveContainer" containerID="3037e079b10043cc6ecc46197b74cb1b32c040d14d843da38c176a9380297049" Jan 09 13:34:06 crc kubenswrapper[4919]: I0109 13:34:06.697073 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c4dtn"] Jan 09 13:34:06 crc kubenswrapper[4919]: I0109 13:34:06.697344 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-c4dtn" podUID="870947c0-608c-48f9-a0c7-5f81a08255bf" containerName="registry-server" containerID="cri-o://2c193d34a2b3ff3a626b06d2705d139ede2dc521122924e7a5892646345b19cb" gracePeriod=2 Jan 09 13:34:06 crc kubenswrapper[4919]: I0109 13:34:06.759199 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11f13609-8588-44c4-b426-db71e94e93dd" path="/var/lib/kubelet/pods/11f13609-8588-44c4-b426-db71e94e93dd/volumes" Jan 09 13:34:06 crc kubenswrapper[4919]: I0109 13:34:06.759980 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef0b4efa-7cc4-48d3-be0e-7406620f6a84" path="/var/lib/kubelet/pods/ef0b4efa-7cc4-48d3-be0e-7406620f6a84/volumes" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.076590 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b4b75f447-brtrg" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.083740 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b4fcf649d-jt4m9" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.123583 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-c4dtn" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.132305 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/228725da-7a75-4f10-9857-dc893be79fc8-config\") pod \"228725da-7a75-4f10-9857-dc893be79fc8\" (UID: \"228725da-7a75-4f10-9857-dc893be79fc8\") " Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.132381 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a-config\") pod \"a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a\" (UID: \"a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a\") " Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.132414 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxcf5\" (UniqueName: \"kubernetes.io/projected/a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a-kube-api-access-wxcf5\") pod \"a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a\" (UID: \"a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a\") " Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.132457 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/228725da-7a75-4f10-9857-dc893be79fc8-serving-cert\") pod \"228725da-7a75-4f10-9857-dc893be79fc8\" (UID: \"228725da-7a75-4f10-9857-dc893be79fc8\") " Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.132485 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a-proxy-ca-bundles\") pod \"a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a\" (UID: \"a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a\") " Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.132537 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a-client-ca\") pod \"a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a\" (UID: \"a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a\") " Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.132580 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a-serving-cert\") pod \"a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a\" (UID: \"a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a\") " Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.132607 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/228725da-7a75-4f10-9857-dc893be79fc8-client-ca\") pod \"228725da-7a75-4f10-9857-dc893be79fc8\" (UID: \"228725da-7a75-4f10-9857-dc893be79fc8\") " Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.132672 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvp4b\" (UniqueName: \"kubernetes.io/projected/228725da-7a75-4f10-9857-dc893be79fc8-kube-api-access-bvp4b\") pod \"228725da-7a75-4f10-9857-dc893be79fc8\" (UID: \"228725da-7a75-4f10-9857-dc893be79fc8\") " Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.138584 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a-client-ca" (OuterVolumeSpecName: "client-ca") pod "a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a" (UID: 
"a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.138758 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/228725da-7a75-4f10-9857-dc893be79fc8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "228725da-7a75-4f10-9857-dc893be79fc8" (UID: "228725da-7a75-4f10-9857-dc893be79fc8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.139273 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a" (UID: "a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.139433 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/228725da-7a75-4f10-9857-dc893be79fc8-client-ca" (OuterVolumeSpecName: "client-ca") pod "228725da-7a75-4f10-9857-dc893be79fc8" (UID: "228725da-7a75-4f10-9857-dc893be79fc8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.139492 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/228725da-7a75-4f10-9857-dc893be79fc8-config" (OuterVolumeSpecName: "config") pod "228725da-7a75-4f10-9857-dc893be79fc8" (UID: "228725da-7a75-4f10-9857-dc893be79fc8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.139948 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/228725da-7a75-4f10-9857-dc893be79fc8-kube-api-access-bvp4b" (OuterVolumeSpecName: "kube-api-access-bvp4b") pod "228725da-7a75-4f10-9857-dc893be79fc8" (UID: "228725da-7a75-4f10-9857-dc893be79fc8"). InnerVolumeSpecName "kube-api-access-bvp4b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.141874 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a-kube-api-access-wxcf5" (OuterVolumeSpecName: "kube-api-access-wxcf5") pod "a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a" (UID: "a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a"). InnerVolumeSpecName "kube-api-access-wxcf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.142588 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a" (UID: "a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.143558 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a-config" (OuterVolumeSpecName: "config") pod "a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a" (UID: "a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.234753 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2469q\" (UniqueName: \"kubernetes.io/projected/870947c0-608c-48f9-a0c7-5f81a08255bf-kube-api-access-2469q\") pod \"870947c0-608c-48f9-a0c7-5f81a08255bf\" (UID: \"870947c0-608c-48f9-a0c7-5f81a08255bf\") " Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.234820 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/870947c0-608c-48f9-a0c7-5f81a08255bf-catalog-content\") pod \"870947c0-608c-48f9-a0c7-5f81a08255bf\" (UID: \"870947c0-608c-48f9-a0c7-5f81a08255bf\") " Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.234847 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/870947c0-608c-48f9-a0c7-5f81a08255bf-utilities\") pod \"870947c0-608c-48f9-a0c7-5f81a08255bf\" (UID: \"870947c0-608c-48f9-a0c7-5f81a08255bf\") " Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.235080 4919 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/228725da-7a75-4f10-9857-dc893be79fc8-client-ca\") on node \"crc\" DevicePath \"\"" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.235094 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvp4b\" (UniqueName: \"kubernetes.io/projected/228725da-7a75-4f10-9857-dc893be79fc8-kube-api-access-bvp4b\") on node \"crc\" DevicePath \"\"" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.235137 4919 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/228725da-7a75-4f10-9857-dc893be79fc8-config\") on node \"crc\" DevicePath \"\"" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.235147 4919 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a-config\") on node \"crc\" DevicePath \"\"" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.235157 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxcf5\" (UniqueName: \"kubernetes.io/projected/a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a-kube-api-access-wxcf5\") on node \"crc\" DevicePath \"\"" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.235165 4919 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/228725da-7a75-4f10-9857-dc893be79fc8-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.235173 4919 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.235181 4919 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a-client-ca\") on node \"crc\" DevicePath \"\"" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.235189 4919 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 13:34:07 crc 
kubenswrapper[4919]: I0109 13:34:07.236551 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/870947c0-608c-48f9-a0c7-5f81a08255bf-utilities" (OuterVolumeSpecName: "utilities") pod "870947c0-608c-48f9-a0c7-5f81a08255bf" (UID: "870947c0-608c-48f9-a0c7-5f81a08255bf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.237330 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/870947c0-608c-48f9-a0c7-5f81a08255bf-kube-api-access-2469q" (OuterVolumeSpecName: "kube-api-access-2469q") pod "870947c0-608c-48f9-a0c7-5f81a08255bf" (UID: "870947c0-608c-48f9-a0c7-5f81a08255bf"). InnerVolumeSpecName "kube-api-access-2469q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.336485 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2469q\" (UniqueName: \"kubernetes.io/projected/870947c0-608c-48f9-a0c7-5f81a08255bf-kube-api-access-2469q\") on node \"crc\" DevicePath \"\"" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.336544 4919 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/870947c0-608c-48f9-a0c7-5f81a08255bf-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.391329 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/870947c0-608c-48f9-a0c7-5f81a08255bf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "870947c0-608c-48f9-a0c7-5f81a08255bf" (UID: "870947c0-608c-48f9-a0c7-5f81a08255bf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.437348 4919 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/870947c0-608c-48f9-a0c7-5f81a08255bf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.480421 4919 generic.go:334] "Generic (PLEG): container finished" podID="870947c0-608c-48f9-a0c7-5f81a08255bf" containerID="2c193d34a2b3ff3a626b06d2705d139ede2dc521122924e7a5892646345b19cb" exitCode=0 Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.480479 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c4dtn" event={"ID":"870947c0-608c-48f9-a0c7-5f81a08255bf","Type":"ContainerDied","Data":"2c193d34a2b3ff3a626b06d2705d139ede2dc521122924e7a5892646345b19cb"} Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.480503 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c4dtn" event={"ID":"870947c0-608c-48f9-a0c7-5f81a08255bf","Type":"ContainerDied","Data":"8ffe630e2813fdae6115154bd3c9cdb051cfdfb546c89d169fe27ea77d3bd579"} Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.480521 4919 scope.go:117] "RemoveContainer" containerID="2c193d34a2b3ff3a626b06d2705d139ede2dc521122924e7a5892646345b19cb" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.480548 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-c4dtn" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.483634 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b4fcf649d-jt4m9" event={"ID":"a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a","Type":"ContainerDied","Data":"4fb1ac1055ede55a224b3155ead09b0a9f53fe2958cc51a78735196859b735f6"} Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.483712 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b4fcf649d-jt4m9" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.488133 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b4b75f447-brtrg" event={"ID":"228725da-7a75-4f10-9857-dc893be79fc8","Type":"ContainerDied","Data":"cf7372cea6a4801bbef0f1f0a4fd74959f86c721b0a1d2a8144a78be52fb4a11"} Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.488263 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b4b75f447-brtrg" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.500744 4919 scope.go:117] "RemoveContainer" containerID="aa8872eec28800c7f47d431d85efe3a26c5c09c3259b927e854c5a0c9d7bcb07" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.519753 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6b4fcf649d-jt4m9"] Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.523603 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6b4fcf649d-jt4m9"] Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.527499 4919 scope.go:117] "RemoveContainer" containerID="202626ff2155701c3aba3c39e84396f7a4f2ccc9c54315dfef778ba0d2c3406b" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.542504 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b4b75f447-brtrg"] Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.549301 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b4b75f447-brtrg"] Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.553341 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c4dtn"] Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.557358 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-c4dtn"] Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.563096 4919 scope.go:117] "RemoveContainer" containerID="2c193d34a2b3ff3a626b06d2705d139ede2dc521122924e7a5892646345b19cb" Jan 09 13:34:07 crc kubenswrapper[4919]: E0109 13:34:07.563597 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c193d34a2b3ff3a626b06d2705d139ede2dc521122924e7a5892646345b19cb\": container with ID starting with 2c193d34a2b3ff3a626b06d2705d139ede2dc521122924e7a5892646345b19cb not found: ID does not exist" containerID="2c193d34a2b3ff3a626b06d2705d139ede2dc521122924e7a5892646345b19cb" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.563647 4919 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2c193d34a2b3ff3a626b06d2705d139ede2dc521122924e7a5892646345b19cb"} err="failed to get container status \"2c193d34a2b3ff3a626b06d2705d139ede2dc521122924e7a5892646345b19cb\": rpc error: code = NotFound desc = could not find container \"2c193d34a2b3ff3a626b06d2705d139ede2dc521122924e7a5892646345b19cb\": container with ID starting with 2c193d34a2b3ff3a626b06d2705d139ede2dc521122924e7a5892646345b19cb not found: ID does not exist" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.563680 4919 scope.go:117] "RemoveContainer" containerID="aa8872eec28800c7f47d431d85efe3a26c5c09c3259b927e854c5a0c9d7bcb07" Jan 09 13:34:07 crc kubenswrapper[4919]: E0109 13:34:07.564364 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa8872eec28800c7f47d431d85efe3a26c5c09c3259b927e854c5a0c9d7bcb07\": container with ID starting with aa8872eec28800c7f47d431d85efe3a26c5c09c3259b927e854c5a0c9d7bcb07 not found: ID does not exist" containerID="aa8872eec28800c7f47d431d85efe3a26c5c09c3259b927e854c5a0c9d7bcb07" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.564405 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa8872eec28800c7f47d431d85efe3a26c5c09c3259b927e854c5a0c9d7bcb07"} err="failed to get container status \"aa8872eec28800c7f47d431d85efe3a26c5c09c3259b927e854c5a0c9d7bcb07\": rpc error: code = NotFound desc = could not find container \"aa8872eec28800c7f47d431d85efe3a26c5c09c3259b927e854c5a0c9d7bcb07\": container with ID starting with aa8872eec28800c7f47d431d85efe3a26c5c09c3259b927e854c5a0c9d7bcb07 not found: ID does not exist" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.564429 4919 scope.go:117] "RemoveContainer" containerID="202626ff2155701c3aba3c39e84396f7a4f2ccc9c54315dfef778ba0d2c3406b" Jan 09 13:34:07 crc kubenswrapper[4919]: E0109 13:34:07.564761 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"202626ff2155701c3aba3c39e84396f7a4f2ccc9c54315dfef778ba0d2c3406b\": container with ID starting with 202626ff2155701c3aba3c39e84396f7a4f2ccc9c54315dfef778ba0d2c3406b not found: ID does not exist" containerID="202626ff2155701c3aba3c39e84396f7a4f2ccc9c54315dfef778ba0d2c3406b" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.564788 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"202626ff2155701c3aba3c39e84396f7a4f2ccc9c54315dfef778ba0d2c3406b"} err="failed to get container status \"202626ff2155701c3aba3c39e84396f7a4f2ccc9c54315dfef778ba0d2c3406b\": rpc error: code = NotFound desc = could not find container \"202626ff2155701c3aba3c39e84396f7a4f2ccc9c54315dfef778ba0d2c3406b\": container with ID starting with 202626ff2155701c3aba3c39e84396f7a4f2ccc9c54315dfef778ba0d2c3406b not found: ID does not exist" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.564803 4919 scope.go:117] "RemoveContainer" containerID="341697f9d87f4cb00589f29fe6e619bc41ad943b812d59475110c522fe727dee" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.589203 4919 scope.go:117] "RemoveContainer" containerID="50d5fda6df8ed1f714b828e9705d6045c15c2db1c0888429ec8634c214cac21c" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.832821 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bcc8b7969-qxdrx"] Jan 09 13:34:07 crc kubenswrapper[4919]: E0109 
13:34:07.833197 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="870947c0-608c-48f9-a0c7-5f81a08255bf" containerName="extract-content" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.833270 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="870947c0-608c-48f9-a0c7-5f81a08255bf" containerName="extract-content" Jan 09 13:34:07 crc kubenswrapper[4919]: E0109 13:34:07.833297 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11f13609-8588-44c4-b426-db71e94e93dd" containerName="extract-utilities" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.833311 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="11f13609-8588-44c4-b426-db71e94e93dd" containerName="extract-utilities" Jan 09 13:34:07 crc kubenswrapper[4919]: E0109 13:34:07.833332 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef0b4efa-7cc4-48d3-be0e-7406620f6a84" containerName="extract-content" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.833344 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef0b4efa-7cc4-48d3-be0e-7406620f6a84" containerName="extract-content" Jan 09 13:34:07 crc kubenswrapper[4919]: E0109 13:34:07.833360 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef0b4efa-7cc4-48d3-be0e-7406620f6a84" containerName="extract-utilities" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.833372 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef0b4efa-7cc4-48d3-be0e-7406620f6a84" containerName="extract-utilities" Jan 09 13:34:07 crc kubenswrapper[4919]: E0109 13:34:07.833390 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11f13609-8588-44c4-b426-db71e94e93dd" containerName="registry-server" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.833402 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="11f13609-8588-44c4-b426-db71e94e93dd" containerName="registry-server" Jan 09 13:34:07 crc kubenswrapper[4919]: E0109 13:34:07.833420 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11f13609-8588-44c4-b426-db71e94e93dd" containerName="extract-content" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.833431 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="11f13609-8588-44c4-b426-db71e94e93dd" containerName="extract-content" Jan 09 13:34:07 crc kubenswrapper[4919]: E0109 13:34:07.833451 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef0b4efa-7cc4-48d3-be0e-7406620f6a84" containerName="registry-server" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.833463 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef0b4efa-7cc4-48d3-be0e-7406620f6a84" containerName="registry-server" Jan 09 13:34:07 crc kubenswrapper[4919]: E0109 13:34:07.833479 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="870947c0-608c-48f9-a0c7-5f81a08255bf" containerName="registry-server" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.833490 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="870947c0-608c-48f9-a0c7-5f81a08255bf" containerName="registry-server" Jan 09 13:34:07 crc kubenswrapper[4919]: E0109 13:34:07.833509 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="228725da-7a75-4f10-9857-dc893be79fc8" containerName="route-controller-manager" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.833521 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="228725da-7a75-4f10-9857-dc893be79fc8" containerName="route-controller-manager" 
Jan 09 13:34:07 crc kubenswrapper[4919]: E0109 13:34:07.833542 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="870947c0-608c-48f9-a0c7-5f81a08255bf" containerName="extract-utilities" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.833554 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="870947c0-608c-48f9-a0c7-5f81a08255bf" containerName="extract-utilities" Jan 09 13:34:07 crc kubenswrapper[4919]: E0109 13:34:07.833572 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a" containerName="controller-manager" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.833585 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a" containerName="controller-manager" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.834335 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="870947c0-608c-48f9-a0c7-5f81a08255bf" containerName="registry-server" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.834358 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a" containerName="controller-manager" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.834374 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="228725da-7a75-4f10-9857-dc893be79fc8" containerName="route-controller-manager" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.834395 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef0b4efa-7cc4-48d3-be0e-7406620f6a84" containerName="registry-server" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.834414 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="11f13609-8588-44c4-b426-db71e94e93dd" containerName="registry-server" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.835081 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bcc8b7969-qxdrx" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.838311 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.838440 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.838771 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.840119 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.840303 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.841079 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.851483 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bcc8b7969-qxdrx"] Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.943003 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7-serving-cert\") pod \"route-controller-manager-7bcc8b7969-qxdrx\" (UID: \"d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7\") " pod="openshift-route-controller-manager/route-controller-manager-7bcc8b7969-qxdrx" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.943121 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7-config\") pod \"route-controller-manager-7bcc8b7969-qxdrx\" (UID: \"d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7\") " pod="openshift-route-controller-manager/route-controller-manager-7bcc8b7969-qxdrx" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.943283 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9sml\" (UniqueName: \"kubernetes.io/projected/d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7-kube-api-access-d9sml\") pod \"route-controller-manager-7bcc8b7969-qxdrx\" (UID: \"d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7\") " pod="openshift-route-controller-manager/route-controller-manager-7bcc8b7969-qxdrx" Jan 09 13:34:07 crc kubenswrapper[4919]: I0109 13:34:07.943482 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7-client-ca\") pod \"route-controller-manager-7bcc8b7969-qxdrx\" (UID: \"d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7\") " pod="openshift-route-controller-manager/route-controller-manager-7bcc8b7969-qxdrx" Jan 09 13:34:08 crc kubenswrapper[4919]: I0109 13:34:08.045072 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7-serving-cert\") pod 
\"route-controller-manager-7bcc8b7969-qxdrx\" (UID: \"d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7\") " pod="openshift-route-controller-manager/route-controller-manager-7bcc8b7969-qxdrx" Jan 09 13:34:08 crc kubenswrapper[4919]: I0109 13:34:08.045170 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7-config\") pod \"route-controller-manager-7bcc8b7969-qxdrx\" (UID: \"d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7\") " pod="openshift-route-controller-manager/route-controller-manager-7bcc8b7969-qxdrx" Jan 09 13:34:08 crc kubenswrapper[4919]: I0109 13:34:08.045269 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9sml\" (UniqueName: \"kubernetes.io/projected/d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7-kube-api-access-d9sml\") pod \"route-controller-manager-7bcc8b7969-qxdrx\" (UID: \"d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7\") " pod="openshift-route-controller-manager/route-controller-manager-7bcc8b7969-qxdrx" Jan 09 13:34:08 crc kubenswrapper[4919]: I0109 13:34:08.045313 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7-client-ca\") pod \"route-controller-manager-7bcc8b7969-qxdrx\" (UID: \"d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7\") " pod="openshift-route-controller-manager/route-controller-manager-7bcc8b7969-qxdrx" Jan 09 13:34:08 crc kubenswrapper[4919]: I0109 13:34:08.047631 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7-client-ca\") pod \"route-controller-manager-7bcc8b7969-qxdrx\" (UID: \"d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7\") " pod="openshift-route-controller-manager/route-controller-manager-7bcc8b7969-qxdrx" Jan 09 13:34:08 crc kubenswrapper[4919]: I0109 13:34:08.048541 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7-config\") pod \"route-controller-manager-7bcc8b7969-qxdrx\" (UID: \"d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7\") " pod="openshift-route-controller-manager/route-controller-manager-7bcc8b7969-qxdrx" Jan 09 13:34:08 crc kubenswrapper[4919]: I0109 13:34:08.053640 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7-serving-cert\") pod \"route-controller-manager-7bcc8b7969-qxdrx\" (UID: \"d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7\") " pod="openshift-route-controller-manager/route-controller-manager-7bcc8b7969-qxdrx" Jan 09 13:34:08 crc kubenswrapper[4919]: I0109 13:34:08.065193 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9sml\" (UniqueName: \"kubernetes.io/projected/d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7-kube-api-access-d9sml\") pod \"route-controller-manager-7bcc8b7969-qxdrx\" (UID: \"d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7\") " pod="openshift-route-controller-manager/route-controller-manager-7bcc8b7969-qxdrx" Jan 09 13:34:08 crc kubenswrapper[4919]: I0109 13:34:08.167868 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bcc8b7969-qxdrx" Jan 09 13:34:08 crc kubenswrapper[4919]: I0109 13:34:08.434005 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bcc8b7969-qxdrx"] Jan 09 13:34:08 crc kubenswrapper[4919]: W0109 13:34:08.445263 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9828d19_9e98_4c59_bfa9_0b0ffbb1c1c7.slice/crio-2bf57b016ba4ad6173d7702030100e7cda337cc55b9b8d66baddd98c5cfb0b61 WatchSource:0}: Error finding container 2bf57b016ba4ad6173d7702030100e7cda337cc55b9b8d66baddd98c5cfb0b61: Status 404 returned error can't find the container with id 2bf57b016ba4ad6173d7702030100e7cda337cc55b9b8d66baddd98c5cfb0b61 Jan 09 13:34:08 crc kubenswrapper[4919]: I0109 13:34:08.524249 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bcc8b7969-qxdrx" event={"ID":"d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7","Type":"ContainerStarted","Data":"2bf57b016ba4ad6173d7702030100e7cda337cc55b9b8d66baddd98c5cfb0b61"} Jan 09 13:34:08 crc kubenswrapper[4919]: I0109 13:34:08.764039 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="228725da-7a75-4f10-9857-dc893be79fc8" path="/var/lib/kubelet/pods/228725da-7a75-4f10-9857-dc893be79fc8/volumes" Jan 09 13:34:08 crc kubenswrapper[4919]: I0109 13:34:08.765162 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="870947c0-608c-48f9-a0c7-5f81a08255bf" path="/var/lib/kubelet/pods/870947c0-608c-48f9-a0c7-5f81a08255bf/volumes" Jan 09 13:34:08 crc kubenswrapper[4919]: I0109 13:34:08.768377 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a" path="/var/lib/kubelet/pods/a170f18a-ad98-41ad-87b9-b4d4ce0a7e0a/volumes" Jan 09 13:34:08 crc kubenswrapper[4919]: I0109 13:34:08.975250 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tf6wk" Jan 09 13:34:09 crc kubenswrapper[4919]: I0109 13:34:09.356370 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xppnp" Jan 09 13:34:09 crc kubenswrapper[4919]: I0109 13:34:09.532203 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bcc8b7969-qxdrx" event={"ID":"d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7","Type":"ContainerStarted","Data":"86b6d4285e049794e69837e50f9b07aaa9dd1002aa43883443c11fb8a359f4d5"} Jan 09 13:34:09 crc kubenswrapper[4919]: I0109 13:34:09.532783 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7bcc8b7969-qxdrx" Jan 09 13:34:09 crc kubenswrapper[4919]: I0109 13:34:09.538538 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7bcc8b7969-qxdrx" Jan 09 13:34:09 crc kubenswrapper[4919]: I0109 13:34:09.589543 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7bcc8b7969-qxdrx" podStartSLOduration=3.589511548 podStartE2EDuration="3.589511548s" podCreationTimestamp="2026-01-09 13:34:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-01-09 13:34:09.561620721 +0000 UTC m=+229.109460171" watchObservedRunningTime="2026-01-09 13:34:09.589511548 +0000 UTC m=+229.137350998" Jan 09 13:34:09 crc kubenswrapper[4919]: I0109 13:34:09.827119 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-59d48cf488-dxjfh"] Jan 09 13:34:09 crc kubenswrapper[4919]: I0109 13:34:09.828062 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-59d48cf488-dxjfh" Jan 09 13:34:09 crc kubenswrapper[4919]: I0109 13:34:09.829790 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 09 13:34:09 crc kubenswrapper[4919]: I0109 13:34:09.830101 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 09 13:34:09 crc kubenswrapper[4919]: I0109 13:34:09.830225 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 09 13:34:09 crc kubenswrapper[4919]: I0109 13:34:09.830531 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 09 13:34:09 crc kubenswrapper[4919]: I0109 13:34:09.830668 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 09 13:34:09 crc kubenswrapper[4919]: I0109 13:34:09.830792 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 09 13:34:09 crc kubenswrapper[4919]: I0109 13:34:09.840350 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 09 13:34:09 crc kubenswrapper[4919]: I0109 13:34:09.840939 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-59d48cf488-dxjfh"] Jan 09 13:34:09 crc kubenswrapper[4919]: I0109 13:34:09.877999 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0f3545d-bbcd-42cc-82be-92ea04dc6bbc-config\") pod \"controller-manager-59d48cf488-dxjfh\" (UID: \"c0f3545d-bbcd-42cc-82be-92ea04dc6bbc\") " pod="openshift-controller-manager/controller-manager-59d48cf488-dxjfh" Jan 09 13:34:09 crc kubenswrapper[4919]: I0109 13:34:09.878086 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c0f3545d-bbcd-42cc-82be-92ea04dc6bbc-client-ca\") pod \"controller-manager-59d48cf488-dxjfh\" (UID: \"c0f3545d-bbcd-42cc-82be-92ea04dc6bbc\") " pod="openshift-controller-manager/controller-manager-59d48cf488-dxjfh" Jan 09 13:34:09 crc kubenswrapper[4919]: I0109 13:34:09.878251 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c0f3545d-bbcd-42cc-82be-92ea04dc6bbc-proxy-ca-bundles\") pod \"controller-manager-59d48cf488-dxjfh\" (UID: \"c0f3545d-bbcd-42cc-82be-92ea04dc6bbc\") " pod="openshift-controller-manager/controller-manager-59d48cf488-dxjfh" Jan 09 13:34:09 crc kubenswrapper[4919]: I0109 13:34:09.878292 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/c0f3545d-bbcd-42cc-82be-92ea04dc6bbc-serving-cert\") pod \"controller-manager-59d48cf488-dxjfh\" (UID: \"c0f3545d-bbcd-42cc-82be-92ea04dc6bbc\") " pod="openshift-controller-manager/controller-manager-59d48cf488-dxjfh" Jan 09 13:34:09 crc kubenswrapper[4919]: I0109 13:34:09.878309 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svfj2\" (UniqueName: \"kubernetes.io/projected/c0f3545d-bbcd-42cc-82be-92ea04dc6bbc-kube-api-access-svfj2\") pod \"controller-manager-59d48cf488-dxjfh\" (UID: \"c0f3545d-bbcd-42cc-82be-92ea04dc6bbc\") " pod="openshift-controller-manager/controller-manager-59d48cf488-dxjfh" Jan 09 13:34:09 crc kubenswrapper[4919]: I0109 13:34:09.980005 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c0f3545d-bbcd-42cc-82be-92ea04dc6bbc-proxy-ca-bundles\") pod \"controller-manager-59d48cf488-dxjfh\" (UID: \"c0f3545d-bbcd-42cc-82be-92ea04dc6bbc\") " pod="openshift-controller-manager/controller-manager-59d48cf488-dxjfh" Jan 09 13:34:09 crc kubenswrapper[4919]: I0109 13:34:09.980063 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0f3545d-bbcd-42cc-82be-92ea04dc6bbc-serving-cert\") pod \"controller-manager-59d48cf488-dxjfh\" (UID: \"c0f3545d-bbcd-42cc-82be-92ea04dc6bbc\") " pod="openshift-controller-manager/controller-manager-59d48cf488-dxjfh" Jan 09 13:34:09 crc kubenswrapper[4919]: I0109 13:34:09.980089 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svfj2\" (UniqueName: \"kubernetes.io/projected/c0f3545d-bbcd-42cc-82be-92ea04dc6bbc-kube-api-access-svfj2\") pod \"controller-manager-59d48cf488-dxjfh\" (UID: \"c0f3545d-bbcd-42cc-82be-92ea04dc6bbc\") " pod="openshift-controller-manager/controller-manager-59d48cf488-dxjfh" Jan 09 13:34:09 crc kubenswrapper[4919]: I0109 13:34:09.980120 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0f3545d-bbcd-42cc-82be-92ea04dc6bbc-config\") pod \"controller-manager-59d48cf488-dxjfh\" (UID: \"c0f3545d-bbcd-42cc-82be-92ea04dc6bbc\") " pod="openshift-controller-manager/controller-manager-59d48cf488-dxjfh" Jan 09 13:34:09 crc kubenswrapper[4919]: I0109 13:34:09.980156 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c0f3545d-bbcd-42cc-82be-92ea04dc6bbc-client-ca\") pod \"controller-manager-59d48cf488-dxjfh\" (UID: \"c0f3545d-bbcd-42cc-82be-92ea04dc6bbc\") " pod="openshift-controller-manager/controller-manager-59d48cf488-dxjfh" Jan 09 13:34:09 crc kubenswrapper[4919]: I0109 13:34:09.981097 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c0f3545d-bbcd-42cc-82be-92ea04dc6bbc-client-ca\") pod \"controller-manager-59d48cf488-dxjfh\" (UID: \"c0f3545d-bbcd-42cc-82be-92ea04dc6bbc\") " pod="openshift-controller-manager/controller-manager-59d48cf488-dxjfh" Jan 09 13:34:09 crc kubenswrapper[4919]: I0109 13:34:09.981429 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c0f3545d-bbcd-42cc-82be-92ea04dc6bbc-proxy-ca-bundles\") pod \"controller-manager-59d48cf488-dxjfh\" (UID: \"c0f3545d-bbcd-42cc-82be-92ea04dc6bbc\") " 
pod="openshift-controller-manager/controller-manager-59d48cf488-dxjfh" Jan 09 13:34:09 crc kubenswrapper[4919]: I0109 13:34:09.981919 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0f3545d-bbcd-42cc-82be-92ea04dc6bbc-config\") pod \"controller-manager-59d48cf488-dxjfh\" (UID: \"c0f3545d-bbcd-42cc-82be-92ea04dc6bbc\") " pod="openshift-controller-manager/controller-manager-59d48cf488-dxjfh" Jan 09 13:34:09 crc kubenswrapper[4919]: I0109 13:34:09.992924 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0f3545d-bbcd-42cc-82be-92ea04dc6bbc-serving-cert\") pod \"controller-manager-59d48cf488-dxjfh\" (UID: \"c0f3545d-bbcd-42cc-82be-92ea04dc6bbc\") " pod="openshift-controller-manager/controller-manager-59d48cf488-dxjfh" Jan 09 13:34:09 crc kubenswrapper[4919]: I0109 13:34:09.995759 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svfj2\" (UniqueName: \"kubernetes.io/projected/c0f3545d-bbcd-42cc-82be-92ea04dc6bbc-kube-api-access-svfj2\") pod \"controller-manager-59d48cf488-dxjfh\" (UID: \"c0f3545d-bbcd-42cc-82be-92ea04dc6bbc\") " pod="openshift-controller-manager/controller-manager-59d48cf488-dxjfh" Jan 09 13:34:10 crc kubenswrapper[4919]: I0109 13:34:10.193322 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-59d48cf488-dxjfh" Jan 09 13:34:10 crc kubenswrapper[4919]: I0109 13:34:10.406863 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-59d48cf488-dxjfh"] Jan 09 13:34:10 crc kubenswrapper[4919]: I0109 13:34:10.540583 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59d48cf488-dxjfh" event={"ID":"c0f3545d-bbcd-42cc-82be-92ea04dc6bbc","Type":"ContainerStarted","Data":"110943cf4a9a5ba7c62178f34ee5fcf32c6e9eefba5cef10b0e6845413ca467e"} Jan 09 13:34:12 crc kubenswrapper[4919]: I0109 13:34:12.340648 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-s2tz5"] Jan 09 13:34:12 crc kubenswrapper[4919]: I0109 13:34:12.553607 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59d48cf488-dxjfh" event={"ID":"c0f3545d-bbcd-42cc-82be-92ea04dc6bbc","Type":"ContainerStarted","Data":"2177a6a18e0948f43971e603256f05d06207f5e442de01b47ec28c8bbe579030"} Jan 09 13:34:12 crc kubenswrapper[4919]: I0109 13:34:12.553936 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-59d48cf488-dxjfh" Jan 09 13:34:12 crc kubenswrapper[4919]: I0109 13:34:12.558479 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-59d48cf488-dxjfh" Jan 09 13:34:12 crc kubenswrapper[4919]: I0109 13:34:12.572323 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-59d48cf488-dxjfh" podStartSLOduration=6.572303144 podStartE2EDuration="6.572303144s" podCreationTimestamp="2026-01-09 13:34:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:34:12.571657417 +0000 UTC m=+232.119496877" watchObservedRunningTime="2026-01-09 13:34:12.572303144 +0000 UTC 
m=+232.120142594" Jan 09 13:34:13 crc kubenswrapper[4919]: I0109 13:34:13.298641 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xppnp"] Jan 09 13:34:13 crc kubenswrapper[4919]: I0109 13:34:13.299331 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xppnp" podUID="a7ddc148-0c1a-496f-b58b-c88f30af7344" containerName="registry-server" containerID="cri-o://92aaeeeb943f031a08e55438ddef3839d2fd685d7eb52d91352133c17f84d9bd" gracePeriod=2 Jan 09 13:34:13 crc kubenswrapper[4919]: I0109 13:34:13.570694 4919 generic.go:334] "Generic (PLEG): container finished" podID="a7ddc148-0c1a-496f-b58b-c88f30af7344" containerID="92aaeeeb943f031a08e55438ddef3839d2fd685d7eb52d91352133c17f84d9bd" exitCode=0 Jan 09 13:34:13 crc kubenswrapper[4919]: I0109 13:34:13.570994 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xppnp" event={"ID":"a7ddc148-0c1a-496f-b58b-c88f30af7344","Type":"ContainerDied","Data":"92aaeeeb943f031a08e55438ddef3839d2fd685d7eb52d91352133c17f84d9bd"} Jan 09 13:34:13 crc kubenswrapper[4919]: I0109 13:34:13.729469 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xppnp" Jan 09 13:34:13 crc kubenswrapper[4919]: I0109 13:34:13.839257 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zd7pl\" (UniqueName: \"kubernetes.io/projected/a7ddc148-0c1a-496f-b58b-c88f30af7344-kube-api-access-zd7pl\") pod \"a7ddc148-0c1a-496f-b58b-c88f30af7344\" (UID: \"a7ddc148-0c1a-496f-b58b-c88f30af7344\") " Jan 09 13:34:13 crc kubenswrapper[4919]: I0109 13:34:13.839357 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7ddc148-0c1a-496f-b58b-c88f30af7344-catalog-content\") pod \"a7ddc148-0c1a-496f-b58b-c88f30af7344\" (UID: \"a7ddc148-0c1a-496f-b58b-c88f30af7344\") " Jan 09 13:34:13 crc kubenswrapper[4919]: I0109 13:34:13.839392 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7ddc148-0c1a-496f-b58b-c88f30af7344-utilities\") pod \"a7ddc148-0c1a-496f-b58b-c88f30af7344\" (UID: \"a7ddc148-0c1a-496f-b58b-c88f30af7344\") " Jan 09 13:34:13 crc kubenswrapper[4919]: I0109 13:34:13.841492 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7ddc148-0c1a-496f-b58b-c88f30af7344-utilities" (OuterVolumeSpecName: "utilities") pod "a7ddc148-0c1a-496f-b58b-c88f30af7344" (UID: "a7ddc148-0c1a-496f-b58b-c88f30af7344"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:34:13 crc kubenswrapper[4919]: I0109 13:34:13.844706 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7ddc148-0c1a-496f-b58b-c88f30af7344-kube-api-access-zd7pl" (OuterVolumeSpecName: "kube-api-access-zd7pl") pod "a7ddc148-0c1a-496f-b58b-c88f30af7344" (UID: "a7ddc148-0c1a-496f-b58b-c88f30af7344"). InnerVolumeSpecName "kube-api-access-zd7pl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:34:13 crc kubenswrapper[4919]: I0109 13:34:13.894942 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7ddc148-0c1a-496f-b58b-c88f30af7344-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a7ddc148-0c1a-496f-b58b-c88f30af7344" (UID: "a7ddc148-0c1a-496f-b58b-c88f30af7344"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:34:13 crc kubenswrapper[4919]: I0109 13:34:13.941597 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zd7pl\" (UniqueName: \"kubernetes.io/projected/a7ddc148-0c1a-496f-b58b-c88f30af7344-kube-api-access-zd7pl\") on node \"crc\" DevicePath \"\"" Jan 09 13:34:13 crc kubenswrapper[4919]: I0109 13:34:13.941641 4919 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7ddc148-0c1a-496f-b58b-c88f30af7344-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 13:34:13 crc kubenswrapper[4919]: I0109 13:34:13.941654 4919 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7ddc148-0c1a-496f-b58b-c88f30af7344-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 13:34:14 crc kubenswrapper[4919]: I0109 13:34:14.584040 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xppnp" Jan 09 13:34:14 crc kubenswrapper[4919]: I0109 13:34:14.584020 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xppnp" event={"ID":"a7ddc148-0c1a-496f-b58b-c88f30af7344","Type":"ContainerDied","Data":"c6abf321087e9923404c9b0e1c0b27621a378ef1f35c204a159ed07579c5bc6c"} Jan 09 13:34:14 crc kubenswrapper[4919]: I0109 13:34:14.584176 4919 scope.go:117] "RemoveContainer" containerID="92aaeeeb943f031a08e55438ddef3839d2fd685d7eb52d91352133c17f84d9bd" Jan 09 13:34:14 crc kubenswrapper[4919]: I0109 13:34:14.611771 4919 scope.go:117] "RemoveContainer" containerID="f6da9fc2fcaaa62b53f662d1ee96c34b89419a98540daf970139e8373082b94c" Jan 09 13:34:14 crc kubenswrapper[4919]: I0109 13:34:14.637620 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xppnp"] Jan 09 13:34:14 crc kubenswrapper[4919]: I0109 13:34:14.647571 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xppnp"] Jan 09 13:34:14 crc kubenswrapper[4919]: I0109 13:34:14.663566 4919 scope.go:117] "RemoveContainer" containerID="3c8b486232c355c0cfbdaea48ccbacb9498cfb7baf0f733a13f25f85ecd1e6f3" Jan 09 13:34:14 crc kubenswrapper[4919]: I0109 13:34:14.763993 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7ddc148-0c1a-496f-b58b-c88f30af7344" path="/var/lib/kubelet/pods/a7ddc148-0c1a-496f-b58b-c88f30af7344/volumes" Jan 09 13:34:15 crc kubenswrapper[4919]: I0109 13:34:15.619559 4919 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 09 13:34:15 crc kubenswrapper[4919]: E0109 13:34:15.620446 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7ddc148-0c1a-496f-b58b-c88f30af7344" containerName="extract-utilities" Jan 09 13:34:15 crc kubenswrapper[4919]: I0109 13:34:15.620468 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7ddc148-0c1a-496f-b58b-c88f30af7344" containerName="extract-utilities" Jan 09 
Jan 09 13:34:15 crc kubenswrapper[4919]: I0109 13:34:15.619559 4919 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 09 13:34:15 crc kubenswrapper[4919]: E0109 13:34:15.620446 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7ddc148-0c1a-496f-b58b-c88f30af7344" containerName="extract-utilities"
Jan 09 13:34:15 crc kubenswrapper[4919]: I0109 13:34:15.620468 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7ddc148-0c1a-496f-b58b-c88f30af7344" containerName="extract-utilities"
Jan 09 13:34:15 crc kubenswrapper[4919]: E0109 13:34:15.620483 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7ddc148-0c1a-496f-b58b-c88f30af7344" containerName="extract-content"
Jan 09 13:34:15 crc kubenswrapper[4919]: I0109 13:34:15.620492 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7ddc148-0c1a-496f-b58b-c88f30af7344" containerName="extract-content"
Jan 09 13:34:15 crc kubenswrapper[4919]: E0109 13:34:15.620506 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7ddc148-0c1a-496f-b58b-c88f30af7344" containerName="registry-server"
Jan 09 13:34:15 crc kubenswrapper[4919]: I0109 13:34:15.620517 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7ddc148-0c1a-496f-b58b-c88f30af7344" containerName="registry-server"
Jan 09 13:34:15 crc kubenswrapper[4919]: I0109 13:34:15.620635 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7ddc148-0c1a-496f-b58b-c88f30af7344" containerName="registry-server"
Jan 09 13:34:15 crc kubenswrapper[4919]: I0109 13:34:15.621157 4919 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 09 13:34:15 crc kubenswrapper[4919]: I0109 13:34:15.621398 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 09 13:34:15 crc kubenswrapper[4919]: I0109 13:34:15.621568 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://903fbe8270e684937c51c54aeaaaaf7cfed5e77f6d76d5874210ffcf608a9afa" gracePeriod=15
Jan 09 13:34:15 crc kubenswrapper[4919]: I0109 13:34:15.621603 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://86b3f68aa173942f9d95a3832234e0bbd5ccacf63cd73b8d110708ebd0978a4c" gracePeriod=15
Jan 09 13:34:15 crc kubenswrapper[4919]: I0109 13:34:15.621751 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://8db1dada305e10f10cb4fe92cfaa3fc88e2e2ec3338fa31efc240af0888f60c6" gracePeriod=15
Jan 09 13:34:15 crc kubenswrapper[4919]: I0109 13:34:15.621813 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://1b2d0c02e83be85aa5925b8f607809b6bbf3db3188f4e165ca42eb04fe694da6" gracePeriod=15
Jan 09 13:34:15 crc kubenswrapper[4919]: I0109 13:34:15.621786 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://d3759051eb43a6806b9525498f376abc060e88ba0d7bc9274e84eaabfdaefd87" gracePeriod=15
Jan 09 13:34:15 crc kubenswrapper[4919]: I0109 13:34:15.623658 4919 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 09 13:34:15 crc kubenswrapper[4919]: E0109 13:34:15.648778 4919 cpu_manager.go:410] "RemoveStaleState: removing container"
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 09 13:34:15 crc kubenswrapper[4919]: I0109 13:34:15.648835 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 09 13:34:15 crc kubenswrapper[4919]: E0109 13:34:15.648867 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 09 13:34:15 crc kubenswrapper[4919]: I0109 13:34:15.648874 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 09 13:34:15 crc kubenswrapper[4919]: E0109 13:34:15.648897 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 09 13:34:15 crc kubenswrapper[4919]: I0109 13:34:15.648909 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 09 13:34:15 crc kubenswrapper[4919]: E0109 13:34:15.648925 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 09 13:34:15 crc kubenswrapper[4919]: I0109 13:34:15.648932 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 09 13:34:15 crc kubenswrapper[4919]: E0109 13:34:15.648944 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 09 13:34:15 crc kubenswrapper[4919]: I0109 13:34:15.648951 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 09 13:34:15 crc kubenswrapper[4919]: E0109 13:34:15.648970 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 09 13:34:15 crc kubenswrapper[4919]: I0109 13:34:15.648976 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 09 13:34:15 crc kubenswrapper[4919]: E0109 13:34:15.649004 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 09 13:34:15 crc kubenswrapper[4919]: I0109 13:34:15.649013 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 09 13:34:15 crc kubenswrapper[4919]: I0109 13:34:15.649561 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 09 13:34:15 crc kubenswrapper[4919]: I0109 13:34:15.649581 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 09 13:34:15 crc kubenswrapper[4919]: I0109 13:34:15.649595 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 09 13:34:15 crc kubenswrapper[4919]: I0109 13:34:15.649611 4919 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 09 13:34:15 crc kubenswrapper[4919]: I0109 13:34:15.651432 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 09 13:34:15 crc kubenswrapper[4919]: I0109 13:34:15.651452 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 09 13:34:15 crc kubenswrapper[4919]: I0109 13:34:15.674354 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 09 13:34:15 crc kubenswrapper[4919]: I0109 13:34:15.674499 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 09 13:34:15 crc kubenswrapper[4919]: I0109 13:34:15.674528 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 09 13:34:15 crc kubenswrapper[4919]: I0109 13:34:15.674583 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 09 13:34:15 crc kubenswrapper[4919]: I0109 13:34:15.674601 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 09 13:34:15 crc kubenswrapper[4919]: I0109 13:34:15.689196 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 09 13:34:15 crc kubenswrapper[4919]: I0109 13:34:15.775811 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 09 13:34:15 crc kubenswrapper[4919]: I0109 13:34:15.775712 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 09 13:34:15 crc kubenswrapper[4919]: I0109 13:34:15.775917 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 09 13:34:15 crc kubenswrapper[4919]: I0109 13:34:15.775995 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 09 13:34:15 crc kubenswrapper[4919]: I0109 13:34:15.776023 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 13:34:15 crc kubenswrapper[4919]: I0109 13:34:15.776052 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 09 13:34:15 crc kubenswrapper[4919]: I0109 13:34:15.776114 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 09 13:34:15 crc kubenswrapper[4919]: I0109 13:34:15.776243 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 09 13:34:15 crc kubenswrapper[4919]: I0109 13:34:15.776272 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 13:34:15 crc kubenswrapper[4919]: I0109 13:34:15.776328 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 09 13:34:15 crc kubenswrapper[4919]: I0109 13:34:15.776405 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 13:34:15 crc kubenswrapper[4919]: I0109 13:34:15.776510 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 09 13:34:15 crc kubenswrapper[4919]: I0109 13:34:15.776610 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 09 13:34:15 crc kubenswrapper[4919]: I0109 13:34:15.878462 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 13:34:15 crc kubenswrapper[4919]: I0109 13:34:15.878744 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 13:34:15 crc kubenswrapper[4919]: I0109 13:34:15.878828 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 13:34:15 crc kubenswrapper[4919]: I0109 13:34:15.878884 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 13:34:15 crc kubenswrapper[4919]: I0109 13:34:15.879047 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 13:34:15 crc kubenswrapper[4919]: I0109 13:34:15.879792 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 13:34:15 crc kubenswrapper[4919]: I0109 13:34:15.965870 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 09 13:34:15 crc kubenswrapper[4919]: E0109 13:34:15.991786 4919 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.144:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18891365a464da7a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-09 13:34:15.99032793 +0000 UTC m=+235.538167420,LastTimestamp:2026-01-09 13:34:15.99032793 +0000 UTC m=+235.538167420,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 09 13:34:16 crc kubenswrapper[4919]: I0109 13:34:16.603139 4919 generic.go:334] "Generic (PLEG): container finished" podID="3d7b247c-486d-49ca-b26c-d25bca0471bc" containerID="bb59e6036117b013224cd1fff3680355697ce2fe509a1e11f5e1b3d6a4a4e165" exitCode=0 Jan 09 13:34:16 crc kubenswrapper[4919]: I0109 13:34:16.603260 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3d7b247c-486d-49ca-b26c-d25bca0471bc","Type":"ContainerDied","Data":"bb59e6036117b013224cd1fff3680355697ce2fe509a1e11f5e1b3d6a4a4e165"} Jan 09 13:34:16 crc kubenswrapper[4919]: I0109 13:34:16.605508 4919 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Jan 09 13:34:16 crc kubenswrapper[4919]: I0109 13:34:16.606331 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"d1d64207abd9195331feab345729908ba8fd3a4370f7ea74b73f339c6b065729"} Jan 09 13:34:16 crc kubenswrapper[4919]: I0109 13:34:16.606395 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"056ed8b50fa0021009939709b2c5d5d60114d803ace9bbeb52931a76b711e5e7"} Jan 09 13:34:16 crc kubenswrapper[4919]: I0109 13:34:16.606351 4919 status_manager.go:851] "Failed to get status for pod" podUID="3d7b247c-486d-49ca-b26c-d25bca0471bc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Jan 09 13:34:16 crc kubenswrapper[4919]: I0109 13:34:16.607368 4919 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Jan 09 13:34:16 crc kubenswrapper[4919]: I0109 13:34:16.607835 4919 status_manager.go:851] "Failed to get status for pod" podUID="3d7b247c-486d-49ca-b26c-d25bca0471bc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Jan 09 13:34:16 crc kubenswrapper[4919]: I0109 13:34:16.608375 4919 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Jan 09 13:34:16 crc kubenswrapper[4919]: I0109 13:34:16.608932 4919 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Jan 09 13:34:16 crc kubenswrapper[4919]: I0109 13:34:16.610794 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 09 13:34:16 crc kubenswrapper[4919]: I0109 13:34:16.612813 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 09 13:34:16 crc kubenswrapper[4919]: I0109 13:34:16.613770 4919 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="86b3f68aa173942f9d95a3832234e0bbd5ccacf63cd73b8d110708ebd0978a4c" exitCode=0 Jan 09 13:34:16 crc kubenswrapper[4919]: I0109 13:34:16.613812 4919 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1b2d0c02e83be85aa5925b8f607809b6bbf3db3188f4e165ca42eb04fe694da6" exitCode=0 Jan 09 13:34:16 crc kubenswrapper[4919]: I0109 13:34:16.613827 4919 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d3759051eb43a6806b9525498f376abc060e88ba0d7bc9274e84eaabfdaefd87" exitCode=0 Jan 09 13:34:16 crc kubenswrapper[4919]: I0109 13:34:16.613844 4919 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8db1dada305e10f10cb4fe92cfaa3fc88e2e2ec3338fa31efc240af0888f60c6" exitCode=2 Jan 09 13:34:16 crc kubenswrapper[4919]: I0109 13:34:16.613877 4919 scope.go:117] "RemoveContainer" containerID="66d7857918228a0905bb39cd161f6ba21561cf49ce7980a1835e5b3cc68cef3c" Jan 09 13:34:17 crc kubenswrapper[4919]: I0109 13:34:17.631282 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 09 13:34:17 crc kubenswrapper[4919]: E0109 13:34:17.766374 4919 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection 
refused" Jan 09 13:34:17 crc kubenswrapper[4919]: E0109 13:34:17.766960 4919 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused" Jan 09 13:34:17 crc kubenswrapper[4919]: E0109 13:34:17.767464 4919 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused" Jan 09 13:34:17 crc kubenswrapper[4919]: E0109 13:34:17.769918 4919 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused" Jan 09 13:34:17 crc kubenswrapper[4919]: E0109 13:34:17.773473 4919 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused" Jan 09 13:34:17 crc kubenswrapper[4919]: I0109 13:34:17.773591 4919 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 09 13:34:17 crc kubenswrapper[4919]: E0109 13:34:17.773846 4919 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused" interval="200ms" Jan 09 13:34:17 crc kubenswrapper[4919]: E0109 13:34:17.974802 4919 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused" interval="400ms" Jan 09 13:34:18 crc kubenswrapper[4919]: I0109 13:34:18.114373 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 09 13:34:18 crc kubenswrapper[4919]: I0109 13:34:18.115567 4919 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Jan 09 13:34:18 crc kubenswrapper[4919]: I0109 13:34:18.116268 4919 status_manager.go:851] "Failed to get status for pod" podUID="3d7b247c-486d-49ca-b26c-d25bca0471bc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Jan 09 13:34:18 crc kubenswrapper[4919]: I0109 13:34:18.120893 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 09 13:34:18 crc kubenswrapper[4919]: I0109 13:34:18.122245 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 13:34:18 crc kubenswrapper[4919]: I0109 13:34:18.122833 4919 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Jan 09 13:34:18 crc kubenswrapper[4919]: I0109 13:34:18.123646 4919 status_manager.go:851] "Failed to get status for pod" podUID="3d7b247c-486d-49ca-b26c-d25bca0471bc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Jan 09 13:34:18 crc kubenswrapper[4919]: I0109 13:34:18.124091 4919 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Jan 09 13:34:18 crc kubenswrapper[4919]: I0109 13:34:18.219956 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 09 13:34:18 crc kubenswrapper[4919]: I0109 13:34:18.220017 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 09 13:34:18 crc kubenswrapper[4919]: I0109 13:34:18.220033 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 13:34:18 crc kubenswrapper[4919]: I0109 13:34:18.220058 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3d7b247c-486d-49ca-b26c-d25bca0471bc-var-lock\") pod \"3d7b247c-486d-49ca-b26c-d25bca0471bc\" (UID: \"3d7b247c-486d-49ca-b26c-d25bca0471bc\") " Jan 09 13:34:18 crc kubenswrapper[4919]: I0109 13:34:18.220118 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d7b247c-486d-49ca-b26c-d25bca0471bc-var-lock" (OuterVolumeSpecName: "var-lock") pod "3d7b247c-486d-49ca-b26c-d25bca0471bc" (UID: "3d7b247c-486d-49ca-b26c-d25bca0471bc"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 13:34:18 crc kubenswrapper[4919]: I0109 13:34:18.220166 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 13:34:18 crc kubenswrapper[4919]: I0109 13:34:18.220185 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 09 13:34:18 crc kubenswrapper[4919]: I0109 13:34:18.220291 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 13:34:18 crc kubenswrapper[4919]: I0109 13:34:18.220305 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3d7b247c-486d-49ca-b26c-d25bca0471bc-kubelet-dir\") pod \"3d7b247c-486d-49ca-b26c-d25bca0471bc\" (UID: \"3d7b247c-486d-49ca-b26c-d25bca0471bc\") " Jan 09 13:34:18 crc kubenswrapper[4919]: I0109 13:34:18.220371 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d7b247c-486d-49ca-b26c-d25bca0471bc-kube-api-access\") pod \"3d7b247c-486d-49ca-b26c-d25bca0471bc\" (UID: \"3d7b247c-486d-49ca-b26c-d25bca0471bc\") " Jan 09 13:34:18 crc kubenswrapper[4919]: I0109 13:34:18.220419 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d7b247c-486d-49ca-b26c-d25bca0471bc-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3d7b247c-486d-49ca-b26c-d25bca0471bc" (UID: "3d7b247c-486d-49ca-b26c-d25bca0471bc"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 13:34:18 crc kubenswrapper[4919]: I0109 13:34:18.221021 4919 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3d7b247c-486d-49ca-b26c-d25bca0471bc-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 09 13:34:18 crc kubenswrapper[4919]: I0109 13:34:18.221041 4919 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 09 13:34:18 crc kubenswrapper[4919]: I0109 13:34:18.221057 4919 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 09 13:34:18 crc kubenswrapper[4919]: I0109 13:34:18.221068 4919 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3d7b247c-486d-49ca-b26c-d25bca0471bc-var-lock\") on node \"crc\" DevicePath \"\"" Jan 09 13:34:18 crc kubenswrapper[4919]: I0109 13:34:18.221080 4919 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 09 13:34:18 crc kubenswrapper[4919]: I0109 13:34:18.231088 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d7b247c-486d-49ca-b26c-d25bca0471bc-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3d7b247c-486d-49ca-b26c-d25bca0471bc" (UID: "3d7b247c-486d-49ca-b26c-d25bca0471bc"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:34:18 crc kubenswrapper[4919]: I0109 13:34:18.323484 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d7b247c-486d-49ca-b26c-d25bca0471bc-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 09 13:34:18 crc kubenswrapper[4919]: E0109 13:34:18.376531 4919 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused" interval="800ms" Jan 09 13:34:18 crc kubenswrapper[4919]: I0109 13:34:18.641607 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3d7b247c-486d-49ca-b26c-d25bca0471bc","Type":"ContainerDied","Data":"26594057a8c2a29b268d6169e35be14ffa3c80af15ec7afb78023e6ee589d0d0"} Jan 09 13:34:18 crc kubenswrapper[4919]: I0109 13:34:18.641674 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 09 13:34:18 crc kubenswrapper[4919]: I0109 13:34:18.641684 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26594057a8c2a29b268d6169e35be14ffa3c80af15ec7afb78023e6ee589d0d0" Jan 09 13:34:18 crc kubenswrapper[4919]: I0109 13:34:18.645916 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 09 13:34:18 crc kubenswrapper[4919]: I0109 13:34:18.647166 4919 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="903fbe8270e684937c51c54aeaaaaf7cfed5e77f6d76d5874210ffcf608a9afa" exitCode=0 Jan 09 13:34:18 crc kubenswrapper[4919]: I0109 13:34:18.647289 4919 scope.go:117] "RemoveContainer" containerID="86b3f68aa173942f9d95a3832234e0bbd5ccacf63cd73b8d110708ebd0978a4c" Jan 09 13:34:18 crc kubenswrapper[4919]: I0109 13:34:18.647577 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 13:34:18 crc kubenswrapper[4919]: I0109 13:34:18.673404 4919 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Jan 09 13:34:18 crc kubenswrapper[4919]: I0109 13:34:18.674326 4919 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Jan 09 13:34:18 crc kubenswrapper[4919]: I0109 13:34:18.675023 4919 status_manager.go:851] "Failed to get status for pod" podUID="3d7b247c-486d-49ca-b26c-d25bca0471bc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Jan 09 13:34:18 crc kubenswrapper[4919]: I0109 13:34:18.678080 4919 scope.go:117] "RemoveContainer" containerID="1b2d0c02e83be85aa5925b8f607809b6bbf3db3188f4e165ca42eb04fe694da6" Jan 09 13:34:18 crc kubenswrapper[4919]: I0109 13:34:18.685681 4919 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Jan 09 13:34:18 crc kubenswrapper[4919]: I0109 13:34:18.686447 4919 status_manager.go:851] "Failed to get status for pod" podUID="3d7b247c-486d-49ca-b26c-d25bca0471bc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Jan 09 13:34:18 crc kubenswrapper[4919]: I0109 13:34:18.687478 4919 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Jan 09 13:34:18 crc kubenswrapper[4919]: I0109 13:34:18.701785 4919 scope.go:117] "RemoveContainer" containerID="d3759051eb43a6806b9525498f376abc060e88ba0d7bc9274e84eaabfdaefd87" Jan 09 13:34:18 crc kubenswrapper[4919]: I0109 13:34:18.725292 4919 scope.go:117] "RemoveContainer" containerID="8db1dada305e10f10cb4fe92cfaa3fc88e2e2ec3338fa31efc240af0888f60c6" Jan 09 13:34:18 crc kubenswrapper[4919]: I0109 13:34:18.750603 4919 scope.go:117] "RemoveContainer" containerID="903fbe8270e684937c51c54aeaaaaf7cfed5e77f6d76d5874210ffcf608a9afa" Jan 09 13:34:18 crc kubenswrapper[4919]: I0109 13:34:18.767991 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 09 13:34:18 crc kubenswrapper[4919]: I0109 13:34:18.776888 4919 scope.go:117] "RemoveContainer" containerID="23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50" Jan 09 13:34:18 crc kubenswrapper[4919]: I0109 13:34:18.816060 4919 scope.go:117] "RemoveContainer" containerID="86b3f68aa173942f9d95a3832234e0bbd5ccacf63cd73b8d110708ebd0978a4c" Jan 09 13:34:18 crc kubenswrapper[4919]: E0109 13:34:18.817384 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86b3f68aa173942f9d95a3832234e0bbd5ccacf63cd73b8d110708ebd0978a4c\": container with ID starting with 86b3f68aa173942f9d95a3832234e0bbd5ccacf63cd73b8d110708ebd0978a4c not found: ID does not exist" containerID="86b3f68aa173942f9d95a3832234e0bbd5ccacf63cd73b8d110708ebd0978a4c" Jan 09 13:34:18 crc kubenswrapper[4919]: I0109 13:34:18.817427 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86b3f68aa173942f9d95a3832234e0bbd5ccacf63cd73b8d110708ebd0978a4c"} err="failed to get container status \"86b3f68aa173942f9d95a3832234e0bbd5ccacf63cd73b8d110708ebd0978a4c\": rpc error: code = NotFound desc = could not find container \"86b3f68aa173942f9d95a3832234e0bbd5ccacf63cd73b8d110708ebd0978a4c\": container with ID starting with 86b3f68aa173942f9d95a3832234e0bbd5ccacf63cd73b8d110708ebd0978a4c not found: ID does not exist" Jan 09 13:34:18 crc kubenswrapper[4919]: I0109 13:34:18.817475 4919 scope.go:117] "RemoveContainer" containerID="1b2d0c02e83be85aa5925b8f607809b6bbf3db3188f4e165ca42eb04fe694da6" Jan 09 13:34:18 crc kubenswrapper[4919]: E0109 13:34:18.818471 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b2d0c02e83be85aa5925b8f607809b6bbf3db3188f4e165ca42eb04fe694da6\": container with ID starting with 1b2d0c02e83be85aa5925b8f607809b6bbf3db3188f4e165ca42eb04fe694da6 not found: ID does not exist" containerID="1b2d0c02e83be85aa5925b8f607809b6bbf3db3188f4e165ca42eb04fe694da6" Jan 09 13:34:18 crc kubenswrapper[4919]: I0109 13:34:18.818502 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b2d0c02e83be85aa5925b8f607809b6bbf3db3188f4e165ca42eb04fe694da6"} err="failed to get container status \"1b2d0c02e83be85aa5925b8f607809b6bbf3db3188f4e165ca42eb04fe694da6\": rpc error: code = NotFound desc = could not find container \"1b2d0c02e83be85aa5925b8f607809b6bbf3db3188f4e165ca42eb04fe694da6\": container with ID starting with 
1b2d0c02e83be85aa5925b8f607809b6bbf3db3188f4e165ca42eb04fe694da6 not found: ID does not exist" Jan 09 13:34:18 crc kubenswrapper[4919]: I0109 13:34:18.818524 4919 scope.go:117] "RemoveContainer" containerID="d3759051eb43a6806b9525498f376abc060e88ba0d7bc9274e84eaabfdaefd87" Jan 09 13:34:18 crc kubenswrapper[4919]: E0109 13:34:18.819097 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3759051eb43a6806b9525498f376abc060e88ba0d7bc9274e84eaabfdaefd87\": container with ID starting with d3759051eb43a6806b9525498f376abc060e88ba0d7bc9274e84eaabfdaefd87 not found: ID does not exist" containerID="d3759051eb43a6806b9525498f376abc060e88ba0d7bc9274e84eaabfdaefd87" Jan 09 13:34:18 crc kubenswrapper[4919]: I0109 13:34:18.819179 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3759051eb43a6806b9525498f376abc060e88ba0d7bc9274e84eaabfdaefd87"} err="failed to get container status \"d3759051eb43a6806b9525498f376abc060e88ba0d7bc9274e84eaabfdaefd87\": rpc error: code = NotFound desc = could not find container \"d3759051eb43a6806b9525498f376abc060e88ba0d7bc9274e84eaabfdaefd87\": container with ID starting with d3759051eb43a6806b9525498f376abc060e88ba0d7bc9274e84eaabfdaefd87 not found: ID does not exist" Jan 09 13:34:18 crc kubenswrapper[4919]: I0109 13:34:18.819479 4919 scope.go:117] "RemoveContainer" containerID="8db1dada305e10f10cb4fe92cfaa3fc88e2e2ec3338fa31efc240af0888f60c6" Jan 09 13:34:18 crc kubenswrapper[4919]: E0109 13:34:18.820438 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8db1dada305e10f10cb4fe92cfaa3fc88e2e2ec3338fa31efc240af0888f60c6\": container with ID starting with 8db1dada305e10f10cb4fe92cfaa3fc88e2e2ec3338fa31efc240af0888f60c6 not found: ID does not exist" containerID="8db1dada305e10f10cb4fe92cfaa3fc88e2e2ec3338fa31efc240af0888f60c6" Jan 09 13:34:18 crc kubenswrapper[4919]: I0109 13:34:18.820538 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8db1dada305e10f10cb4fe92cfaa3fc88e2e2ec3338fa31efc240af0888f60c6"} err="failed to get container status \"8db1dada305e10f10cb4fe92cfaa3fc88e2e2ec3338fa31efc240af0888f60c6\": rpc error: code = NotFound desc = could not find container \"8db1dada305e10f10cb4fe92cfaa3fc88e2e2ec3338fa31efc240af0888f60c6\": container with ID starting with 8db1dada305e10f10cb4fe92cfaa3fc88e2e2ec3338fa31efc240af0888f60c6 not found: ID does not exist" Jan 09 13:34:18 crc kubenswrapper[4919]: I0109 13:34:18.820571 4919 scope.go:117] "RemoveContainer" containerID="903fbe8270e684937c51c54aeaaaaf7cfed5e77f6d76d5874210ffcf608a9afa" Jan 09 13:34:18 crc kubenswrapper[4919]: E0109 13:34:18.821074 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"903fbe8270e684937c51c54aeaaaaf7cfed5e77f6d76d5874210ffcf608a9afa\": container with ID starting with 903fbe8270e684937c51c54aeaaaaf7cfed5e77f6d76d5874210ffcf608a9afa not found: ID does not exist" containerID="903fbe8270e684937c51c54aeaaaaf7cfed5e77f6d76d5874210ffcf608a9afa" Jan 09 13:34:18 crc kubenswrapper[4919]: I0109 13:34:18.821137 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"903fbe8270e684937c51c54aeaaaaf7cfed5e77f6d76d5874210ffcf608a9afa"} err="failed to get container status \"903fbe8270e684937c51c54aeaaaaf7cfed5e77f6d76d5874210ffcf608a9afa\": rpc 
error: code = NotFound desc = could not find container \"903fbe8270e684937c51c54aeaaaaf7cfed5e77f6d76d5874210ffcf608a9afa\": container with ID starting with 903fbe8270e684937c51c54aeaaaaf7cfed5e77f6d76d5874210ffcf608a9afa not found: ID does not exist"
Jan 09 13:34:18 crc kubenswrapper[4919]: I0109 13:34:18.821162 4919 scope.go:117] "RemoveContainer" containerID="23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50"
Jan 09 13:34:18 crc kubenswrapper[4919]: E0109 13:34:18.823140 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\": container with ID starting with 23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50 not found: ID does not exist" containerID="23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50"
Jan 09 13:34:18 crc kubenswrapper[4919]: I0109 13:34:18.823205 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50"} err="failed to get container status \"23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\": rpc error: code = NotFound desc = could not find container \"23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50\": container with ID starting with 23485bb12981f45b51ee34edf82dbfba54daf74d303f20dba8afb1b24f8e5e50 not found: ID does not exist"
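The RemoveContainer / ContainerStatus-NotFound / DeleteContainer-returned-error triplets above are a second cleanup pass: the containers are already gone, the runtime answers NotFound, and the kubelet records the error and moves on, which makes deletion effectively idempotent. A Go sketch of that tolerate-NotFound pattern; errNotFound and removeContainer are illustrative stand-ins, not CRI API names:

package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("container not found")

// removeContainer deletes id from a toy container table, returning a
// wrapped NotFound error when it was already removed.
func removeContainer(id string, existing map[string]bool) error {
	if !existing[id] {
		return fmt.Errorf("failed to get container status %q: %w", id, errNotFound)
	}
	delete(existing, id)
	return nil
}

func main() {
	containers := map[string]bool{"86b3f68a": true}
	// Second attempt hits the already-deleted case, as in the log above.
	for _, id := range []string{"86b3f68a", "86b3f68a"} {
		if err := removeContainer(id, containers); errors.Is(err, errNotFound) {
			fmt.Println("tolerated:", err) // cleanup already done; not fatal
		} else if err == nil {
			fmt.Println("removed", id)
		}
	}
}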
Jan 09 13:34:19 crc kubenswrapper[4919]: E0109 13:34:19.179471 4919 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused" interval="1.6s"
Jan 09 13:34:20 crc kubenswrapper[4919]: I0109 13:34:20.769081 4919 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.144:6443: connect: connection refused"
Jan 09 13:34:20 crc kubenswrapper[4919]: I0109 13:34:20.769718 4919 status_manager.go:851] "Failed to get status for pod" podUID="3d7b247c-486d-49ca-b26c-d25bca0471bc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.144:6443: connect: connection refused"
Jan 09 13:34:20 crc kubenswrapper[4919]: E0109 13:34:20.781196 4919 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused" interval="3.2s"
Jan 09 13:34:22 crc kubenswrapper[4919]: E0109 13:34:22.952634 4919 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.144:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18891365a464da7a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-09 13:34:15.99032793 +0000 UTC m=+235.538167420,LastTimestamp:2026-01-09 13:34:15.99032793 +0000 UTC m=+235.538167420,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 09 13:34:23 crc kubenswrapper[4919]: E0109 13:34:23.982492 4919 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused" interval="6.4s"
Jan 09 13:34:26 crc kubenswrapper[4919]: I0109 13:34:26.752492 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 09 13:34:26 crc kubenswrapper[4919]: I0109 13:34:26.756092 4919 status_manager.go:851] "Failed to get status for pod" podUID="3d7b247c-486d-49ca-b26c-d25bca0471bc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.144:6443: connect: connection refused"
Jan 09 13:34:26 crc kubenswrapper[4919]: I0109 13:34:26.757347 4919 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.144:6443: connect: connection refused"
Jan 09 13:34:26 crc kubenswrapper[4919]: I0109 13:34:26.773927 4919 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a055b487-5d63-4265-ac12-735612354e73"
Jan 09 13:34:26 crc kubenswrapper[4919]: I0109 13:34:26.773977 4919 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a055b487-5d63-4265-ac12-735612354e73"
Jan 09 13:34:26 crc kubenswrapper[4919]: E0109 13:34:26.774676 4919 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 09 13:34:26 crc kubenswrapper[4919]: I0109 13:34:26.775388 4919 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 13:34:27 crc kubenswrapper[4919]: I0109 13:34:27.726457 4919 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="0fae8265377cec02444835acb34532677569088937cdf7cfe051abf8cb30ce82" exitCode=0 Jan 09 13:34:27 crc kubenswrapper[4919]: I0109 13:34:27.726636 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"0fae8265377cec02444835acb34532677569088937cdf7cfe051abf8cb30ce82"} Jan 09 13:34:27 crc kubenswrapper[4919]: I0109 13:34:27.727080 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"73de92a9404d121f93564e03277d21c6abca9b900533161e27cb781712f0cd6d"} Jan 09 13:34:27 crc kubenswrapper[4919]: I0109 13:34:27.727671 4919 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a055b487-5d63-4265-ac12-735612354e73" Jan 09 13:34:27 crc kubenswrapper[4919]: I0109 13:34:27.727705 4919 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a055b487-5d63-4265-ac12-735612354e73" Jan 09 13:34:27 crc kubenswrapper[4919]: E0109 13:34:27.728454 4919 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 13:34:27 crc kubenswrapper[4919]: I0109 13:34:27.729150 4919 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Jan 09 13:34:27 crc kubenswrapper[4919]: I0109 13:34:27.730067 4919 status_manager.go:851] "Failed to get status for pod" podUID="3d7b247c-486d-49ca-b26c-d25bca0471bc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Jan 09 13:34:28 crc kubenswrapper[4919]: I0109 13:34:28.736321 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"333301c17769d1f3292aaa1a6a0c6ec24e1163e1effe3e55c76e7cd3a98d4393"} Jan 09 13:34:28 crc kubenswrapper[4919]: I0109 13:34:28.736951 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f27ec9871212ce85e4f9adcbf1640c610bd48a7647bb6d73b75ab28a6728b665"} Jan 09 13:34:29 crc kubenswrapper[4919]: I0109 13:34:29.759072 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1676b82dbc2964fb02b0e79ef87ba2a3b1e0fdb93329aaf840bcf63e3ae46ef4"} Jan 09 13:34:29 crc kubenswrapper[4919]: I0109 13:34:29.759522 4919 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ac3cf810b3e0753beb5541ffdd8947bf3ea44d305d53ef461b08b19d7def9712"} Jan 09 13:34:29 crc kubenswrapper[4919]: I0109 13:34:29.759535 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3e2311c6a71077228f6ed479611eafc89b60785f2f98b5e6b8cfe1db3c2fc707"} Jan 09 13:34:29 crc kubenswrapper[4919]: I0109 13:34:29.759699 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 13:34:29 crc kubenswrapper[4919]: I0109 13:34:29.759812 4919 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a055b487-5d63-4265-ac12-735612354e73" Jan 09 13:34:29 crc kubenswrapper[4919]: I0109 13:34:29.759844 4919 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a055b487-5d63-4265-ac12-735612354e73" Jan 09 13:34:30 crc kubenswrapper[4919]: I0109 13:34:30.768106 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 09 13:34:30 crc kubenswrapper[4919]: I0109 13:34:30.768167 4919 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="210c615c5bcdaca954329d575b66f271921f54cd58ff6d75a90d5fc64bd11d83" exitCode=1 Jan 09 13:34:30 crc kubenswrapper[4919]: I0109 13:34:30.768243 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"210c615c5bcdaca954329d575b66f271921f54cd58ff6d75a90d5fc64bd11d83"} Jan 09 13:34:30 crc kubenswrapper[4919]: I0109 13:34:30.768877 4919 scope.go:117] "RemoveContainer" containerID="210c615c5bcdaca954329d575b66f271921f54cd58ff6d75a90d5fc64bd11d83" Jan 09 13:34:30 crc kubenswrapper[4919]: I0109 13:34:30.998373 4919 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 09 13:34:31 crc kubenswrapper[4919]: I0109 13:34:31.775653 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 13:34:31 crc kubenswrapper[4919]: I0109 13:34:31.776236 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 13:34:31 crc kubenswrapper[4919]: I0109 13:34:31.779119 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 09 13:34:31 crc kubenswrapper[4919]: I0109 13:34:31.779179 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2b1f18fffc262c63c940201e123e793409ca2ad4043d01cd826cb668c4f0028d"} Jan 09 13:34:31 crc kubenswrapper[4919]: I0109 13:34:31.783924 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 13:34:34 crc kubenswrapper[4919]: 
I0109 13:34:34.777476 4919 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 13:34:34 crc kubenswrapper[4919]: I0109 13:34:34.802791 4919 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a055b487-5d63-4265-ac12-735612354e73" Jan 09 13:34:34 crc kubenswrapper[4919]: I0109 13:34:34.802843 4919 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a055b487-5d63-4265-ac12-735612354e73" Jan 09 13:34:34 crc kubenswrapper[4919]: I0109 13:34:34.809803 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 13:34:34 crc kubenswrapper[4919]: I0109 13:34:34.849630 4919 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="bedbb923-b55f-4a78-ab15-4bdc05f6b622" Jan 09 13:34:35 crc kubenswrapper[4919]: I0109 13:34:35.808279 4919 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a055b487-5d63-4265-ac12-735612354e73" Jan 09 13:34:35 crc kubenswrapper[4919]: I0109 13:34:35.808306 4919 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a055b487-5d63-4265-ac12-735612354e73" Jan 09 13:34:35 crc kubenswrapper[4919]: I0109 13:34:35.818503 4919 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="bedbb923-b55f-4a78-ab15-4bdc05f6b622" Jan 09 13:34:37 crc kubenswrapper[4919]: I0109 13:34:37.390530 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-s2tz5" podUID="0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403" containerName="oauth-openshift" containerID="cri-o://e0c06a08106cf370a189559436e74217b9819e47fef00ac75e69846f6e0e62e2" gracePeriod=15 Jan 09 13:34:37 crc kubenswrapper[4919]: I0109 13:34:37.475430 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 09 13:34:37 crc kubenswrapper[4919]: I0109 13:34:37.479724 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 09 13:34:37 crc kubenswrapper[4919]: I0109 13:34:37.816739 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-s2tz5" Jan 09 13:34:37 crc kubenswrapper[4919]: I0109 13:34:37.823045 4919 generic.go:334] "Generic (PLEG): container finished" podID="0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403" containerID="e0c06a08106cf370a189559436e74217b9819e47fef00ac75e69846f6e0e62e2" exitCode=0 Jan 09 13:34:37 crc kubenswrapper[4919]: I0109 13:34:37.823841 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-s2tz5" Jan 09 13:34:37 crc kubenswrapper[4919]: I0109 13:34:37.823996 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-s2tz5" event={"ID":"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403","Type":"ContainerDied","Data":"e0c06a08106cf370a189559436e74217b9819e47fef00ac75e69846f6e0e62e2"} Jan 09 13:34:37 crc kubenswrapper[4919]: I0109 13:34:37.824019 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-s2tz5" event={"ID":"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403","Type":"ContainerDied","Data":"87577580de755238075e78750f0f03636007e0259d098a8f1ae2bed732b9fed1"} Jan 09 13:34:37 crc kubenswrapper[4919]: I0109 13:34:37.824036 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 09 13:34:37 crc kubenswrapper[4919]: I0109 13:34:37.824056 4919 scope.go:117] "RemoveContainer" containerID="e0c06a08106cf370a189559436e74217b9819e47fef00ac75e69846f6e0e62e2" Jan 09 13:34:37 crc kubenswrapper[4919]: I0109 13:34:37.844913 4919 scope.go:117] "RemoveContainer" containerID="e0c06a08106cf370a189559436e74217b9819e47fef00ac75e69846f6e0e62e2" Jan 09 13:34:37 crc kubenswrapper[4919]: E0109 13:34:37.845522 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0c06a08106cf370a189559436e74217b9819e47fef00ac75e69846f6e0e62e2\": container with ID starting with e0c06a08106cf370a189559436e74217b9819e47fef00ac75e69846f6e0e62e2 not found: ID does not exist" containerID="e0c06a08106cf370a189559436e74217b9819e47fef00ac75e69846f6e0e62e2" Jan 09 13:34:37 crc kubenswrapper[4919]: I0109 13:34:37.845572 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0c06a08106cf370a189559436e74217b9819e47fef00ac75e69846f6e0e62e2"} err="failed to get container status \"e0c06a08106cf370a189559436e74217b9819e47fef00ac75e69846f6e0e62e2\": rpc error: code = NotFound desc = could not find container \"e0c06a08106cf370a189559436e74217b9819e47fef00ac75e69846f6e0e62e2\": container with ID starting with e0c06a08106cf370a189559436e74217b9819e47fef00ac75e69846f6e0e62e2 not found: ID does not exist" Jan 09 13:34:37 crc kubenswrapper[4919]: I0109 13:34:37.922103 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-system-serving-cert\") pod \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\" (UID: \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\") " Jan 09 13:34:37 crc kubenswrapper[4919]: I0109 13:34:37.923096 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-system-cliconfig\") pod \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\" (UID: \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\") " Jan 09 13:34:37 crc kubenswrapper[4919]: I0109 13:34:37.923128 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-user-idp-0-file-data\") pod \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\" (UID: \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\") " Jan 09 13:34:37 crc 
kubenswrapper[4919]: I0109 13:34:37.923198 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-system-session\") pod \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\" (UID: \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\") " Jan 09 13:34:37 crc kubenswrapper[4919]: I0109 13:34:37.923234 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-system-ocp-branding-template\") pod \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\" (UID: \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\") " Jan 09 13:34:37 crc kubenswrapper[4919]: I0109 13:34:37.923277 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-system-service-ca\") pod \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\" (UID: \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\") " Jan 09 13:34:37 crc kubenswrapper[4919]: I0109 13:34:37.923317 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-system-trusted-ca-bundle\") pod \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\" (UID: \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\") " Jan 09 13:34:37 crc kubenswrapper[4919]: I0109 13:34:37.923336 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-audit-dir\") pod \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\" (UID: \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\") " Jan 09 13:34:37 crc kubenswrapper[4919]: I0109 13:34:37.923358 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gw5wl\" (UniqueName: \"kubernetes.io/projected/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-kube-api-access-gw5wl\") pod \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\" (UID: \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\") " Jan 09 13:34:37 crc kubenswrapper[4919]: I0109 13:34:37.923382 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-audit-policies\") pod \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\" (UID: \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\") " Jan 09 13:34:37 crc kubenswrapper[4919]: I0109 13:34:37.923414 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-user-template-error\") pod \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\" (UID: \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\") " Jan 09 13:34:37 crc kubenswrapper[4919]: I0109 13:34:37.923449 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-user-template-login\") pod \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\" (UID: \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\") " Jan 09 13:34:37 crc kubenswrapper[4919]: I0109 13:34:37.923467 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-system-router-certs\") pod \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\" (UID: \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\") " Jan 09 13:34:37 crc kubenswrapper[4919]: I0109 13:34:37.923512 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-user-template-provider-selection\") pod \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\" (UID: \"0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403\") " Jan 09 13:34:37 crc kubenswrapper[4919]: I0109 13:34:37.924313 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403" (UID: "0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 13:34:37 crc kubenswrapper[4919]: I0109 13:34:37.925135 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403" (UID: "0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:34:37 crc kubenswrapper[4919]: I0109 13:34:37.925625 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403" (UID: "0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:34:37 crc kubenswrapper[4919]: I0109 13:34:37.925694 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403" (UID: "0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:34:37 crc kubenswrapper[4919]: I0109 13:34:37.925991 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403" (UID: "0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:34:37 crc kubenswrapper[4919]: I0109 13:34:37.929449 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403" (UID: "0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:34:37 crc kubenswrapper[4919]: I0109 13:34:37.929721 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-kube-api-access-gw5wl" (OuterVolumeSpecName: "kube-api-access-gw5wl") pod "0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403" (UID: "0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403"). InnerVolumeSpecName "kube-api-access-gw5wl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:34:37 crc kubenswrapper[4919]: I0109 13:34:37.935057 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403" (UID: "0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:34:37 crc kubenswrapper[4919]: I0109 13:34:37.935291 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403" (UID: "0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:34:37 crc kubenswrapper[4919]: I0109 13:34:37.935438 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403" (UID: "0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:34:37 crc kubenswrapper[4919]: I0109 13:34:37.935680 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403" (UID: "0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:34:37 crc kubenswrapper[4919]: I0109 13:34:37.935701 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403" (UID: "0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:34:37 crc kubenswrapper[4919]: I0109 13:34:37.936067 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403" (UID: "0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:34:37 crc kubenswrapper[4919]: I0109 13:34:37.936264 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403" (UID: "0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:34:38 crc kubenswrapper[4919]: I0109 13:34:38.025343 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gw5wl\" (UniqueName: \"kubernetes.io/projected/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-kube-api-access-gw5wl\") on node \"crc\" DevicePath \"\"" Jan 09 13:34:38 crc kubenswrapper[4919]: I0109 13:34:38.025398 4919 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 09 13:34:38 crc kubenswrapper[4919]: I0109 13:34:38.025417 4919 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 09 13:34:38 crc kubenswrapper[4919]: I0109 13:34:38.025436 4919 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 09 13:34:38 crc kubenswrapper[4919]: I0109 13:34:38.025452 4919 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 09 13:34:38 crc kubenswrapper[4919]: I0109 13:34:38.025496 4919 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 09 13:34:38 crc kubenswrapper[4919]: I0109 13:34:38.025509 4919 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 13:34:38 crc kubenswrapper[4919]: I0109 13:34:38.025518 4919 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 09 13:34:38 crc kubenswrapper[4919]: I0109 13:34:38.025526 4919 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 09 13:34:38 crc kubenswrapper[4919]: I0109 13:34:38.025534 4919 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 09 13:34:38 crc kubenswrapper[4919]: I0109 
13:34:38.025543 4919 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 09 13:34:38 crc kubenswrapper[4919]: I0109 13:34:38.025552 4919 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 09 13:34:38 crc kubenswrapper[4919]: I0109 13:34:38.025561 4919 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 13:34:38 crc kubenswrapper[4919]: I0109 13:34:38.025570 4919 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 09 13:34:41 crc kubenswrapper[4919]: I0109 13:34:41.112507 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 09 13:34:41 crc kubenswrapper[4919]: I0109 13:34:41.578982 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 09 13:34:41 crc kubenswrapper[4919]: I0109 13:34:41.672026 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 09 13:34:42 crc kubenswrapper[4919]: I0109 13:34:42.904520 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 09 13:34:43 crc kubenswrapper[4919]: I0109 13:34:43.229706 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 09 13:34:44 crc kubenswrapper[4919]: I0109 13:34:44.651832 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 09 13:34:45 crc kubenswrapper[4919]: I0109 13:34:45.286565 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 09 13:34:45 crc kubenswrapper[4919]: I0109 13:34:45.665386 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 09 13:34:46 crc kubenswrapper[4919]: I0109 13:34:46.213152 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 09 13:34:46 crc kubenswrapper[4919]: I0109 13:34:46.233451 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 09 13:34:46 crc kubenswrapper[4919]: I0109 13:34:46.276362 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 09 13:34:46 crc kubenswrapper[4919]: I0109 13:34:46.355000 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 09 13:34:46 crc kubenswrapper[4919]: I0109 13:34:46.764033 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 09 13:34:47 crc kubenswrapper[4919]: I0109 13:34:46.999964 
4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 09 13:34:47 crc kubenswrapper[4919]: I0109 13:34:47.094402 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 09 13:34:47 crc kubenswrapper[4919]: I0109 13:34:47.235126 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 09 13:34:47 crc kubenswrapper[4919]: I0109 13:34:47.278178 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 09 13:34:47 crc kubenswrapper[4919]: I0109 13:34:47.321914 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 09 13:34:47 crc kubenswrapper[4919]: I0109 13:34:47.331733 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 09 13:34:47 crc kubenswrapper[4919]: I0109 13:34:47.753493 4919 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 09 13:34:47 crc kubenswrapper[4919]: I0109 13:34:47.757721 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=32.757701454 podStartE2EDuration="32.757701454s" podCreationTimestamp="2026-01-09 13:34:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:34:34.795786821 +0000 UTC m=+254.343626271" watchObservedRunningTime="2026-01-09 13:34:47.757701454 +0000 UTC m=+267.305540904" Jan 09 13:34:47 crc kubenswrapper[4919]: I0109 13:34:47.758225 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-s2tz5"] Jan 09 13:34:47 crc kubenswrapper[4919]: I0109 13:34:47.758300 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 09 13:34:47 crc kubenswrapper[4919]: I0109 13:34:47.762097 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 13:34:47 crc kubenswrapper[4919]: I0109 13:34:47.777948 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=13.777931772 podStartE2EDuration="13.777931772s" podCreationTimestamp="2026-01-09 13:34:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:34:47.775592419 +0000 UTC m=+267.323431869" watchObservedRunningTime="2026-01-09 13:34:47.777931772 +0000 UTC m=+267.325771222" Jan 09 13:34:47 crc kubenswrapper[4919]: I0109 13:34:47.923921 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 09 13:34:47 crc kubenswrapper[4919]: I0109 13:34:47.943554 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 09 13:34:48 crc kubenswrapper[4919]: I0109 13:34:48.063525 4919 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 09 13:34:48 crc kubenswrapper[4919]: I0109 13:34:48.313725 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 09 13:34:48 crc kubenswrapper[4919]: I0109 13:34:48.575468 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 09 13:34:48 crc kubenswrapper[4919]: I0109 13:34:48.739530 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 09 13:34:48 crc kubenswrapper[4919]: I0109 13:34:48.758721 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403" path="/var/lib/kubelet/pods/0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403/volumes" Jan 09 13:34:48 crc kubenswrapper[4919]: I0109 13:34:48.770893 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 09 13:34:48 crc kubenswrapper[4919]: I0109 13:34:48.841300 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 09 13:34:48 crc kubenswrapper[4919]: I0109 13:34:48.899464 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 09 13:34:49 crc kubenswrapper[4919]: I0109 13:34:49.001601 4919 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 09 13:34:49 crc kubenswrapper[4919]: I0109 13:34:49.042992 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 09 13:34:49 crc kubenswrapper[4919]: I0109 13:34:49.058593 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 09 13:34:49 crc kubenswrapper[4919]: I0109 13:34:49.301151 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 09 13:34:49 crc kubenswrapper[4919]: I0109 13:34:49.322795 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 09 13:34:49 crc kubenswrapper[4919]: I0109 13:34:49.324682 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 09 13:34:49 crc kubenswrapper[4919]: I0109 13:34:49.588607 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 09 13:34:49 crc kubenswrapper[4919]: I0109 13:34:49.710415 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 09 13:34:49 crc kubenswrapper[4919]: I0109 13:34:49.727302 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 09 13:34:49 crc kubenswrapper[4919]: I0109 13:34:49.760939 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 09 13:34:50 crc kubenswrapper[4919]: I0109 13:34:50.166065 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 09 13:34:50 crc kubenswrapper[4919]: I0109 13:34:50.225962 4919 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 09 13:34:50 crc kubenswrapper[4919]: I0109 13:34:50.248967 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 09 13:34:50 crc kubenswrapper[4919]: I0109 13:34:50.268235 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 09 13:34:50 crc kubenswrapper[4919]: I0109 13:34:50.363090 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 09 13:34:50 crc kubenswrapper[4919]: I0109 13:34:50.377506 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 09 13:34:50 crc kubenswrapper[4919]: I0109 13:34:50.501842 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 09 13:34:50 crc kubenswrapper[4919]: I0109 13:34:50.602183 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 09 13:34:50 crc kubenswrapper[4919]: I0109 13:34:50.623524 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 09 13:34:50 crc kubenswrapper[4919]: I0109 13:34:50.671248 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 09 13:34:50 crc kubenswrapper[4919]: I0109 13:34:50.767104 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 09 13:34:50 crc kubenswrapper[4919]: I0109 13:34:50.780270 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 09 13:34:50 crc kubenswrapper[4919]: I0109 13:34:50.875856 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 09 13:34:50 crc kubenswrapper[4919]: I0109 13:34:50.925739 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 09 13:34:51 crc kubenswrapper[4919]: I0109 13:34:51.022886 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 09 13:34:51 crc kubenswrapper[4919]: I0109 13:34:51.029386 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 09 13:34:51 crc kubenswrapper[4919]: I0109 13:34:51.055756 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 09 13:34:51 crc kubenswrapper[4919]: I0109 13:34:51.134776 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 09 13:34:51 crc kubenswrapper[4919]: I0109 13:34:51.139259 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 09 13:34:51 crc kubenswrapper[4919]: I0109 13:34:51.142543 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 09 13:34:51 crc kubenswrapper[4919]: I0109 
13:34:51.203192 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 09 13:34:51 crc kubenswrapper[4919]: I0109 13:34:51.222896 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 09 13:34:51 crc kubenswrapper[4919]: I0109 13:34:51.256199 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 09 13:34:51 crc kubenswrapper[4919]: I0109 13:34:51.258387 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 09 13:34:51 crc kubenswrapper[4919]: I0109 13:34:51.293621 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 09 13:34:51 crc kubenswrapper[4919]: I0109 13:34:51.357785 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 09 13:34:51 crc kubenswrapper[4919]: I0109 13:34:51.376626 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 09 13:34:51 crc kubenswrapper[4919]: I0109 13:34:51.434833 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 09 13:34:51 crc kubenswrapper[4919]: I0109 13:34:51.450390 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 09 13:34:51 crc kubenswrapper[4919]: I0109 13:34:51.480695 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 09 13:34:51 crc kubenswrapper[4919]: I0109 13:34:51.558593 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 09 13:34:51 crc kubenswrapper[4919]: I0109 13:34:51.573598 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 09 13:34:51 crc kubenswrapper[4919]: I0109 13:34:51.642344 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 09 13:34:51 crc kubenswrapper[4919]: I0109 13:34:51.666359 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 09 13:34:51 crc kubenswrapper[4919]: I0109 13:34:51.709426 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 09 13:34:51 crc kubenswrapper[4919]: I0109 13:34:51.714811 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 09 13:34:51 crc kubenswrapper[4919]: I0109 13:34:51.877903 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 09 13:34:51 crc kubenswrapper[4919]: I0109 13:34:51.891156 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 09 13:34:51 crc kubenswrapper[4919]: I0109 13:34:51.920362 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 09 13:34:51 crc kubenswrapper[4919]: I0109 13:34:51.944381 4919 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 09 13:34:51 crc kubenswrapper[4919]: I0109 13:34:51.948947 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 09 13:34:51 crc kubenswrapper[4919]: I0109 13:34:51.973503 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 09 13:34:52 crc kubenswrapper[4919]: I0109 13:34:52.025397 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 09 13:34:52 crc kubenswrapper[4919]: I0109 13:34:52.036472 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 09 13:34:52 crc kubenswrapper[4919]: I0109 13:34:52.214973 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 09 13:34:52 crc kubenswrapper[4919]: I0109 13:34:52.256557 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 09 13:34:52 crc kubenswrapper[4919]: I0109 13:34:52.263166 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 09 13:34:52 crc kubenswrapper[4919]: I0109 13:34:52.593188 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 09 13:34:52 crc kubenswrapper[4919]: I0109 13:34:52.606748 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 09 13:34:52 crc kubenswrapper[4919]: I0109 13:34:52.628393 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 09 13:34:52 crc kubenswrapper[4919]: I0109 13:34:52.854133 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 09 13:34:52 crc kubenswrapper[4919]: I0109 13:34:52.902130 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 09 13:34:52 crc kubenswrapper[4919]: I0109 13:34:52.903029 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 09 13:34:52 crc kubenswrapper[4919]: I0109 13:34:52.904985 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 09 13:34:52 crc kubenswrapper[4919]: I0109 13:34:52.941719 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 09 13:34:52 crc kubenswrapper[4919]: I0109 13:34:52.965470 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 09 13:34:53 crc kubenswrapper[4919]: I0109 13:34:53.075349 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 09 13:34:53 crc kubenswrapper[4919]: I0109 13:34:53.150094 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 09 13:34:53 crc kubenswrapper[4919]: 
I0109 13:34:53.175676 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 09 13:34:53 crc kubenswrapper[4919]: I0109 13:34:53.233922 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 09 13:34:53 crc kubenswrapper[4919]: I0109 13:34:53.287026 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 09 13:34:53 crc kubenswrapper[4919]: I0109 13:34:53.343899 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 09 13:34:53 crc kubenswrapper[4919]: I0109 13:34:53.364493 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 09 13:34:53 crc kubenswrapper[4919]: I0109 13:34:53.375642 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 09 13:34:53 crc kubenswrapper[4919]: I0109 13:34:53.388160 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 09 13:34:53 crc kubenswrapper[4919]: I0109 13:34:53.437034 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 09 13:34:53 crc kubenswrapper[4919]: I0109 13:34:53.526823 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 09 13:34:53 crc kubenswrapper[4919]: I0109 13:34:53.589720 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 09 13:34:53 crc kubenswrapper[4919]: I0109 13:34:53.629316 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 09 13:34:53 crc kubenswrapper[4919]: I0109 13:34:53.774428 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 09 13:34:53 crc kubenswrapper[4919]: I0109 13:34:53.804699 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 09 13:34:53 crc kubenswrapper[4919]: I0109 13:34:53.903996 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 09 13:34:53 crc kubenswrapper[4919]: I0109 13:34:53.907714 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 09 13:34:53 crc kubenswrapper[4919]: I0109 13:34:53.950383 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 09 13:34:53 crc kubenswrapper[4919]: I0109 13:34:53.997182 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 09 13:34:54 crc kubenswrapper[4919]: I0109 13:34:54.003183 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 09 13:34:54 crc kubenswrapper[4919]: I0109 13:34:54.086679 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 09 13:34:54 crc kubenswrapper[4919]: I0109 13:34:54.088950 4919 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 09 13:34:54 crc kubenswrapper[4919]: I0109 13:34:54.094334 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 09 13:34:54 crc kubenswrapper[4919]: I0109 13:34:54.099078 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Jan 09 13:34:54 crc kubenswrapper[4919]: I0109 13:34:54.202696 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 09 13:34:54 crc kubenswrapper[4919]: I0109 13:34:54.328202 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 09 13:34:54 crc kubenswrapper[4919]: I0109 13:34:54.335130 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 09 13:34:54 crc kubenswrapper[4919]: I0109 13:34:54.361885 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 09 13:34:54 crc kubenswrapper[4919]: I0109 13:34:54.440125 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 09 13:34:54 crc kubenswrapper[4919]: I0109 13:34:54.491792 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 09 13:34:54 crc kubenswrapper[4919]: I0109 13:34:54.617501 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 09 13:34:54 crc kubenswrapper[4919]: I0109 13:34:54.647835 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 09 13:34:54 crc kubenswrapper[4919]: I0109 13:34:54.680478 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 09 13:34:54 crc kubenswrapper[4919]: I0109 13:34:54.692895 4919 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 09 13:34:54 crc kubenswrapper[4919]: I0109 13:34:54.779808 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 09 13:34:54 crc kubenswrapper[4919]: I0109 13:34:54.929042 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 09 13:34:54 crc kubenswrapper[4919]: I0109 13:34:54.934272 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 09 13:34:54 crc kubenswrapper[4919]: I0109 13:34:54.964690 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 09 13:34:55 crc kubenswrapper[4919]: I0109 13:34:55.005493 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 09 13:34:55 crc kubenswrapper[4919]: I0109 13:34:55.086634 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 09 13:34:55 crc kubenswrapper[4919]: I0109 13:34:55.177618 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 09 13:34:55 crc kubenswrapper[4919]: I0109 13:34:55.203833 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 09 13:34:55 crc kubenswrapper[4919]: I0109 13:34:55.203912 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 09 13:34:55 crc kubenswrapper[4919]: I0109 13:34:55.310830 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 09 13:34:55 crc kubenswrapper[4919]: I0109 13:34:55.403463 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 09 13:34:55 crc kubenswrapper[4919]: I0109 13:34:55.449432 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 09 13:34:55 crc kubenswrapper[4919]: I0109 13:34:55.482764 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 09 13:34:55 crc kubenswrapper[4919]: I0109 13:34:55.572714 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 09 13:34:55 crc kubenswrapper[4919]: I0109 13:34:55.606666 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 09 13:34:55 crc kubenswrapper[4919]: I0109 13:34:55.682417 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 09 13:34:55 crc kubenswrapper[4919]: I0109 13:34:55.806580 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 09 13:34:55 crc kubenswrapper[4919]: I0109 13:34:55.864238 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 09 13:34:56 crc kubenswrapper[4919]: I0109 13:34:56.002344 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 09 13:34:56 crc kubenswrapper[4919]: I0109 13:34:56.028413 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 09 13:34:56 crc kubenswrapper[4919]: I0109 13:34:56.058523 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 09 13:34:56 crc kubenswrapper[4919]: I0109 13:34:56.071250 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 09 13:34:56 crc kubenswrapper[4919]: I0109 13:34:56.153296 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 09 13:34:56 crc kubenswrapper[4919]: I0109 13:34:56.220926 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 09 13:34:56 crc kubenswrapper[4919]: I0109 13:34:56.330727 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 09 13:34:56 crc kubenswrapper[4919]: I0109 13:34:56.388882 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 09 13:34:56 crc kubenswrapper[4919]: I0109 13:34:56.399280 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 09 13:34:56 crc kubenswrapper[4919]: I0109 13:34:56.521311 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 09 13:34:56 crc kubenswrapper[4919]: I0109 13:34:56.637250 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Jan 09 13:34:56 crc kubenswrapper[4919]: I0109 13:34:56.660411 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Jan 09 13:34:56 crc kubenswrapper[4919]: I0109 13:34:56.668931 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 09 13:34:56 crc kubenswrapper[4919]: I0109 13:34:56.678156 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 09 13:34:56 crc kubenswrapper[4919]: I0109 13:34:56.717419 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 09 13:34:56 crc kubenswrapper[4919]: I0109 13:34:56.872312 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 09 13:34:56 crc kubenswrapper[4919]: I0109 13:34:56.874399 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 09 13:34:56 crc kubenswrapper[4919]: I0109 13:34:56.919197 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 09 13:34:56 crc kubenswrapper[4919]: I0109 13:34:56.991933 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 09 13:34:57 crc kubenswrapper[4919]: I0109 13:34:57.044017 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 09 13:34:57 crc kubenswrapper[4919]: I0109 13:34:57.213632 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 09 13:34:57 crc kubenswrapper[4919]: I0109 13:34:57.265054 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 09 13:34:57 crc kubenswrapper[4919]: I0109 13:34:57.401295 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 09 13:34:57 crc kubenswrapper[4919]: I0109 13:34:57.439606 4919 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 09 13:34:57 crc kubenswrapper[4919]: I0109 13:34:57.440172 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://d1d64207abd9195331feab345729908ba8fd3a4370f7ea74b73f339c6b065729" gracePeriod=5
Jan 09 13:34:57 crc kubenswrapper[4919]: I0109 13:34:57.489147 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 09 13:34:57 crc kubenswrapper[4919]: I0109 13:34:57.518784 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 09 13:34:57 crc kubenswrapper[4919]: I0109 13:34:57.711294 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 09 13:34:57 crc kubenswrapper[4919]: I0109 13:34:57.722599 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 09 13:34:57 crc kubenswrapper[4919]: I0109 13:34:57.878460 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-58d4f98775-zqpjh"]
Jan 09 13:34:57 crc kubenswrapper[4919]: E0109 13:34:57.879281 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 09 13:34:57 crc kubenswrapper[4919]: I0109 13:34:57.879471 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 09 13:34:57 crc kubenswrapper[4919]: E0109 13:34:57.879785 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d7b247c-486d-49ca-b26c-d25bca0471bc" containerName="installer"
Jan 09 13:34:57 crc kubenswrapper[4919]: I0109 13:34:57.879977 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d7b247c-486d-49ca-b26c-d25bca0471bc" containerName="installer"
Jan 09 13:34:57 crc kubenswrapper[4919]: E0109 13:34:57.880152 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403" containerName="oauth-openshift"
Jan 09 13:34:57 crc kubenswrapper[4919]: I0109 13:34:57.880354 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403" containerName="oauth-openshift"
Jan 09 13:34:57 crc kubenswrapper[4919]: I0109 13:34:57.880708 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d7b247c-486d-49ca-b26c-d25bca0471bc" containerName="installer"
Jan 09 13:34:57 crc kubenswrapper[4919]: I0109 13:34:57.880898 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 09 13:34:57 crc kubenswrapper[4919]: I0109 13:34:57.881061 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cb2a4be-fcb8-47d4-b3f6-17c80b5a8403" containerName="oauth-openshift"
Jan 09 13:34:57 crc kubenswrapper[4919]: I0109 13:34:57.882111 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-58d4f98775-zqpjh"
Jan 09 13:34:57 crc kubenswrapper[4919]: I0109 13:34:57.906669 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 09 13:34:57 crc kubenswrapper[4919]: I0109 13:34:57.906937 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 09 13:34:57 crc kubenswrapper[4919]: I0109 13:34:57.907051 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 09 13:34:57 crc kubenswrapper[4919]: I0109 13:34:57.907104 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 09 13:34:57 crc kubenswrapper[4919]: I0109 13:34:57.907120 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 09 13:34:57 crc kubenswrapper[4919]: I0109 13:34:57.908843 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 09 13:34:57 crc kubenswrapper[4919]: I0109 13:34:57.908915 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 09 13:34:57 crc kubenswrapper[4919]: I0109 13:34:57.909118 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 09 13:34:57 crc kubenswrapper[4919]: I0109 13:34:57.909276 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 09 13:34:57 crc kubenswrapper[4919]: I0109 13:34:57.909703 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 09 13:34:57 crc kubenswrapper[4919]: I0109 13:34:57.911745 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 09 13:34:57 crc kubenswrapper[4919]: I0109 13:34:57.915045 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 09 13:34:57 crc kubenswrapper[4919]: I0109 13:34:57.923546 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-58d4f98775-zqpjh"]
Jan 09 13:34:57 crc kubenswrapper[4919]: I0109 13:34:57.924287 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 09 13:34:57 crc kubenswrapper[4919]: I0109 13:34:57.930419 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 09 13:34:57 crc kubenswrapper[4919]: I0109 13:34:57.934537 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 09 13:34:57 crc kubenswrapper[4919]: I0109 13:34:57.943053 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 09 13:34:58 crc kubenswrapper[4919]: I0109 13:34:58.013204 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/890e39f1-d6f3-42e8-97d1-b65c3acd506e-audit-policies\") pod \"oauth-openshift-58d4f98775-zqpjh\" (UID: \"890e39f1-d6f3-42e8-97d1-b65c3acd506e\") " pod="openshift-authentication/oauth-openshift-58d4f98775-zqpjh"
\"kubernetes.io/configmap/890e39f1-d6f3-42e8-97d1-b65c3acd506e-audit-policies\") pod \"oauth-openshift-58d4f98775-zqpjh\" (UID: \"890e39f1-d6f3-42e8-97d1-b65c3acd506e\") " pod="openshift-authentication/oauth-openshift-58d4f98775-zqpjh" Jan 09 13:34:58 crc kubenswrapper[4919]: I0109 13:34:58.013266 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/890e39f1-d6f3-42e8-97d1-b65c3acd506e-v4-0-config-user-template-error\") pod \"oauth-openshift-58d4f98775-zqpjh\" (UID: \"890e39f1-d6f3-42e8-97d1-b65c3acd506e\") " pod="openshift-authentication/oauth-openshift-58d4f98775-zqpjh" Jan 09 13:34:58 crc kubenswrapper[4919]: I0109 13:34:58.013291 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/890e39f1-d6f3-42e8-97d1-b65c3acd506e-v4-0-config-system-session\") pod \"oauth-openshift-58d4f98775-zqpjh\" (UID: \"890e39f1-d6f3-42e8-97d1-b65c3acd506e\") " pod="openshift-authentication/oauth-openshift-58d4f98775-zqpjh" Jan 09 13:34:58 crc kubenswrapper[4919]: I0109 13:34:58.013308 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9l69\" (UniqueName: \"kubernetes.io/projected/890e39f1-d6f3-42e8-97d1-b65c3acd506e-kube-api-access-n9l69\") pod \"oauth-openshift-58d4f98775-zqpjh\" (UID: \"890e39f1-d6f3-42e8-97d1-b65c3acd506e\") " pod="openshift-authentication/oauth-openshift-58d4f98775-zqpjh" Jan 09 13:34:58 crc kubenswrapper[4919]: I0109 13:34:58.013330 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/890e39f1-d6f3-42e8-97d1-b65c3acd506e-v4-0-config-system-service-ca\") pod \"oauth-openshift-58d4f98775-zqpjh\" (UID: \"890e39f1-d6f3-42e8-97d1-b65c3acd506e\") " pod="openshift-authentication/oauth-openshift-58d4f98775-zqpjh" Jan 09 13:34:58 crc kubenswrapper[4919]: I0109 13:34:58.013346 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/890e39f1-d6f3-42e8-97d1-b65c3acd506e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-58d4f98775-zqpjh\" (UID: \"890e39f1-d6f3-42e8-97d1-b65c3acd506e\") " pod="openshift-authentication/oauth-openshift-58d4f98775-zqpjh" Jan 09 13:34:58 crc kubenswrapper[4919]: I0109 13:34:58.013493 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/890e39f1-d6f3-42e8-97d1-b65c3acd506e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-58d4f98775-zqpjh\" (UID: \"890e39f1-d6f3-42e8-97d1-b65c3acd506e\") " pod="openshift-authentication/oauth-openshift-58d4f98775-zqpjh" Jan 09 13:34:58 crc kubenswrapper[4919]: I0109 13:34:58.013734 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/890e39f1-d6f3-42e8-97d1-b65c3acd506e-audit-dir\") pod \"oauth-openshift-58d4f98775-zqpjh\" (UID: \"890e39f1-d6f3-42e8-97d1-b65c3acd506e\") " pod="openshift-authentication/oauth-openshift-58d4f98775-zqpjh" Jan 09 13:34:58 crc kubenswrapper[4919]: I0109 13:34:58.013798 4919 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/890e39f1-d6f3-42e8-97d1-b65c3acd506e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-58d4f98775-zqpjh\" (UID: \"890e39f1-d6f3-42e8-97d1-b65c3acd506e\") " pod="openshift-authentication/oauth-openshift-58d4f98775-zqpjh" Jan 09 13:34:58 crc kubenswrapper[4919]: I0109 13:34:58.013889 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/890e39f1-d6f3-42e8-97d1-b65c3acd506e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-58d4f98775-zqpjh\" (UID: \"890e39f1-d6f3-42e8-97d1-b65c3acd506e\") " pod="openshift-authentication/oauth-openshift-58d4f98775-zqpjh" Jan 09 13:34:58 crc kubenswrapper[4919]: I0109 13:34:58.013954 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/890e39f1-d6f3-42e8-97d1-b65c3acd506e-v4-0-config-system-router-certs\") pod \"oauth-openshift-58d4f98775-zqpjh\" (UID: \"890e39f1-d6f3-42e8-97d1-b65c3acd506e\") " pod="openshift-authentication/oauth-openshift-58d4f98775-zqpjh" Jan 09 13:34:58 crc kubenswrapper[4919]: I0109 13:34:58.013986 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/890e39f1-d6f3-42e8-97d1-b65c3acd506e-v4-0-config-user-template-login\") pod \"oauth-openshift-58d4f98775-zqpjh\" (UID: \"890e39f1-d6f3-42e8-97d1-b65c3acd506e\") " pod="openshift-authentication/oauth-openshift-58d4f98775-zqpjh" Jan 09 13:34:58 crc kubenswrapper[4919]: I0109 13:34:58.014016 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/890e39f1-d6f3-42e8-97d1-b65c3acd506e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-58d4f98775-zqpjh\" (UID: \"890e39f1-d6f3-42e8-97d1-b65c3acd506e\") " pod="openshift-authentication/oauth-openshift-58d4f98775-zqpjh" Jan 09 13:34:58 crc kubenswrapper[4919]: I0109 13:34:58.014040 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/890e39f1-d6f3-42e8-97d1-b65c3acd506e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-58d4f98775-zqpjh\" (UID: \"890e39f1-d6f3-42e8-97d1-b65c3acd506e\") " pod="openshift-authentication/oauth-openshift-58d4f98775-zqpjh" Jan 09 13:34:58 crc kubenswrapper[4919]: I0109 13:34:58.115240 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/890e39f1-d6f3-42e8-97d1-b65c3acd506e-audit-dir\") pod \"oauth-openshift-58d4f98775-zqpjh\" (UID: \"890e39f1-d6f3-42e8-97d1-b65c3acd506e\") " pod="openshift-authentication/oauth-openshift-58d4f98775-zqpjh" Jan 09 13:34:58 crc kubenswrapper[4919]: I0109 13:34:58.115349 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/890e39f1-d6f3-42e8-97d1-b65c3acd506e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-58d4f98775-zqpjh\" (UID: \"890e39f1-d6f3-42e8-97d1-b65c3acd506e\") " 
pod="openshift-authentication/oauth-openshift-58d4f98775-zqpjh" Jan 09 13:34:58 crc kubenswrapper[4919]: I0109 13:34:58.115380 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/890e39f1-d6f3-42e8-97d1-b65c3acd506e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-58d4f98775-zqpjh\" (UID: \"890e39f1-d6f3-42e8-97d1-b65c3acd506e\") " pod="openshift-authentication/oauth-openshift-58d4f98775-zqpjh" Jan 09 13:34:58 crc kubenswrapper[4919]: I0109 13:34:58.115401 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/890e39f1-d6f3-42e8-97d1-b65c3acd506e-v4-0-config-system-router-certs\") pod \"oauth-openshift-58d4f98775-zqpjh\" (UID: \"890e39f1-d6f3-42e8-97d1-b65c3acd506e\") " pod="openshift-authentication/oauth-openshift-58d4f98775-zqpjh" Jan 09 13:34:58 crc kubenswrapper[4919]: I0109 13:34:58.115425 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/890e39f1-d6f3-42e8-97d1-b65c3acd506e-v4-0-config-user-template-login\") pod \"oauth-openshift-58d4f98775-zqpjh\" (UID: \"890e39f1-d6f3-42e8-97d1-b65c3acd506e\") " pod="openshift-authentication/oauth-openshift-58d4f98775-zqpjh" Jan 09 13:34:58 crc kubenswrapper[4919]: I0109 13:34:58.115413 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/890e39f1-d6f3-42e8-97d1-b65c3acd506e-audit-dir\") pod \"oauth-openshift-58d4f98775-zqpjh\" (UID: \"890e39f1-d6f3-42e8-97d1-b65c3acd506e\") " pod="openshift-authentication/oauth-openshift-58d4f98775-zqpjh" Jan 09 13:34:58 crc kubenswrapper[4919]: I0109 13:34:58.115451 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/890e39f1-d6f3-42e8-97d1-b65c3acd506e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-58d4f98775-zqpjh\" (UID: \"890e39f1-d6f3-42e8-97d1-b65c3acd506e\") " pod="openshift-authentication/oauth-openshift-58d4f98775-zqpjh" Jan 09 13:34:58 crc kubenswrapper[4919]: I0109 13:34:58.115477 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/890e39f1-d6f3-42e8-97d1-b65c3acd506e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-58d4f98775-zqpjh\" (UID: \"890e39f1-d6f3-42e8-97d1-b65c3acd506e\") " pod="openshift-authentication/oauth-openshift-58d4f98775-zqpjh" Jan 09 13:34:58 crc kubenswrapper[4919]: I0109 13:34:58.115517 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/890e39f1-d6f3-42e8-97d1-b65c3acd506e-audit-policies\") pod \"oauth-openshift-58d4f98775-zqpjh\" (UID: \"890e39f1-d6f3-42e8-97d1-b65c3acd506e\") " pod="openshift-authentication/oauth-openshift-58d4f98775-zqpjh" Jan 09 13:34:58 crc kubenswrapper[4919]: I0109 13:34:58.115539 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/890e39f1-d6f3-42e8-97d1-b65c3acd506e-v4-0-config-user-template-error\") pod \"oauth-openshift-58d4f98775-zqpjh\" (UID: \"890e39f1-d6f3-42e8-97d1-b65c3acd506e\") " 
pod="openshift-authentication/oauth-openshift-58d4f98775-zqpjh" Jan 09 13:34:58 crc kubenswrapper[4919]: I0109 13:34:58.115563 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/890e39f1-d6f3-42e8-97d1-b65c3acd506e-v4-0-config-system-session\") pod \"oauth-openshift-58d4f98775-zqpjh\" (UID: \"890e39f1-d6f3-42e8-97d1-b65c3acd506e\") " pod="openshift-authentication/oauth-openshift-58d4f98775-zqpjh" Jan 09 13:34:58 crc kubenswrapper[4919]: I0109 13:34:58.115585 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9l69\" (UniqueName: \"kubernetes.io/projected/890e39f1-d6f3-42e8-97d1-b65c3acd506e-kube-api-access-n9l69\") pod \"oauth-openshift-58d4f98775-zqpjh\" (UID: \"890e39f1-d6f3-42e8-97d1-b65c3acd506e\") " pod="openshift-authentication/oauth-openshift-58d4f98775-zqpjh" Jan 09 13:34:58 crc kubenswrapper[4919]: I0109 13:34:58.115611 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/890e39f1-d6f3-42e8-97d1-b65c3acd506e-v4-0-config-system-service-ca\") pod \"oauth-openshift-58d4f98775-zqpjh\" (UID: \"890e39f1-d6f3-42e8-97d1-b65c3acd506e\") " pod="openshift-authentication/oauth-openshift-58d4f98775-zqpjh" Jan 09 13:34:58 crc kubenswrapper[4919]: I0109 13:34:58.115636 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/890e39f1-d6f3-42e8-97d1-b65c3acd506e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-58d4f98775-zqpjh\" (UID: \"890e39f1-d6f3-42e8-97d1-b65c3acd506e\") " pod="openshift-authentication/oauth-openshift-58d4f98775-zqpjh" Jan 09 13:34:58 crc kubenswrapper[4919]: I0109 13:34:58.115660 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/890e39f1-d6f3-42e8-97d1-b65c3acd506e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-58d4f98775-zqpjh\" (UID: \"890e39f1-d6f3-42e8-97d1-b65c3acd506e\") " pod="openshift-authentication/oauth-openshift-58d4f98775-zqpjh" Jan 09 13:34:58 crc kubenswrapper[4919]: I0109 13:34:58.117537 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/890e39f1-d6f3-42e8-97d1-b65c3acd506e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-58d4f98775-zqpjh\" (UID: \"890e39f1-d6f3-42e8-97d1-b65c3acd506e\") " pod="openshift-authentication/oauth-openshift-58d4f98775-zqpjh" Jan 09 13:34:58 crc kubenswrapper[4919]: I0109 13:34:58.127277 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/890e39f1-d6f3-42e8-97d1-b65c3acd506e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-58d4f98775-zqpjh\" (UID: \"890e39f1-d6f3-42e8-97d1-b65c3acd506e\") " pod="openshift-authentication/oauth-openshift-58d4f98775-zqpjh" Jan 09 13:34:58 crc kubenswrapper[4919]: I0109 13:34:58.127560 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/890e39f1-d6f3-42e8-97d1-b65c3acd506e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-58d4f98775-zqpjh\" (UID: \"890e39f1-d6f3-42e8-97d1-b65c3acd506e\") " 
pod="openshift-authentication/oauth-openshift-58d4f98775-zqpjh" Jan 09 13:34:58 crc kubenswrapper[4919]: I0109 13:34:58.128310 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/890e39f1-d6f3-42e8-97d1-b65c3acd506e-audit-policies\") pod \"oauth-openshift-58d4f98775-zqpjh\" (UID: \"890e39f1-d6f3-42e8-97d1-b65c3acd506e\") " pod="openshift-authentication/oauth-openshift-58d4f98775-zqpjh" Jan 09 13:34:58 crc kubenswrapper[4919]: I0109 13:34:58.128422 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/890e39f1-d6f3-42e8-97d1-b65c3acd506e-v4-0-config-system-service-ca\") pod \"oauth-openshift-58d4f98775-zqpjh\" (UID: \"890e39f1-d6f3-42e8-97d1-b65c3acd506e\") " pod="openshift-authentication/oauth-openshift-58d4f98775-zqpjh" Jan 09 13:34:58 crc kubenswrapper[4919]: I0109 13:34:58.128680 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/890e39f1-d6f3-42e8-97d1-b65c3acd506e-v4-0-config-user-template-error\") pod \"oauth-openshift-58d4f98775-zqpjh\" (UID: \"890e39f1-d6f3-42e8-97d1-b65c3acd506e\") " pod="openshift-authentication/oauth-openshift-58d4f98775-zqpjh" Jan 09 13:34:58 crc kubenswrapper[4919]: I0109 13:34:58.129546 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/890e39f1-d6f3-42e8-97d1-b65c3acd506e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-58d4f98775-zqpjh\" (UID: \"890e39f1-d6f3-42e8-97d1-b65c3acd506e\") " pod="openshift-authentication/oauth-openshift-58d4f98775-zqpjh" Jan 09 13:34:58 crc kubenswrapper[4919]: I0109 13:34:58.129983 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/890e39f1-d6f3-42e8-97d1-b65c3acd506e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-58d4f98775-zqpjh\" (UID: \"890e39f1-d6f3-42e8-97d1-b65c3acd506e\") " pod="openshift-authentication/oauth-openshift-58d4f98775-zqpjh" Jan 09 13:34:58 crc kubenswrapper[4919]: I0109 13:34:58.131107 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/890e39f1-d6f3-42e8-97d1-b65c3acd506e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-58d4f98775-zqpjh\" (UID: \"890e39f1-d6f3-42e8-97d1-b65c3acd506e\") " pod="openshift-authentication/oauth-openshift-58d4f98775-zqpjh" Jan 09 13:34:58 crc kubenswrapper[4919]: I0109 13:34:58.138135 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/890e39f1-d6f3-42e8-97d1-b65c3acd506e-v4-0-config-system-router-certs\") pod \"oauth-openshift-58d4f98775-zqpjh\" (UID: \"890e39f1-d6f3-42e8-97d1-b65c3acd506e\") " pod="openshift-authentication/oauth-openshift-58d4f98775-zqpjh" Jan 09 13:34:58 crc kubenswrapper[4919]: I0109 13:34:58.140359 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/890e39f1-d6f3-42e8-97d1-b65c3acd506e-v4-0-config-system-session\") pod \"oauth-openshift-58d4f98775-zqpjh\" (UID: \"890e39f1-d6f3-42e8-97d1-b65c3acd506e\") " pod="openshift-authentication/oauth-openshift-58d4f98775-zqpjh" Jan 09 
13:34:58 crc kubenswrapper[4919]: I0109 13:34:58.142582 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/890e39f1-d6f3-42e8-97d1-b65c3acd506e-v4-0-config-user-template-login\") pod \"oauth-openshift-58d4f98775-zqpjh\" (UID: \"890e39f1-d6f3-42e8-97d1-b65c3acd506e\") " pod="openshift-authentication/oauth-openshift-58d4f98775-zqpjh" Jan 09 13:34:58 crc kubenswrapper[4919]: I0109 13:34:58.167437 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 09 13:34:58 crc kubenswrapper[4919]: I0109 13:34:58.168012 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9l69\" (UniqueName: \"kubernetes.io/projected/890e39f1-d6f3-42e8-97d1-b65c3acd506e-kube-api-access-n9l69\") pod \"oauth-openshift-58d4f98775-zqpjh\" (UID: \"890e39f1-d6f3-42e8-97d1-b65c3acd506e\") " pod="openshift-authentication/oauth-openshift-58d4f98775-zqpjh" Jan 09 13:34:58 crc kubenswrapper[4919]: I0109 13:34:58.204153 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-58d4f98775-zqpjh" Jan 09 13:34:58 crc kubenswrapper[4919]: I0109 13:34:58.218954 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 09 13:34:58 crc kubenswrapper[4919]: I0109 13:34:58.236115 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 09 13:34:58 crc kubenswrapper[4919]: I0109 13:34:58.247880 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 09 13:34:58 crc kubenswrapper[4919]: I0109 13:34:58.306921 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 09 13:34:58 crc kubenswrapper[4919]: I0109 13:34:58.347498 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 09 13:34:58 crc kubenswrapper[4919]: I0109 13:34:58.381236 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 09 13:34:58 crc kubenswrapper[4919]: I0109 13:34:58.620039 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 09 13:34:58 crc kubenswrapper[4919]: I0109 13:34:58.629161 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-58d4f98775-zqpjh"] Jan 09 13:34:58 crc kubenswrapper[4919]: I0109 13:34:58.639881 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 09 13:34:58 crc kubenswrapper[4919]: I0109 13:34:58.704826 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 09 13:34:58 crc kubenswrapper[4919]: I0109 13:34:58.853660 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 09 13:34:58 crc kubenswrapper[4919]: I0109 13:34:58.965788 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-58d4f98775-zqpjh" 
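Each VerifyControllerAttachedVolume / MountVolume started / MountVolume.SetUp succeeded triple above corresponds to one volume in the oauth-openshift pod spec: the kubelet's volume manager first verifies the volume against desired state, then mounts it under /var/lib/kubelet/pods/<podUID>/volumes/<plugin>/<name>. A rough, hypothetical reconstruction of how those volume names map onto Kubernetes volume sources; the backing object names are inferred from the reflector lines above and the hostPath location is a placeholder, not taken from the actual OpenShift manifest:

    // Hypothetical reconstruction -- not the authoritative pod spec.
    package oauthspec

    import corev1 "k8s.io/api/core/v1"

    func volumes() []corev1.Volume {
    	return []corev1.Volume{
    		// ConfigMap-backed: appears in the log as
    		// kubernetes.io/configmap/<podUID>-audit-policies; presumably backed
    		// by the openshift-authentication/"audit" ConfigMap cached above.
    		{Name: "audit-policies", VolumeSource: corev1.VolumeSource{
    			ConfigMap: &corev1.ConfigMapVolumeSource{
    				LocalObjectReference: corev1.LocalObjectReference{Name: "audit"},
    			},
    		}},
    		// Secret-backed: mounted via kubernetes.io/secret/<podUID>-... paths.
    		{Name: "v4-0-config-system-session", VolumeSource: corev1.VolumeSource{
    			Secret: &corev1.SecretVolumeSource{SecretName: "v4-0-config-system-session"},
    		}},
    		// hostPath: nothing to materialize, which is why audit-dir's SetUp
    		// succeeds immediately in the log. Path here is a placeholder.
    		{Name: "audit-dir", VolumeSource: corev1.VolumeSource{
    			HostPath: &corev1.HostPathVolumeSource{Path: "/var/log/oauth-server"},
    		}},
    		// "kube-api-access-n9l69" is the kubelet-generated projected
    		// service-account-token volume; it goes through the same flow.
    	}
    }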
event={"ID":"890e39f1-d6f3-42e8-97d1-b65c3acd506e","Type":"ContainerStarted","Data":"c111fc5c0928968a1767525b49bb17b326e323741fc68b3b2eb467fedee854bf"} Jan 09 13:34:58 crc kubenswrapper[4919]: I0109 13:34:58.965839 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-58d4f98775-zqpjh" event={"ID":"890e39f1-d6f3-42e8-97d1-b65c3acd506e","Type":"ContainerStarted","Data":"c34a6520aad621f1437566c55bb5d79a6c49c31df2c7f33edb00f27cec807320"} Jan 09 13:34:58 crc kubenswrapper[4919]: I0109 13:34:58.966771 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-58d4f98775-zqpjh" Jan 09 13:34:58 crc kubenswrapper[4919]: I0109 13:34:58.997411 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-58d4f98775-zqpjh" podStartSLOduration=46.997391875 podStartE2EDuration="46.997391875s" podCreationTimestamp="2026-01-09 13:34:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:34:58.993244625 +0000 UTC m=+278.541084085" watchObservedRunningTime="2026-01-09 13:34:58.997391875 +0000 UTC m=+278.545231325" Jan 09 13:34:59 crc kubenswrapper[4919]: I0109 13:34:59.071414 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 09 13:34:59 crc kubenswrapper[4919]: I0109 13:34:59.091677 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 09 13:34:59 crc kubenswrapper[4919]: I0109 13:34:59.105383 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 09 13:34:59 crc kubenswrapper[4919]: I0109 13:34:59.111142 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 09 13:34:59 crc kubenswrapper[4919]: I0109 13:34:59.182863 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 09 13:34:59 crc kubenswrapper[4919]: I0109 13:34:59.267517 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 09 13:34:59 crc kubenswrapper[4919]: I0109 13:34:59.318377 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 09 13:34:59 crc kubenswrapper[4919]: I0109 13:34:59.322106 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-58d4f98775-zqpjh" Jan 09 13:34:59 crc kubenswrapper[4919]: I0109 13:34:59.390864 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 09 13:34:59 crc kubenswrapper[4919]: I0109 13:34:59.453933 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 09 13:34:59 crc kubenswrapper[4919]: I0109 13:34:59.550957 4919 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 09 13:34:59 crc kubenswrapper[4919]: I0109 13:34:59.627409 4919 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 09 13:34:59 crc kubenswrapper[4919]: I0109 13:34:59.727174 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 09 13:34:59 crc kubenswrapper[4919]: I0109 13:34:59.984704 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 09 13:35:00 crc kubenswrapper[4919]: I0109 13:35:00.009297 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 09 13:35:00 crc kubenswrapper[4919]: I0109 13:35:00.037693 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 09 13:35:00 crc kubenswrapper[4919]: I0109 13:35:00.113363 4919 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 09 13:35:00 crc kubenswrapper[4919]: I0109 13:35:00.121134 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 09 13:35:00 crc kubenswrapper[4919]: I0109 13:35:00.183154 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 09 13:35:00 crc kubenswrapper[4919]: I0109 13:35:00.250241 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 09 13:35:00 crc kubenswrapper[4919]: I0109 13:35:00.432032 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 09 13:35:00 crc kubenswrapper[4919]: I0109 13:35:00.463336 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 09 13:35:00 crc kubenswrapper[4919]: I0109 13:35:00.574159 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 09 13:35:00 crc kubenswrapper[4919]: I0109 13:35:00.598753 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 09 13:35:00 crc kubenswrapper[4919]: I0109 13:35:00.695921 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 09 13:35:00 crc kubenswrapper[4919]: I0109 13:35:00.717888 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 09 13:35:00 crc kubenswrapper[4919]: I0109 13:35:00.802377 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 09 13:35:00 crc kubenswrapper[4919]: I0109 13:35:00.873134 4919 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 09 13:35:00 crc kubenswrapper[4919]: I0109 13:35:00.946966 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 09 13:35:01 crc kubenswrapper[4919]: I0109 13:35:01.057425 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 09 13:35:01 crc kubenswrapper[4919]: I0109 13:35:01.149713 4919 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 09 13:35:01 crc kubenswrapper[4919]: I0109 13:35:01.164353 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 09 13:35:01 crc kubenswrapper[4919]: I0109 13:35:01.252989 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 09 13:35:01 crc kubenswrapper[4919]: I0109 13:35:01.254330 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 09 13:35:01 crc kubenswrapper[4919]: I0109 13:35:01.437107 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 09 13:35:01 crc kubenswrapper[4919]: I0109 13:35:01.476041 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 09 13:35:01 crc kubenswrapper[4919]: I0109 13:35:01.486263 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 09 13:35:01 crc kubenswrapper[4919]: I0109 13:35:01.748453 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 09 13:35:01 crc kubenswrapper[4919]: I0109 13:35:01.796335 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 09 13:35:02 crc kubenswrapper[4919]: I0109 13:35:02.014365 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 09 13:35:02 crc kubenswrapper[4919]: I0109 13:35:02.121140 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 09 13:35:02 crc kubenswrapper[4919]: I0109 13:35:02.131759 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 09 13:35:02 crc kubenswrapper[4919]: I0109 13:35:02.134555 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 09 13:35:02 crc kubenswrapper[4919]: I0109 13:35:02.262059 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 09 13:35:02 crc kubenswrapper[4919]: I0109 13:35:02.264620 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 09 13:35:02 crc kubenswrapper[4919]: I0109 13:35:02.724014 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 09 13:35:02 crc kubenswrapper[4919]: I0109 13:35:02.745177 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 09 13:35:02 crc kubenswrapper[4919]: I0109 13:35:02.863909 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 09 13:35:03 crc kubenswrapper[4919]: I0109 13:35:03.011121 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 
Jan 09 13:35:03 crc kubenswrapper[4919]: I0109 13:35:03.011281 4919 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="d1d64207abd9195331feab345729908ba8fd3a4370f7ea74b73f339c6b065729" exitCode=137
Jan 09 13:35:03 crc kubenswrapper[4919]: I0109 13:35:03.011382 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="056ed8b50fa0021009939709b2c5d5d60114d803ace9bbeb52931a76b711e5e7"
Jan 09 13:35:03 crc kubenswrapper[4919]: I0109 13:35:03.068814 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 09 13:35:03 crc kubenswrapper[4919]: I0109 13:35:03.068983 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 09 13:35:03 crc kubenswrapper[4919]: I0109 13:35:03.154991 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 09 13:35:03 crc kubenswrapper[4919]: I0109 13:35:03.227744 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 09 13:35:03 crc kubenswrapper[4919]: I0109 13:35:03.227810 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 09 13:35:03 crc kubenswrapper[4919]: I0109 13:35:03.227864 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 09 13:35:03 crc kubenswrapper[4919]: I0109 13:35:03.227888 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 09 13:35:03 crc kubenswrapper[4919]: I0109 13:35:03.227979 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 09 13:35:03 crc kubenswrapper[4919]: I0109 13:35:03.228058 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 09 13:35:03 crc kubenswrapper[4919]: I0109 13:35:03.228133 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 09 13:35:03 crc kubenswrapper[4919]: I0109 13:35:03.228170 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 09 13:35:03 crc kubenswrapper[4919]: I0109 13:35:03.228238 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 09 13:35:03 crc kubenswrapper[4919]: I0109 13:35:03.228552 4919 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Jan 09 13:35:03 crc kubenswrapper[4919]: I0109 13:35:03.228574 4919 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Jan 09 13:35:03 crc kubenswrapper[4919]: I0109 13:35:03.228595 4919 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Jan 09 13:35:03 crc kubenswrapper[4919]: I0109 13:35:03.228618 4919 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 09 13:35:03 crc kubenswrapper[4919]: I0109 13:35:03.239750 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 09 13:35:03 crc kubenswrapper[4919]: I0109 13:35:03.330338 4919 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 09 13:35:03 crc kubenswrapper[4919]: I0109 13:35:03.374655 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 09 13:35:03 crc kubenswrapper[4919]: I0109 13:35:03.439287 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Jan 09 13:35:03 crc kubenswrapper[4919]: I0109 13:35:03.824771 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 09 13:35:04 crc kubenswrapper[4919]: I0109 13:35:04.019436 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 09 13:35:04 crc kubenswrapper[4919]: I0109 13:35:04.230734 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 09 13:35:04 crc kubenswrapper[4919]: I0109 13:35:04.766034 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Jan 09 13:35:04 crc kubenswrapper[4919]: I0109 13:35:04.766556 4919 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID=""
Jan 09 13:35:04 crc kubenswrapper[4919]: I0109 13:35:04.785661 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 09 13:35:04 crc kubenswrapper[4919]: I0109 13:35:04.785730 4919 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="0f70ad77-a612-4ff8-84d3-6c8910a92136"
Jan 09 13:35:04 crc kubenswrapper[4919]: I0109 13:35:04.791869 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 09 13:35:04 crc kubenswrapper[4919]: I0109 13:35:04.791933 4919 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="0f70ad77-a612-4ff8-84d3-6c8910a92136"
Jan 09 13:35:06 crc kubenswrapper[4919]: I0109 13:35:06.070501 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-59d48cf488-dxjfh"]
Jan 09 13:35:06 crc kubenswrapper[4919]: I0109 13:35:06.071573 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-59d48cf488-dxjfh" podUID="c0f3545d-bbcd-42cc-82be-92ea04dc6bbc" containerName="controller-manager" containerID="cri-o://2177a6a18e0948f43971e603256f05d06207f5e442de01b47ec28c8bbe579030" gracePeriod=30
Jan 09 13:35:06 crc kubenswrapper[4919]: I0109 13:35:06.150984 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bcc8b7969-qxdrx"]
Jan 09 13:35:06 crc kubenswrapper[4919]: I0109 13:35:06.151228 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7bcc8b7969-qxdrx" podUID="d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7" containerName="route-controller-manager" containerID="cri-o://86b6d4285e049794e69837e50f9b07aaa9dd1002aa43883443c11fb8a359f4d5" gracePeriod=30
pod="openshift-route-controller-manager/route-controller-manager-7bcc8b7969-qxdrx" podUID="d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7" containerName="route-controller-manager" containerID="cri-o://86b6d4285e049794e69837e50f9b07aaa9dd1002aa43883443c11fb8a359f4d5" gracePeriod=30 Jan 09 13:35:06 crc kubenswrapper[4919]: I0109 13:35:06.592579 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-59d48cf488-dxjfh" Jan 09 13:35:06 crc kubenswrapper[4919]: I0109 13:35:06.599526 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bcc8b7969-qxdrx" Jan 09 13:35:06 crc kubenswrapper[4919]: I0109 13:35:06.688899 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9sml\" (UniqueName: \"kubernetes.io/projected/d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7-kube-api-access-d9sml\") pod \"d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7\" (UID: \"d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7\") " Jan 09 13:35:06 crc kubenswrapper[4919]: I0109 13:35:06.689336 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7-client-ca\") pod \"d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7\" (UID: \"d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7\") " Jan 09 13:35:06 crc kubenswrapper[4919]: I0109 13:35:06.689378 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0f3545d-bbcd-42cc-82be-92ea04dc6bbc-config\") pod \"c0f3545d-bbcd-42cc-82be-92ea04dc6bbc\" (UID: \"c0f3545d-bbcd-42cc-82be-92ea04dc6bbc\") " Jan 09 13:35:06 crc kubenswrapper[4919]: I0109 13:35:06.689440 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svfj2\" (UniqueName: \"kubernetes.io/projected/c0f3545d-bbcd-42cc-82be-92ea04dc6bbc-kube-api-access-svfj2\") pod \"c0f3545d-bbcd-42cc-82be-92ea04dc6bbc\" (UID: \"c0f3545d-bbcd-42cc-82be-92ea04dc6bbc\") " Jan 09 13:35:06 crc kubenswrapper[4919]: I0109 13:35:06.689468 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c0f3545d-bbcd-42cc-82be-92ea04dc6bbc-proxy-ca-bundles\") pod \"c0f3545d-bbcd-42cc-82be-92ea04dc6bbc\" (UID: \"c0f3545d-bbcd-42cc-82be-92ea04dc6bbc\") " Jan 09 13:35:06 crc kubenswrapper[4919]: I0109 13:35:06.689501 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7-serving-cert\") pod \"d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7\" (UID: \"d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7\") " Jan 09 13:35:06 crc kubenswrapper[4919]: I0109 13:35:06.689541 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7-config\") pod \"d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7\" (UID: \"d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7\") " Jan 09 13:35:06 crc kubenswrapper[4919]: I0109 13:35:06.689578 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c0f3545d-bbcd-42cc-82be-92ea04dc6bbc-client-ca\") pod \"c0f3545d-bbcd-42cc-82be-92ea04dc6bbc\" (UID: \"c0f3545d-bbcd-42cc-82be-92ea04dc6bbc\") " Jan 09 13:35:06 crc 
kubenswrapper[4919]: I0109 13:35:06.689611 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0f3545d-bbcd-42cc-82be-92ea04dc6bbc-serving-cert\") pod \"c0f3545d-bbcd-42cc-82be-92ea04dc6bbc\" (UID: \"c0f3545d-bbcd-42cc-82be-92ea04dc6bbc\") " Jan 09 13:35:06 crc kubenswrapper[4919]: I0109 13:35:06.691148 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0f3545d-bbcd-42cc-82be-92ea04dc6bbc-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c0f3545d-bbcd-42cc-82be-92ea04dc6bbc" (UID: "c0f3545d-bbcd-42cc-82be-92ea04dc6bbc"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:35:06 crc kubenswrapper[4919]: I0109 13:35:06.691203 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0f3545d-bbcd-42cc-82be-92ea04dc6bbc-config" (OuterVolumeSpecName: "config") pod "c0f3545d-bbcd-42cc-82be-92ea04dc6bbc" (UID: "c0f3545d-bbcd-42cc-82be-92ea04dc6bbc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:35:06 crc kubenswrapper[4919]: I0109 13:35:06.691619 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7-client-ca" (OuterVolumeSpecName: "client-ca") pod "d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7" (UID: "d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:35:06 crc kubenswrapper[4919]: I0109 13:35:06.692041 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7-config" (OuterVolumeSpecName: "config") pod "d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7" (UID: "d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:35:06 crc kubenswrapper[4919]: I0109 13:35:06.692406 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0f3545d-bbcd-42cc-82be-92ea04dc6bbc-client-ca" (OuterVolumeSpecName: "client-ca") pod "c0f3545d-bbcd-42cc-82be-92ea04dc6bbc" (UID: "c0f3545d-bbcd-42cc-82be-92ea04dc6bbc"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:35:06 crc kubenswrapper[4919]: I0109 13:35:06.699111 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0f3545d-bbcd-42cc-82be-92ea04dc6bbc-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c0f3545d-bbcd-42cc-82be-92ea04dc6bbc" (UID: "c0f3545d-bbcd-42cc-82be-92ea04dc6bbc"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:35:06 crc kubenswrapper[4919]: I0109 13:35:06.704806 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7" (UID: "d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:35:06 crc kubenswrapper[4919]: I0109 13:35:06.704845 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0f3545d-bbcd-42cc-82be-92ea04dc6bbc-kube-api-access-svfj2" (OuterVolumeSpecName: "kube-api-access-svfj2") pod "c0f3545d-bbcd-42cc-82be-92ea04dc6bbc" (UID: "c0f3545d-bbcd-42cc-82be-92ea04dc6bbc"). InnerVolumeSpecName "kube-api-access-svfj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:35:06 crc kubenswrapper[4919]: I0109 13:35:06.704880 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7-kube-api-access-d9sml" (OuterVolumeSpecName: "kube-api-access-d9sml") pod "d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7" (UID: "d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7"). InnerVolumeSpecName "kube-api-access-d9sml". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:35:06 crc kubenswrapper[4919]: I0109 13:35:06.791455 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9sml\" (UniqueName: \"kubernetes.io/projected/d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7-kube-api-access-d9sml\") on node \"crc\" DevicePath \"\"" Jan 09 13:35:06 crc kubenswrapper[4919]: I0109 13:35:06.791498 4919 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7-client-ca\") on node \"crc\" DevicePath \"\"" Jan 09 13:35:06 crc kubenswrapper[4919]: I0109 13:35:06.791511 4919 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0f3545d-bbcd-42cc-82be-92ea04dc6bbc-config\") on node \"crc\" DevicePath \"\"" Jan 09 13:35:06 crc kubenswrapper[4919]: I0109 13:35:06.791525 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svfj2\" (UniqueName: \"kubernetes.io/projected/c0f3545d-bbcd-42cc-82be-92ea04dc6bbc-kube-api-access-svfj2\") on node \"crc\" DevicePath \"\"" Jan 09 13:35:06 crc kubenswrapper[4919]: I0109 13:35:06.791536 4919 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c0f3545d-bbcd-42cc-82be-92ea04dc6bbc-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 09 13:35:06 crc kubenswrapper[4919]: I0109 13:35:06.791546 4919 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 13:35:06 crc kubenswrapper[4919]: I0109 13:35:06.791556 4919 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7-config\") on node \"crc\" DevicePath \"\"" Jan 09 13:35:06 crc kubenswrapper[4919]: I0109 13:35:06.791567 4919 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c0f3545d-bbcd-42cc-82be-92ea04dc6bbc-client-ca\") on node \"crc\" DevicePath \"\"" Jan 09 13:35:06 crc kubenswrapper[4919]: I0109 13:35:06.791578 4919 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0f3545d-bbcd-42cc-82be-92ea04dc6bbc-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 13:35:07 crc kubenswrapper[4919]: I0109 13:35:07.051107 4919 generic.go:334] "Generic (PLEG): container finished" podID="d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7" 
containerID="86b6d4285e049794e69837e50f9b07aaa9dd1002aa43883443c11fb8a359f4d5" exitCode=0 Jan 09 13:35:07 crc kubenswrapper[4919]: I0109 13:35:07.051178 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bcc8b7969-qxdrx" Jan 09 13:35:07 crc kubenswrapper[4919]: I0109 13:35:07.051240 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bcc8b7969-qxdrx" event={"ID":"d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7","Type":"ContainerDied","Data":"86b6d4285e049794e69837e50f9b07aaa9dd1002aa43883443c11fb8a359f4d5"} Jan 09 13:35:07 crc kubenswrapper[4919]: I0109 13:35:07.051306 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bcc8b7969-qxdrx" event={"ID":"d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7","Type":"ContainerDied","Data":"2bf57b016ba4ad6173d7702030100e7cda337cc55b9b8d66baddd98c5cfb0b61"} Jan 09 13:35:07 crc kubenswrapper[4919]: I0109 13:35:07.051339 4919 scope.go:117] "RemoveContainer" containerID="86b6d4285e049794e69837e50f9b07aaa9dd1002aa43883443c11fb8a359f4d5" Jan 09 13:35:07 crc kubenswrapper[4919]: I0109 13:35:07.056776 4919 generic.go:334] "Generic (PLEG): container finished" podID="c0f3545d-bbcd-42cc-82be-92ea04dc6bbc" containerID="2177a6a18e0948f43971e603256f05d06207f5e442de01b47ec28c8bbe579030" exitCode=0 Jan 09 13:35:07 crc kubenswrapper[4919]: I0109 13:35:07.056826 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59d48cf488-dxjfh" event={"ID":"c0f3545d-bbcd-42cc-82be-92ea04dc6bbc","Type":"ContainerDied","Data":"2177a6a18e0948f43971e603256f05d06207f5e442de01b47ec28c8bbe579030"} Jan 09 13:35:07 crc kubenswrapper[4919]: I0109 13:35:07.056867 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59d48cf488-dxjfh" event={"ID":"c0f3545d-bbcd-42cc-82be-92ea04dc6bbc","Type":"ContainerDied","Data":"110943cf4a9a5ba7c62178f34ee5fcf32c6e9eefba5cef10b0e6845413ca467e"} Jan 09 13:35:07 crc kubenswrapper[4919]: I0109 13:35:07.056935 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-59d48cf488-dxjfh" Jan 09 13:35:07 crc kubenswrapper[4919]: I0109 13:35:07.077775 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bcc8b7969-qxdrx"] Jan 09 13:35:07 crc kubenswrapper[4919]: I0109 13:35:07.078925 4919 scope.go:117] "RemoveContainer" containerID="86b6d4285e049794e69837e50f9b07aaa9dd1002aa43883443c11fb8a359f4d5" Jan 09 13:35:07 crc kubenswrapper[4919]: E0109 13:35:07.081380 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86b6d4285e049794e69837e50f9b07aaa9dd1002aa43883443c11fb8a359f4d5\": container with ID starting with 86b6d4285e049794e69837e50f9b07aaa9dd1002aa43883443c11fb8a359f4d5 not found: ID does not exist" containerID="86b6d4285e049794e69837e50f9b07aaa9dd1002aa43883443c11fb8a359f4d5" Jan 09 13:35:07 crc kubenswrapper[4919]: I0109 13:35:07.081449 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86b6d4285e049794e69837e50f9b07aaa9dd1002aa43883443c11fb8a359f4d5"} err="failed to get container status \"86b6d4285e049794e69837e50f9b07aaa9dd1002aa43883443c11fb8a359f4d5\": rpc error: code = NotFound desc = could not find container \"86b6d4285e049794e69837e50f9b07aaa9dd1002aa43883443c11fb8a359f4d5\": container with ID starting with 86b6d4285e049794e69837e50f9b07aaa9dd1002aa43883443c11fb8a359f4d5 not found: ID does not exist" Jan 09 13:35:07 crc kubenswrapper[4919]: I0109 13:35:07.081492 4919 scope.go:117] "RemoveContainer" containerID="2177a6a18e0948f43971e603256f05d06207f5e442de01b47ec28c8bbe579030" Jan 09 13:35:07 crc kubenswrapper[4919]: I0109 13:35:07.088025 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bcc8b7969-qxdrx"] Jan 09 13:35:07 crc kubenswrapper[4919]: I0109 13:35:07.095592 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-59d48cf488-dxjfh"] Jan 09 13:35:07 crc kubenswrapper[4919]: I0109 13:35:07.102289 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-59d48cf488-dxjfh"] Jan 09 13:35:07 crc kubenswrapper[4919]: I0109 13:35:07.105672 4919 scope.go:117] "RemoveContainer" containerID="2177a6a18e0948f43971e603256f05d06207f5e442de01b47ec28c8bbe579030" Jan 09 13:35:07 crc kubenswrapper[4919]: E0109 13:35:07.106962 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2177a6a18e0948f43971e603256f05d06207f5e442de01b47ec28c8bbe579030\": container with ID starting with 2177a6a18e0948f43971e603256f05d06207f5e442de01b47ec28c8bbe579030 not found: ID does not exist" containerID="2177a6a18e0948f43971e603256f05d06207f5e442de01b47ec28c8bbe579030" Jan 09 13:35:07 crc kubenswrapper[4919]: I0109 13:35:07.107016 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2177a6a18e0948f43971e603256f05d06207f5e442de01b47ec28c8bbe579030"} err="failed to get container status \"2177a6a18e0948f43971e603256f05d06207f5e442de01b47ec28c8bbe579030\": rpc error: code = NotFound desc = could not find container \"2177a6a18e0948f43971e603256f05d06207f5e442de01b47ec28c8bbe579030\": container with ID starting with 2177a6a18e0948f43971e603256f05d06207f5e442de01b47ec28c8bbe579030 not found: ID does not exist" Jan 09 13:35:07 crc 
kubenswrapper[4919]: I0109 13:35:07.882346 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fb495f6db-8v4vm"] Jan 09 13:35:07 crc kubenswrapper[4919]: E0109 13:35:07.882872 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0f3545d-bbcd-42cc-82be-92ea04dc6bbc" containerName="controller-manager" Jan 09 13:35:07 crc kubenswrapper[4919]: I0109 13:35:07.882909 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0f3545d-bbcd-42cc-82be-92ea04dc6bbc" containerName="controller-manager" Jan 09 13:35:07 crc kubenswrapper[4919]: E0109 13:35:07.882941 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7" containerName="route-controller-manager" Jan 09 13:35:07 crc kubenswrapper[4919]: I0109 13:35:07.882954 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7" containerName="route-controller-manager" Jan 09 13:35:07 crc kubenswrapper[4919]: I0109 13:35:07.883133 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0f3545d-bbcd-42cc-82be-92ea04dc6bbc" containerName="controller-manager" Jan 09 13:35:07 crc kubenswrapper[4919]: I0109 13:35:07.883159 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7" containerName="route-controller-manager" Jan 09 13:35:07 crc kubenswrapper[4919]: I0109 13:35:07.883966 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6fb495f6db-8v4vm" Jan 09 13:35:07 crc kubenswrapper[4919]: I0109 13:35:07.886883 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 09 13:35:07 crc kubenswrapper[4919]: I0109 13:35:07.887272 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 09 13:35:07 crc kubenswrapper[4919]: I0109 13:35:07.887303 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 09 13:35:07 crc kubenswrapper[4919]: I0109 13:35:07.887679 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 09 13:35:07 crc kubenswrapper[4919]: I0109 13:35:07.889753 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 09 13:35:07 crc kubenswrapper[4919]: I0109 13:35:07.889862 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 09 13:35:07 crc kubenswrapper[4919]: I0109 13:35:07.893515 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6d7d68875d-xtv77"] Jan 09 13:35:07 crc kubenswrapper[4919]: I0109 13:35:07.894863 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6d7d68875d-xtv77" Jan 09 13:35:07 crc kubenswrapper[4919]: I0109 13:35:07.897044 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 09 13:35:07 crc kubenswrapper[4919]: I0109 13:35:07.897387 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 09 13:35:07 crc kubenswrapper[4919]: I0109 13:35:07.899396 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 09 13:35:07 crc kubenswrapper[4919]: I0109 13:35:07.899475 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 09 13:35:07 crc kubenswrapper[4919]: I0109 13:35:07.899613 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 09 13:35:07 crc kubenswrapper[4919]: I0109 13:35:07.899625 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 09 13:35:07 crc kubenswrapper[4919]: I0109 13:35:07.907681 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 09 13:35:07 crc kubenswrapper[4919]: I0109 13:35:07.909915 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6d7d68875d-xtv77"] Jan 09 13:35:07 crc kubenswrapper[4919]: I0109 13:35:07.914875 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fb495f6db-8v4vm"] Jan 09 13:35:08 crc kubenswrapper[4919]: I0109 13:35:08.009832 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bfd34ad-959e-4acd-ade0-0b6005b680f6-serving-cert\") pod \"route-controller-manager-6fb495f6db-8v4vm\" (UID: \"9bfd34ad-959e-4acd-ade0-0b6005b680f6\") " pod="openshift-route-controller-manager/route-controller-manager-6fb495f6db-8v4vm" Jan 09 13:35:08 crc kubenswrapper[4919]: I0109 13:35:08.009895 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54xkj\" (UniqueName: \"kubernetes.io/projected/dae65a5f-4bff-4961-8beb-52271c4eab3c-kube-api-access-54xkj\") pod \"controller-manager-6d7d68875d-xtv77\" (UID: \"dae65a5f-4bff-4961-8beb-52271c4eab3c\") " pod="openshift-controller-manager/controller-manager-6d7d68875d-xtv77" Jan 09 13:35:08 crc kubenswrapper[4919]: I0109 13:35:08.009934 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68cgg\" (UniqueName: \"kubernetes.io/projected/9bfd34ad-959e-4acd-ade0-0b6005b680f6-kube-api-access-68cgg\") pod \"route-controller-manager-6fb495f6db-8v4vm\" (UID: \"9bfd34ad-959e-4acd-ade0-0b6005b680f6\") " pod="openshift-route-controller-manager/route-controller-manager-6fb495f6db-8v4vm" Jan 09 13:35:08 crc kubenswrapper[4919]: I0109 13:35:08.010029 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dae65a5f-4bff-4961-8beb-52271c4eab3c-client-ca\") pod \"controller-manager-6d7d68875d-xtv77\" (UID: \"dae65a5f-4bff-4961-8beb-52271c4eab3c\") " 
pod="openshift-controller-manager/controller-manager-6d7d68875d-xtv77" Jan 09 13:35:08 crc kubenswrapper[4919]: I0109 13:35:08.010054 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dae65a5f-4bff-4961-8beb-52271c4eab3c-serving-cert\") pod \"controller-manager-6d7d68875d-xtv77\" (UID: \"dae65a5f-4bff-4961-8beb-52271c4eab3c\") " pod="openshift-controller-manager/controller-manager-6d7d68875d-xtv77" Jan 09 13:35:08 crc kubenswrapper[4919]: I0109 13:35:08.010082 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dae65a5f-4bff-4961-8beb-52271c4eab3c-proxy-ca-bundles\") pod \"controller-manager-6d7d68875d-xtv77\" (UID: \"dae65a5f-4bff-4961-8beb-52271c4eab3c\") " pod="openshift-controller-manager/controller-manager-6d7d68875d-xtv77" Jan 09 13:35:08 crc kubenswrapper[4919]: I0109 13:35:08.010100 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dae65a5f-4bff-4961-8beb-52271c4eab3c-config\") pod \"controller-manager-6d7d68875d-xtv77\" (UID: \"dae65a5f-4bff-4961-8beb-52271c4eab3c\") " pod="openshift-controller-manager/controller-manager-6d7d68875d-xtv77" Jan 09 13:35:08 crc kubenswrapper[4919]: I0109 13:35:08.010123 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bfd34ad-959e-4acd-ade0-0b6005b680f6-config\") pod \"route-controller-manager-6fb495f6db-8v4vm\" (UID: \"9bfd34ad-959e-4acd-ade0-0b6005b680f6\") " pod="openshift-route-controller-manager/route-controller-manager-6fb495f6db-8v4vm" Jan 09 13:35:08 crc kubenswrapper[4919]: I0109 13:35:08.010143 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9bfd34ad-959e-4acd-ade0-0b6005b680f6-client-ca\") pod \"route-controller-manager-6fb495f6db-8v4vm\" (UID: \"9bfd34ad-959e-4acd-ade0-0b6005b680f6\") " pod="openshift-route-controller-manager/route-controller-manager-6fb495f6db-8v4vm" Jan 09 13:35:08 crc kubenswrapper[4919]: I0109 13:35:08.112114 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bfd34ad-959e-4acd-ade0-0b6005b680f6-config\") pod \"route-controller-manager-6fb495f6db-8v4vm\" (UID: \"9bfd34ad-959e-4acd-ade0-0b6005b680f6\") " pod="openshift-route-controller-manager/route-controller-manager-6fb495f6db-8v4vm" Jan 09 13:35:08 crc kubenswrapper[4919]: I0109 13:35:08.112200 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9bfd34ad-959e-4acd-ade0-0b6005b680f6-client-ca\") pod \"route-controller-manager-6fb495f6db-8v4vm\" (UID: \"9bfd34ad-959e-4acd-ade0-0b6005b680f6\") " pod="openshift-route-controller-manager/route-controller-manager-6fb495f6db-8v4vm" Jan 09 13:35:08 crc kubenswrapper[4919]: I0109 13:35:08.112466 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bfd34ad-959e-4acd-ade0-0b6005b680f6-serving-cert\") pod \"route-controller-manager-6fb495f6db-8v4vm\" (UID: \"9bfd34ad-959e-4acd-ade0-0b6005b680f6\") " pod="openshift-route-controller-manager/route-controller-manager-6fb495f6db-8v4vm" 
Jan 09 13:35:08 crc kubenswrapper[4919]: I0109 13:35:08.112668 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54xkj\" (UniqueName: \"kubernetes.io/projected/dae65a5f-4bff-4961-8beb-52271c4eab3c-kube-api-access-54xkj\") pod \"controller-manager-6d7d68875d-xtv77\" (UID: \"dae65a5f-4bff-4961-8beb-52271c4eab3c\") " pod="openshift-controller-manager/controller-manager-6d7d68875d-xtv77" Jan 09 13:35:08 crc kubenswrapper[4919]: I0109 13:35:08.112732 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68cgg\" (UniqueName: \"kubernetes.io/projected/9bfd34ad-959e-4acd-ade0-0b6005b680f6-kube-api-access-68cgg\") pod \"route-controller-manager-6fb495f6db-8v4vm\" (UID: \"9bfd34ad-959e-4acd-ade0-0b6005b680f6\") " pod="openshift-route-controller-manager/route-controller-manager-6fb495f6db-8v4vm" Jan 09 13:35:08 crc kubenswrapper[4919]: I0109 13:35:08.112800 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dae65a5f-4bff-4961-8beb-52271c4eab3c-client-ca\") pod \"controller-manager-6d7d68875d-xtv77\" (UID: \"dae65a5f-4bff-4961-8beb-52271c4eab3c\") " pod="openshift-controller-manager/controller-manager-6d7d68875d-xtv77" Jan 09 13:35:08 crc kubenswrapper[4919]: I0109 13:35:08.112839 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dae65a5f-4bff-4961-8beb-52271c4eab3c-serving-cert\") pod \"controller-manager-6d7d68875d-xtv77\" (UID: \"dae65a5f-4bff-4961-8beb-52271c4eab3c\") " pod="openshift-controller-manager/controller-manager-6d7d68875d-xtv77" Jan 09 13:35:08 crc kubenswrapper[4919]: I0109 13:35:08.112890 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dae65a5f-4bff-4961-8beb-52271c4eab3c-proxy-ca-bundles\") pod \"controller-manager-6d7d68875d-xtv77\" (UID: \"dae65a5f-4bff-4961-8beb-52271c4eab3c\") " pod="openshift-controller-manager/controller-manager-6d7d68875d-xtv77" Jan 09 13:35:08 crc kubenswrapper[4919]: I0109 13:35:08.112928 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dae65a5f-4bff-4961-8beb-52271c4eab3c-config\") pod \"controller-manager-6d7d68875d-xtv77\" (UID: \"dae65a5f-4bff-4961-8beb-52271c4eab3c\") " pod="openshift-controller-manager/controller-manager-6d7d68875d-xtv77" Jan 09 13:35:08 crc kubenswrapper[4919]: I0109 13:35:08.114460 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9bfd34ad-959e-4acd-ade0-0b6005b680f6-client-ca\") pod \"route-controller-manager-6fb495f6db-8v4vm\" (UID: \"9bfd34ad-959e-4acd-ade0-0b6005b680f6\") " pod="openshift-route-controller-manager/route-controller-manager-6fb495f6db-8v4vm" Jan 09 13:35:08 crc kubenswrapper[4919]: I0109 13:35:08.114807 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bfd34ad-959e-4acd-ade0-0b6005b680f6-config\") pod \"route-controller-manager-6fb495f6db-8v4vm\" (UID: \"9bfd34ad-959e-4acd-ade0-0b6005b680f6\") " pod="openshift-route-controller-manager/route-controller-manager-6fb495f6db-8v4vm" Jan 09 13:35:08 crc kubenswrapper[4919]: I0109 13:35:08.115095 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/dae65a5f-4bff-4961-8beb-52271c4eab3c-proxy-ca-bundles\") pod \"controller-manager-6d7d68875d-xtv77\" (UID: \"dae65a5f-4bff-4961-8beb-52271c4eab3c\") " pod="openshift-controller-manager/controller-manager-6d7d68875d-xtv77" Jan 09 13:35:08 crc kubenswrapper[4919]: I0109 13:35:08.115247 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dae65a5f-4bff-4961-8beb-52271c4eab3c-client-ca\") pod \"controller-manager-6d7d68875d-xtv77\" (UID: \"dae65a5f-4bff-4961-8beb-52271c4eab3c\") " pod="openshift-controller-manager/controller-manager-6d7d68875d-xtv77" Jan 09 13:35:08 crc kubenswrapper[4919]: I0109 13:35:08.115671 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dae65a5f-4bff-4961-8beb-52271c4eab3c-config\") pod \"controller-manager-6d7d68875d-xtv77\" (UID: \"dae65a5f-4bff-4961-8beb-52271c4eab3c\") " pod="openshift-controller-manager/controller-manager-6d7d68875d-xtv77" Jan 09 13:35:08 crc kubenswrapper[4919]: I0109 13:35:08.120578 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dae65a5f-4bff-4961-8beb-52271c4eab3c-serving-cert\") pod \"controller-manager-6d7d68875d-xtv77\" (UID: \"dae65a5f-4bff-4961-8beb-52271c4eab3c\") " pod="openshift-controller-manager/controller-manager-6d7d68875d-xtv77" Jan 09 13:35:08 crc kubenswrapper[4919]: I0109 13:35:08.125090 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bfd34ad-959e-4acd-ade0-0b6005b680f6-serving-cert\") pod \"route-controller-manager-6fb495f6db-8v4vm\" (UID: \"9bfd34ad-959e-4acd-ade0-0b6005b680f6\") " pod="openshift-route-controller-manager/route-controller-manager-6fb495f6db-8v4vm" Jan 09 13:35:08 crc kubenswrapper[4919]: I0109 13:35:08.136923 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68cgg\" (UniqueName: \"kubernetes.io/projected/9bfd34ad-959e-4acd-ade0-0b6005b680f6-kube-api-access-68cgg\") pod \"route-controller-manager-6fb495f6db-8v4vm\" (UID: \"9bfd34ad-959e-4acd-ade0-0b6005b680f6\") " pod="openshift-route-controller-manager/route-controller-manager-6fb495f6db-8v4vm" Jan 09 13:35:08 crc kubenswrapper[4919]: I0109 13:35:08.137635 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54xkj\" (UniqueName: \"kubernetes.io/projected/dae65a5f-4bff-4961-8beb-52271c4eab3c-kube-api-access-54xkj\") pod \"controller-manager-6d7d68875d-xtv77\" (UID: \"dae65a5f-4bff-4961-8beb-52271c4eab3c\") " pod="openshift-controller-manager/controller-manager-6d7d68875d-xtv77" Jan 09 13:35:08 crc kubenswrapper[4919]: I0109 13:35:08.201070 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6fb495f6db-8v4vm" Jan 09 13:35:08 crc kubenswrapper[4919]: I0109 13:35:08.214446 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6d7d68875d-xtv77" Jan 09 13:35:08 crc kubenswrapper[4919]: I0109 13:35:08.562698 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6d7d68875d-xtv77"] Jan 09 13:35:08 crc kubenswrapper[4919]: I0109 13:35:08.696080 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fb495f6db-8v4vm"] Jan 09 13:35:08 crc kubenswrapper[4919]: I0109 13:35:08.762501 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0f3545d-bbcd-42cc-82be-92ea04dc6bbc" path="/var/lib/kubelet/pods/c0f3545d-bbcd-42cc-82be-92ea04dc6bbc/volumes" Jan 09 13:35:08 crc kubenswrapper[4919]: I0109 13:35:08.763369 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7" path="/var/lib/kubelet/pods/d9828d19-9e98-4c59-bfa9-0b0ffbb1c1c7/volumes" Jan 09 13:35:09 crc kubenswrapper[4919]: I0109 13:35:09.076824 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6d7d68875d-xtv77" event={"ID":"dae65a5f-4bff-4961-8beb-52271c4eab3c","Type":"ContainerStarted","Data":"bf03a11b3d7044660334d88ca01733a5f12a101872915d2970b1b27d363c963a"} Jan 09 13:35:09 crc kubenswrapper[4919]: I0109 13:35:09.077141 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6d7d68875d-xtv77" event={"ID":"dae65a5f-4bff-4961-8beb-52271c4eab3c","Type":"ContainerStarted","Data":"42c622d7a2f3a6d38c44cad03da2d0db56461e0895d039e23394a4b1f9097bc6"} Jan 09 13:35:09 crc kubenswrapper[4919]: I0109 13:35:09.078546 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6d7d68875d-xtv77" Jan 09 13:35:09 crc kubenswrapper[4919]: I0109 13:35:09.079965 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6fb495f6db-8v4vm" event={"ID":"9bfd34ad-959e-4acd-ade0-0b6005b680f6","Type":"ContainerStarted","Data":"185364912ca25178afb7e876b661c08768e6b98a51826ce017a8522298b61521"} Jan 09 13:35:09 crc kubenswrapper[4919]: I0109 13:35:09.079996 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6fb495f6db-8v4vm" event={"ID":"9bfd34ad-959e-4acd-ade0-0b6005b680f6","Type":"ContainerStarted","Data":"90e117e19ee5daecc7d25b2e5c7e5dc07f9df6a500e670f3c0a53a940817e71c"} Jan 09 13:35:09 crc kubenswrapper[4919]: I0109 13:35:09.080488 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6fb495f6db-8v4vm" Jan 09 13:35:09 crc kubenswrapper[4919]: I0109 13:35:09.083672 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6d7d68875d-xtv77" Jan 09 13:35:09 crc kubenswrapper[4919]: I0109 13:35:09.097009 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6d7d68875d-xtv77" podStartSLOduration=3.09699031 podStartE2EDuration="3.09699031s" podCreationTimestamp="2026-01-09 13:35:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:35:09.095739877 +0000 UTC m=+288.643579327" watchObservedRunningTime="2026-01-09 
13:35:09.09699031 +0000 UTC m=+288.644829760" Jan 09 13:35:09 crc kubenswrapper[4919]: I0109 13:35:09.133367 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6fb495f6db-8v4vm" podStartSLOduration=3.133347806 podStartE2EDuration="3.133347806s" podCreationTimestamp="2026-01-09 13:35:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:35:09.13162762 +0000 UTC m=+288.679467070" watchObservedRunningTime="2026-01-09 13:35:09.133347806 +0000 UTC m=+288.681187256" Jan 09 13:35:09 crc kubenswrapper[4919]: I0109 13:35:09.346396 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6fb495f6db-8v4vm" Jan 09 13:35:20 crc kubenswrapper[4919]: I0109 13:35:20.607129 4919 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 09 13:35:21 crc kubenswrapper[4919]: I0109 13:35:21.159041 4919 generic.go:334] "Generic (PLEG): container finished" podID="73f4afd2-691f-4749-b361-d99c9482a35b" containerID="e1a0c27c14757895b9f45718d0f9ff65a5755adbd5a99ea7fb4ae689244a039d" exitCode=0 Jan 09 13:35:21 crc kubenswrapper[4919]: I0109 13:35:21.159166 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-66425" event={"ID":"73f4afd2-691f-4749-b361-d99c9482a35b","Type":"ContainerDied","Data":"e1a0c27c14757895b9f45718d0f9ff65a5755adbd5a99ea7fb4ae689244a039d"} Jan 09 13:35:21 crc kubenswrapper[4919]: I0109 13:35:21.160720 4919 scope.go:117] "RemoveContainer" containerID="e1a0c27c14757895b9f45718d0f9ff65a5755adbd5a99ea7fb4ae689244a039d" Jan 09 13:35:22 crc kubenswrapper[4919]: I0109 13:35:22.168253 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-66425" event={"ID":"73f4afd2-691f-4749-b361-d99c9482a35b","Type":"ContainerStarted","Data":"8e4a7f4c5b308d4576d04e5760e2b30965b715f55a48bcc01dcef6902f526f93"} Jan 09 13:35:22 crc kubenswrapper[4919]: I0109 13:35:22.170591 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-66425" Jan 09 13:35:22 crc kubenswrapper[4919]: I0109 13:35:22.172699 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-66425" Jan 09 13:35:26 crc kubenswrapper[4919]: I0109 13:35:26.046799 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6d7d68875d-xtv77"] Jan 09 13:35:26 crc kubenswrapper[4919]: I0109 13:35:26.050081 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6d7d68875d-xtv77" podUID="dae65a5f-4bff-4961-8beb-52271c4eab3c" containerName="controller-manager" containerID="cri-o://bf03a11b3d7044660334d88ca01733a5f12a101872915d2970b1b27d363c963a" gracePeriod=30 Jan 09 13:35:26 crc kubenswrapper[4919]: I0109 13:35:26.065632 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fb495f6db-8v4vm"] Jan 09 13:35:26 crc kubenswrapper[4919]: I0109 13:35:26.065869 4919 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-route-controller-manager/route-controller-manager-6fb495f6db-8v4vm" podUID="9bfd34ad-959e-4acd-ade0-0b6005b680f6" containerName="route-controller-manager" containerID="cri-o://185364912ca25178afb7e876b661c08768e6b98a51826ce017a8522298b61521" gracePeriod=30 Jan 09 13:35:26 crc kubenswrapper[4919]: I0109 13:35:26.200557 4919 generic.go:334] "Generic (PLEG): container finished" podID="9bfd34ad-959e-4acd-ade0-0b6005b680f6" containerID="185364912ca25178afb7e876b661c08768e6b98a51826ce017a8522298b61521" exitCode=0 Jan 09 13:35:26 crc kubenswrapper[4919]: I0109 13:35:26.200631 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6fb495f6db-8v4vm" event={"ID":"9bfd34ad-959e-4acd-ade0-0b6005b680f6","Type":"ContainerDied","Data":"185364912ca25178afb7e876b661c08768e6b98a51826ce017a8522298b61521"} Jan 09 13:35:26 crc kubenswrapper[4919]: I0109 13:35:26.202460 4919 generic.go:334] "Generic (PLEG): container finished" podID="dae65a5f-4bff-4961-8beb-52271c4eab3c" containerID="bf03a11b3d7044660334d88ca01733a5f12a101872915d2970b1b27d363c963a" exitCode=0 Jan 09 13:35:26 crc kubenswrapper[4919]: I0109 13:35:26.202496 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6d7d68875d-xtv77" event={"ID":"dae65a5f-4bff-4961-8beb-52271c4eab3c","Type":"ContainerDied","Data":"bf03a11b3d7044660334d88ca01733a5f12a101872915d2970b1b27d363c963a"} Jan 09 13:35:26 crc kubenswrapper[4919]: I0109 13:35:26.582866 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6fb495f6db-8v4vm" Jan 09 13:35:26 crc kubenswrapper[4919]: I0109 13:35:26.741920 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9bfd34ad-959e-4acd-ade0-0b6005b680f6-client-ca\") pod \"9bfd34ad-959e-4acd-ade0-0b6005b680f6\" (UID: \"9bfd34ad-959e-4acd-ade0-0b6005b680f6\") " Jan 09 13:35:26 crc kubenswrapper[4919]: I0109 13:35:26.742527 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bfd34ad-959e-4acd-ade0-0b6005b680f6-config\") pod \"9bfd34ad-959e-4acd-ade0-0b6005b680f6\" (UID: \"9bfd34ad-959e-4acd-ade0-0b6005b680f6\") " Jan 09 13:35:26 crc kubenswrapper[4919]: I0109 13:35:26.742748 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68cgg\" (UniqueName: \"kubernetes.io/projected/9bfd34ad-959e-4acd-ade0-0b6005b680f6-kube-api-access-68cgg\") pod \"9bfd34ad-959e-4acd-ade0-0b6005b680f6\" (UID: \"9bfd34ad-959e-4acd-ade0-0b6005b680f6\") " Jan 09 13:35:26 crc kubenswrapper[4919]: I0109 13:35:26.742789 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bfd34ad-959e-4acd-ade0-0b6005b680f6-serving-cert\") pod \"9bfd34ad-959e-4acd-ade0-0b6005b680f6\" (UID: \"9bfd34ad-959e-4acd-ade0-0b6005b680f6\") " Jan 09 13:35:26 crc kubenswrapper[4919]: I0109 13:35:26.743321 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bfd34ad-959e-4acd-ade0-0b6005b680f6-client-ca" (OuterVolumeSpecName: "client-ca") pod "9bfd34ad-959e-4acd-ade0-0b6005b680f6" (UID: "9bfd34ad-959e-4acd-ade0-0b6005b680f6"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:35:26 crc kubenswrapper[4919]: I0109 13:35:26.743373 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bfd34ad-959e-4acd-ade0-0b6005b680f6-config" (OuterVolumeSpecName: "config") pod "9bfd34ad-959e-4acd-ade0-0b6005b680f6" (UID: "9bfd34ad-959e-4acd-ade0-0b6005b680f6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:35:26 crc kubenswrapper[4919]: I0109 13:35:26.751631 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bfd34ad-959e-4acd-ade0-0b6005b680f6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9bfd34ad-959e-4acd-ade0-0b6005b680f6" (UID: "9bfd34ad-959e-4acd-ade0-0b6005b680f6"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:35:26 crc kubenswrapper[4919]: I0109 13:35:26.751703 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bfd34ad-959e-4acd-ade0-0b6005b680f6-kube-api-access-68cgg" (OuterVolumeSpecName: "kube-api-access-68cgg") pod "9bfd34ad-959e-4acd-ade0-0b6005b680f6" (UID: "9bfd34ad-959e-4acd-ade0-0b6005b680f6"). InnerVolumeSpecName "kube-api-access-68cgg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:35:26 crc kubenswrapper[4919]: I0109 13:35:26.844320 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68cgg\" (UniqueName: \"kubernetes.io/projected/9bfd34ad-959e-4acd-ade0-0b6005b680f6-kube-api-access-68cgg\") on node \"crc\" DevicePath \"\"" Jan 09 13:35:26 crc kubenswrapper[4919]: I0109 13:35:26.844484 4919 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bfd34ad-959e-4acd-ade0-0b6005b680f6-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 13:35:26 crc kubenswrapper[4919]: I0109 13:35:26.844507 4919 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9bfd34ad-959e-4acd-ade0-0b6005b680f6-client-ca\") on node \"crc\" DevicePath \"\"" Jan 09 13:35:26 crc kubenswrapper[4919]: I0109 13:35:26.844525 4919 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bfd34ad-959e-4acd-ade0-0b6005b680f6-config\") on node \"crc\" DevicePath \"\"" Jan 09 13:35:27 crc kubenswrapper[4919]: I0109 13:35:27.128154 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6d7d68875d-xtv77" Jan 09 13:35:27 crc kubenswrapper[4919]: I0109 13:35:27.212012 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6fb495f6db-8v4vm" event={"ID":"9bfd34ad-959e-4acd-ade0-0b6005b680f6","Type":"ContainerDied","Data":"90e117e19ee5daecc7d25b2e5c7e5dc07f9df6a500e670f3c0a53a940817e71c"} Jan 09 13:35:27 crc kubenswrapper[4919]: I0109 13:35:27.212070 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6fb495f6db-8v4vm" Jan 09 13:35:27 crc kubenswrapper[4919]: I0109 13:35:27.212124 4919 scope.go:117] "RemoveContainer" containerID="185364912ca25178afb7e876b661c08768e6b98a51826ce017a8522298b61521" Jan 09 13:35:27 crc kubenswrapper[4919]: I0109 13:35:27.215110 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6d7d68875d-xtv77" event={"ID":"dae65a5f-4bff-4961-8beb-52271c4eab3c","Type":"ContainerDied","Data":"42c622d7a2f3a6d38c44cad03da2d0db56461e0895d039e23394a4b1f9097bc6"} Jan 09 13:35:27 crc kubenswrapper[4919]: I0109 13:35:27.215263 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6d7d68875d-xtv77" Jan 09 13:35:27 crc kubenswrapper[4919]: I0109 13:35:27.247343 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fb495f6db-8v4vm"] Jan 09 13:35:27 crc kubenswrapper[4919]: I0109 13:35:27.247419 4919 scope.go:117] "RemoveContainer" containerID="bf03a11b3d7044660334d88ca01733a5f12a101872915d2970b1b27d363c963a" Jan 09 13:35:27 crc kubenswrapper[4919]: I0109 13:35:27.249754 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dae65a5f-4bff-4961-8beb-52271c4eab3c-config\") pod \"dae65a5f-4bff-4961-8beb-52271c4eab3c\" (UID: \"dae65a5f-4bff-4961-8beb-52271c4eab3c\") " Jan 09 13:35:27 crc kubenswrapper[4919]: I0109 13:35:27.249840 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54xkj\" (UniqueName: \"kubernetes.io/projected/dae65a5f-4bff-4961-8beb-52271c4eab3c-kube-api-access-54xkj\") pod \"dae65a5f-4bff-4961-8beb-52271c4eab3c\" (UID: \"dae65a5f-4bff-4961-8beb-52271c4eab3c\") " Jan 09 13:35:27 crc kubenswrapper[4919]: I0109 13:35:27.249969 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dae65a5f-4bff-4961-8beb-52271c4eab3c-proxy-ca-bundles\") pod \"dae65a5f-4bff-4961-8beb-52271c4eab3c\" (UID: \"dae65a5f-4bff-4961-8beb-52271c4eab3c\") " Jan 09 13:35:27 crc kubenswrapper[4919]: I0109 13:35:27.250044 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dae65a5f-4bff-4961-8beb-52271c4eab3c-serving-cert\") pod \"dae65a5f-4bff-4961-8beb-52271c4eab3c\" (UID: \"dae65a5f-4bff-4961-8beb-52271c4eab3c\") " Jan 09 13:35:27 crc kubenswrapper[4919]: I0109 13:35:27.250087 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dae65a5f-4bff-4961-8beb-52271c4eab3c-client-ca\") pod \"dae65a5f-4bff-4961-8beb-52271c4eab3c\" (UID: \"dae65a5f-4bff-4961-8beb-52271c4eab3c\") " Jan 09 13:35:27 crc kubenswrapper[4919]: I0109 13:35:27.250157 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fb495f6db-8v4vm"] Jan 09 13:35:27 crc kubenswrapper[4919]: I0109 13:35:27.250877 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dae65a5f-4bff-4961-8beb-52271c4eab3c-config" (OuterVolumeSpecName: "config") pod "dae65a5f-4bff-4961-8beb-52271c4eab3c" (UID: "dae65a5f-4bff-4961-8beb-52271c4eab3c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:35:27 crc kubenswrapper[4919]: I0109 13:35:27.251165 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dae65a5f-4bff-4961-8beb-52271c4eab3c-client-ca" (OuterVolumeSpecName: "client-ca") pod "dae65a5f-4bff-4961-8beb-52271c4eab3c" (UID: "dae65a5f-4bff-4961-8beb-52271c4eab3c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:35:27 crc kubenswrapper[4919]: I0109 13:35:27.251331 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dae65a5f-4bff-4961-8beb-52271c4eab3c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "dae65a5f-4bff-4961-8beb-52271c4eab3c" (UID: "dae65a5f-4bff-4961-8beb-52271c4eab3c"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:35:27 crc kubenswrapper[4919]: I0109 13:35:27.255766 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dae65a5f-4bff-4961-8beb-52271c4eab3c-kube-api-access-54xkj" (OuterVolumeSpecName: "kube-api-access-54xkj") pod "dae65a5f-4bff-4961-8beb-52271c4eab3c" (UID: "dae65a5f-4bff-4961-8beb-52271c4eab3c"). InnerVolumeSpecName "kube-api-access-54xkj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:35:27 crc kubenswrapper[4919]: I0109 13:35:27.257077 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dae65a5f-4bff-4961-8beb-52271c4eab3c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dae65a5f-4bff-4961-8beb-52271c4eab3c" (UID: "dae65a5f-4bff-4961-8beb-52271c4eab3c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:35:27 crc kubenswrapper[4919]: I0109 13:35:27.351971 4919 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dae65a5f-4bff-4961-8beb-52271c4eab3c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 13:35:27 crc kubenswrapper[4919]: I0109 13:35:27.352013 4919 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dae65a5f-4bff-4961-8beb-52271c4eab3c-client-ca\") on node \"crc\" DevicePath \"\"" Jan 09 13:35:27 crc kubenswrapper[4919]: I0109 13:35:27.352025 4919 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dae65a5f-4bff-4961-8beb-52271c4eab3c-config\") on node \"crc\" DevicePath \"\"" Jan 09 13:35:27 crc kubenswrapper[4919]: I0109 13:35:27.352035 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54xkj\" (UniqueName: \"kubernetes.io/projected/dae65a5f-4bff-4961-8beb-52271c4eab3c-kube-api-access-54xkj\") on node \"crc\" DevicePath \"\"" Jan 09 13:35:27 crc kubenswrapper[4919]: I0109 13:35:27.352046 4919 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dae65a5f-4bff-4961-8beb-52271c4eab3c-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 09 13:35:27 crc kubenswrapper[4919]: I0109 13:35:27.563589 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6d7d68875d-xtv77"] Jan 09 13:35:27 crc kubenswrapper[4919]: I0109 13:35:27.569958 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6d7d68875d-xtv77"] Jan 09 13:35:27 crc 
kubenswrapper[4919]: I0109 13:35:27.900676 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-d8dccc797-qfx65"] Jan 09 13:35:27 crc kubenswrapper[4919]: E0109 13:35:27.900992 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bfd34ad-959e-4acd-ade0-0b6005b680f6" containerName="route-controller-manager" Jan 09 13:35:27 crc kubenswrapper[4919]: I0109 13:35:27.901008 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bfd34ad-959e-4acd-ade0-0b6005b680f6" containerName="route-controller-manager" Jan 09 13:35:27 crc kubenswrapper[4919]: E0109 13:35:27.901029 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dae65a5f-4bff-4961-8beb-52271c4eab3c" containerName="controller-manager" Jan 09 13:35:27 crc kubenswrapper[4919]: I0109 13:35:27.901037 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="dae65a5f-4bff-4961-8beb-52271c4eab3c" containerName="controller-manager" Jan 09 13:35:27 crc kubenswrapper[4919]: I0109 13:35:27.901156 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="dae65a5f-4bff-4961-8beb-52271c4eab3c" containerName="controller-manager" Jan 09 13:35:27 crc kubenswrapper[4919]: I0109 13:35:27.901177 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bfd34ad-959e-4acd-ade0-0b6005b680f6" containerName="route-controller-manager" Jan 09 13:35:27 crc kubenswrapper[4919]: I0109 13:35:27.901675 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d8dccc797-qfx65" Jan 09 13:35:27 crc kubenswrapper[4919]: I0109 13:35:27.904858 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 09 13:35:27 crc kubenswrapper[4919]: I0109 13:35:27.905137 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 09 13:35:27 crc kubenswrapper[4919]: I0109 13:35:27.905400 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 09 13:35:27 crc kubenswrapper[4919]: I0109 13:35:27.905652 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 09 13:35:27 crc kubenswrapper[4919]: I0109 13:35:27.905898 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 09 13:35:27 crc kubenswrapper[4919]: I0109 13:35:27.907304 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 09 13:35:27 crc kubenswrapper[4919]: I0109 13:35:27.914707 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c8bb87477-5xswc"] Jan 09 13:35:27 crc kubenswrapper[4919]: I0109 13:35:27.915147 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 09 13:35:27 crc kubenswrapper[4919]: I0109 13:35:27.916150 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c8bb87477-5xswc" Jan 09 13:35:27 crc kubenswrapper[4919]: I0109 13:35:27.919568 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 09 13:35:27 crc kubenswrapper[4919]: I0109 13:35:27.919724 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 09 13:35:27 crc kubenswrapper[4919]: I0109 13:35:27.919763 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 09 13:35:27 crc kubenswrapper[4919]: I0109 13:35:27.919822 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 09 13:35:27 crc kubenswrapper[4919]: I0109 13:35:27.919907 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 09 13:35:27 crc kubenswrapper[4919]: I0109 13:35:27.920323 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 09 13:35:27 crc kubenswrapper[4919]: I0109 13:35:27.926947 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d8dccc797-qfx65"] Jan 09 13:35:27 crc kubenswrapper[4919]: I0109 13:35:27.939739 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c8bb87477-5xswc"] Jan 09 13:35:28 crc kubenswrapper[4919]: I0109 13:35:28.062642 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0de45222-183b-4c44-b26f-98238e65b4d5-client-ca\") pod \"route-controller-manager-7c8bb87477-5xswc\" (UID: \"0de45222-183b-4c44-b26f-98238e65b4d5\") " pod="openshift-route-controller-manager/route-controller-manager-7c8bb87477-5xswc" Jan 09 13:35:28 crc kubenswrapper[4919]: I0109 13:35:28.062729 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt2r9\" (UniqueName: \"kubernetes.io/projected/0de45222-183b-4c44-b26f-98238e65b4d5-kube-api-access-qt2r9\") pod \"route-controller-manager-7c8bb87477-5xswc\" (UID: \"0de45222-183b-4c44-b26f-98238e65b4d5\") " pod="openshift-route-controller-manager/route-controller-manager-7c8bb87477-5xswc" Jan 09 13:35:28 crc kubenswrapper[4919]: I0109 13:35:28.062786 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03976141-148a-45d4-a7a0-acdd110d1e11-config\") pod \"controller-manager-d8dccc797-qfx65\" (UID: \"03976141-148a-45d4-a7a0-acdd110d1e11\") " pod="openshift-controller-manager/controller-manager-d8dccc797-qfx65" Jan 09 13:35:28 crc kubenswrapper[4919]: I0109 13:35:28.062827 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/03976141-148a-45d4-a7a0-acdd110d1e11-client-ca\") pod \"controller-manager-d8dccc797-qfx65\" (UID: \"03976141-148a-45d4-a7a0-acdd110d1e11\") " pod="openshift-controller-manager/controller-manager-d8dccc797-qfx65" Jan 09 13:35:28 crc kubenswrapper[4919]: I0109 13:35:28.062913 4919 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0de45222-183b-4c44-b26f-98238e65b4d5-serving-cert\") pod \"route-controller-manager-7c8bb87477-5xswc\" (UID: \"0de45222-183b-4c44-b26f-98238e65b4d5\") " pod="openshift-route-controller-manager/route-controller-manager-7c8bb87477-5xswc" Jan 09 13:35:28 crc kubenswrapper[4919]: I0109 13:35:28.062964 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03976141-148a-45d4-a7a0-acdd110d1e11-serving-cert\") pod \"controller-manager-d8dccc797-qfx65\" (UID: \"03976141-148a-45d4-a7a0-acdd110d1e11\") " pod="openshift-controller-manager/controller-manager-d8dccc797-qfx65" Jan 09 13:35:28 crc kubenswrapper[4919]: I0109 13:35:28.063013 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xsqs\" (UniqueName: \"kubernetes.io/projected/03976141-148a-45d4-a7a0-acdd110d1e11-kube-api-access-5xsqs\") pod \"controller-manager-d8dccc797-qfx65\" (UID: \"03976141-148a-45d4-a7a0-acdd110d1e11\") " pod="openshift-controller-manager/controller-manager-d8dccc797-qfx65" Jan 09 13:35:28 crc kubenswrapper[4919]: I0109 13:35:28.063062 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/03976141-148a-45d4-a7a0-acdd110d1e11-proxy-ca-bundles\") pod \"controller-manager-d8dccc797-qfx65\" (UID: \"03976141-148a-45d4-a7a0-acdd110d1e11\") " pod="openshift-controller-manager/controller-manager-d8dccc797-qfx65" Jan 09 13:35:28 crc kubenswrapper[4919]: I0109 13:35:28.063109 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0de45222-183b-4c44-b26f-98238e65b4d5-config\") pod \"route-controller-manager-7c8bb87477-5xswc\" (UID: \"0de45222-183b-4c44-b26f-98238e65b4d5\") " pod="openshift-route-controller-manager/route-controller-manager-7c8bb87477-5xswc" Jan 09 13:35:28 crc kubenswrapper[4919]: I0109 13:35:28.163704 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qt2r9\" (UniqueName: \"kubernetes.io/projected/0de45222-183b-4c44-b26f-98238e65b4d5-kube-api-access-qt2r9\") pod \"route-controller-manager-7c8bb87477-5xswc\" (UID: \"0de45222-183b-4c44-b26f-98238e65b4d5\") " pod="openshift-route-controller-manager/route-controller-manager-7c8bb87477-5xswc" Jan 09 13:35:28 crc kubenswrapper[4919]: I0109 13:35:28.163759 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03976141-148a-45d4-a7a0-acdd110d1e11-config\") pod \"controller-manager-d8dccc797-qfx65\" (UID: \"03976141-148a-45d4-a7a0-acdd110d1e11\") " pod="openshift-controller-manager/controller-manager-d8dccc797-qfx65" Jan 09 13:35:28 crc kubenswrapper[4919]: I0109 13:35:28.163785 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/03976141-148a-45d4-a7a0-acdd110d1e11-client-ca\") pod \"controller-manager-d8dccc797-qfx65\" (UID: \"03976141-148a-45d4-a7a0-acdd110d1e11\") " pod="openshift-controller-manager/controller-manager-d8dccc797-qfx65" Jan 09 13:35:28 crc kubenswrapper[4919]: I0109 13:35:28.163807 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/0de45222-183b-4c44-b26f-98238e65b4d5-serving-cert\") pod \"route-controller-manager-7c8bb87477-5xswc\" (UID: \"0de45222-183b-4c44-b26f-98238e65b4d5\") " pod="openshift-route-controller-manager/route-controller-manager-7c8bb87477-5xswc" Jan 09 13:35:28 crc kubenswrapper[4919]: I0109 13:35:28.163834 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03976141-148a-45d4-a7a0-acdd110d1e11-serving-cert\") pod \"controller-manager-d8dccc797-qfx65\" (UID: \"03976141-148a-45d4-a7a0-acdd110d1e11\") " pod="openshift-controller-manager/controller-manager-d8dccc797-qfx65" Jan 09 13:35:28 crc kubenswrapper[4919]: I0109 13:35:28.163856 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xsqs\" (UniqueName: \"kubernetes.io/projected/03976141-148a-45d4-a7a0-acdd110d1e11-kube-api-access-5xsqs\") pod \"controller-manager-d8dccc797-qfx65\" (UID: \"03976141-148a-45d4-a7a0-acdd110d1e11\") " pod="openshift-controller-manager/controller-manager-d8dccc797-qfx65" Jan 09 13:35:28 crc kubenswrapper[4919]: I0109 13:35:28.163882 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/03976141-148a-45d4-a7a0-acdd110d1e11-proxy-ca-bundles\") pod \"controller-manager-d8dccc797-qfx65\" (UID: \"03976141-148a-45d4-a7a0-acdd110d1e11\") " pod="openshift-controller-manager/controller-manager-d8dccc797-qfx65" Jan 09 13:35:28 crc kubenswrapper[4919]: I0109 13:35:28.163909 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0de45222-183b-4c44-b26f-98238e65b4d5-config\") pod \"route-controller-manager-7c8bb87477-5xswc\" (UID: \"0de45222-183b-4c44-b26f-98238e65b4d5\") " pod="openshift-route-controller-manager/route-controller-manager-7c8bb87477-5xswc" Jan 09 13:35:28 crc kubenswrapper[4919]: I0109 13:35:28.163959 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0de45222-183b-4c44-b26f-98238e65b4d5-client-ca\") pod \"route-controller-manager-7c8bb87477-5xswc\" (UID: \"0de45222-183b-4c44-b26f-98238e65b4d5\") " pod="openshift-route-controller-manager/route-controller-manager-7c8bb87477-5xswc" Jan 09 13:35:28 crc kubenswrapper[4919]: I0109 13:35:28.165032 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0de45222-183b-4c44-b26f-98238e65b4d5-client-ca\") pod \"route-controller-manager-7c8bb87477-5xswc\" (UID: \"0de45222-183b-4c44-b26f-98238e65b4d5\") " pod="openshift-route-controller-manager/route-controller-manager-7c8bb87477-5xswc" Jan 09 13:35:28 crc kubenswrapper[4919]: I0109 13:35:28.166258 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/03976141-148a-45d4-a7a0-acdd110d1e11-proxy-ca-bundles\") pod \"controller-manager-d8dccc797-qfx65\" (UID: \"03976141-148a-45d4-a7a0-acdd110d1e11\") " pod="openshift-controller-manager/controller-manager-d8dccc797-qfx65" Jan 09 13:35:28 crc kubenswrapper[4919]: I0109 13:35:28.166447 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0de45222-183b-4c44-b26f-98238e65b4d5-config\") pod \"route-controller-manager-7c8bb87477-5xswc\" (UID: 
\"0de45222-183b-4c44-b26f-98238e65b4d5\") " pod="openshift-route-controller-manager/route-controller-manager-7c8bb87477-5xswc" Jan 09 13:35:28 crc kubenswrapper[4919]: I0109 13:35:28.166808 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/03976141-148a-45d4-a7a0-acdd110d1e11-client-ca\") pod \"controller-manager-d8dccc797-qfx65\" (UID: \"03976141-148a-45d4-a7a0-acdd110d1e11\") " pod="openshift-controller-manager/controller-manager-d8dccc797-qfx65" Jan 09 13:35:28 crc kubenswrapper[4919]: I0109 13:35:28.167801 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03976141-148a-45d4-a7a0-acdd110d1e11-config\") pod \"controller-manager-d8dccc797-qfx65\" (UID: \"03976141-148a-45d4-a7a0-acdd110d1e11\") " pod="openshift-controller-manager/controller-manager-d8dccc797-qfx65" Jan 09 13:35:28 crc kubenswrapper[4919]: I0109 13:35:28.173186 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0de45222-183b-4c44-b26f-98238e65b4d5-serving-cert\") pod \"route-controller-manager-7c8bb87477-5xswc\" (UID: \"0de45222-183b-4c44-b26f-98238e65b4d5\") " pod="openshift-route-controller-manager/route-controller-manager-7c8bb87477-5xswc" Jan 09 13:35:28 crc kubenswrapper[4919]: I0109 13:35:28.177998 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03976141-148a-45d4-a7a0-acdd110d1e11-serving-cert\") pod \"controller-manager-d8dccc797-qfx65\" (UID: \"03976141-148a-45d4-a7a0-acdd110d1e11\") " pod="openshift-controller-manager/controller-manager-d8dccc797-qfx65" Jan 09 13:35:28 crc kubenswrapper[4919]: I0109 13:35:28.189721 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt2r9\" (UniqueName: \"kubernetes.io/projected/0de45222-183b-4c44-b26f-98238e65b4d5-kube-api-access-qt2r9\") pod \"route-controller-manager-7c8bb87477-5xswc\" (UID: \"0de45222-183b-4c44-b26f-98238e65b4d5\") " pod="openshift-route-controller-manager/route-controller-manager-7c8bb87477-5xswc" Jan 09 13:35:28 crc kubenswrapper[4919]: I0109 13:35:28.206877 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xsqs\" (UniqueName: \"kubernetes.io/projected/03976141-148a-45d4-a7a0-acdd110d1e11-kube-api-access-5xsqs\") pod \"controller-manager-d8dccc797-qfx65\" (UID: \"03976141-148a-45d4-a7a0-acdd110d1e11\") " pod="openshift-controller-manager/controller-manager-d8dccc797-qfx65" Jan 09 13:35:28 crc kubenswrapper[4919]: I0109 13:35:28.235708 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d8dccc797-qfx65" Jan 09 13:35:28 crc kubenswrapper[4919]: I0109 13:35:28.250829 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c8bb87477-5xswc" Jan 09 13:35:28 crc kubenswrapper[4919]: I0109 13:35:28.517264 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c8bb87477-5xswc"] Jan 09 13:35:28 crc kubenswrapper[4919]: W0109 13:35:28.532025 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0de45222_183b_4c44_b26f_98238e65b4d5.slice/crio-39ca29dde4d9b254dcd331d17bd3e76d87bb24b3238a4c906500f3b630dbd707 WatchSource:0}: Error finding container 39ca29dde4d9b254dcd331d17bd3e76d87bb24b3238a4c906500f3b630dbd707: Status 404 returned error can't find the container with id 39ca29dde4d9b254dcd331d17bd3e76d87bb24b3238a4c906500f3b630dbd707 Jan 09 13:35:28 crc kubenswrapper[4919]: I0109 13:35:28.666193 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d8dccc797-qfx65"] Jan 09 13:35:28 crc kubenswrapper[4919]: I0109 13:35:28.758931 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bfd34ad-959e-4acd-ade0-0b6005b680f6" path="/var/lib/kubelet/pods/9bfd34ad-959e-4acd-ade0-0b6005b680f6/volumes" Jan 09 13:35:28 crc kubenswrapper[4919]: I0109 13:35:28.759737 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dae65a5f-4bff-4961-8beb-52271c4eab3c" path="/var/lib/kubelet/pods/dae65a5f-4bff-4961-8beb-52271c4eab3c/volumes" Jan 09 13:35:29 crc kubenswrapper[4919]: I0109 13:35:29.233617 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d8dccc797-qfx65" event={"ID":"03976141-148a-45d4-a7a0-acdd110d1e11","Type":"ContainerStarted","Data":"45cbfde240935359fe78fd0c10e926dea75c3d73d6afe650ced3f387a066a32a"} Jan 09 13:35:29 crc kubenswrapper[4919]: I0109 13:35:29.234045 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d8dccc797-qfx65" event={"ID":"03976141-148a-45d4-a7a0-acdd110d1e11","Type":"ContainerStarted","Data":"b6d3662fa19845800a152ecdeea3ea7941c5429e7066c355a98ada35eb7441c5"} Jan 09 13:35:29 crc kubenswrapper[4919]: I0109 13:35:29.234063 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-d8dccc797-qfx65" Jan 09 13:35:29 crc kubenswrapper[4919]: I0109 13:35:29.235601 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c8bb87477-5xswc" event={"ID":"0de45222-183b-4c44-b26f-98238e65b4d5","Type":"ContainerStarted","Data":"cf6aefc7e41298d621d204ac30af71319ec9db84476b17bff6ed734fcfafde69"} Jan 09 13:35:29 crc kubenswrapper[4919]: I0109 13:35:29.235630 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c8bb87477-5xswc" event={"ID":"0de45222-183b-4c44-b26f-98238e65b4d5","Type":"ContainerStarted","Data":"39ca29dde4d9b254dcd331d17bd3e76d87bb24b3238a4c906500f3b630dbd707"} Jan 09 13:35:29 crc kubenswrapper[4919]: I0109 13:35:29.235902 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7c8bb87477-5xswc" Jan 09 13:35:29 crc kubenswrapper[4919]: I0109 13:35:29.242918 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-route-controller-manager/route-controller-manager-7c8bb87477-5xswc" Jan 09 13:35:29 crc kubenswrapper[4919]: I0109 13:35:29.245478 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-d8dccc797-qfx65" Jan 09 13:35:29 crc kubenswrapper[4919]: I0109 13:35:29.255924 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-d8dccc797-qfx65" podStartSLOduration=3.255897103 podStartE2EDuration="3.255897103s" podCreationTimestamp="2026-01-09 13:35:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:35:29.252724939 +0000 UTC m=+308.800564409" watchObservedRunningTime="2026-01-09 13:35:29.255897103 +0000 UTC m=+308.803736563" Jan 09 13:35:29 crc kubenswrapper[4919]: I0109 13:35:29.273428 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7c8bb87477-5xswc" podStartSLOduration=3.273411078 podStartE2EDuration="3.273411078s" podCreationTimestamp="2026-01-09 13:35:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:35:29.272802412 +0000 UTC m=+308.820641862" watchObservedRunningTime="2026-01-09 13:35:29.273411078 +0000 UTC m=+308.821250528" Jan 09 13:36:06 crc kubenswrapper[4919]: I0109 13:36:06.046035 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d8dccc797-qfx65"] Jan 09 13:36:06 crc kubenswrapper[4919]: I0109 13:36:06.048415 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-d8dccc797-qfx65" podUID="03976141-148a-45d4-a7a0-acdd110d1e11" containerName="controller-manager" containerID="cri-o://45cbfde240935359fe78fd0c10e926dea75c3d73d6afe650ced3f387a066a32a" gracePeriod=30 Jan 09 13:36:06 crc kubenswrapper[4919]: I0109 13:36:06.062720 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c8bb87477-5xswc"] Jan 09 13:36:06 crc kubenswrapper[4919]: I0109 13:36:06.063815 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7c8bb87477-5xswc" podUID="0de45222-183b-4c44-b26f-98238e65b4d5" containerName="route-controller-manager" containerID="cri-o://cf6aefc7e41298d621d204ac30af71319ec9db84476b17bff6ed734fcfafde69" gracePeriod=30 Jan 09 13:36:06 crc kubenswrapper[4919]: I0109 13:36:06.494400 4919 generic.go:334] "Generic (PLEG): container finished" podID="03976141-148a-45d4-a7a0-acdd110d1e11" containerID="45cbfde240935359fe78fd0c10e926dea75c3d73d6afe650ced3f387a066a32a" exitCode=0 Jan 09 13:36:06 crc kubenswrapper[4919]: I0109 13:36:06.494507 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d8dccc797-qfx65" event={"ID":"03976141-148a-45d4-a7a0-acdd110d1e11","Type":"ContainerDied","Data":"45cbfde240935359fe78fd0c10e926dea75c3d73d6afe650ced3f387a066a32a"} Jan 09 13:36:06 crc kubenswrapper[4919]: I0109 13:36:06.494596 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d8dccc797-qfx65" 
event={"ID":"03976141-148a-45d4-a7a0-acdd110d1e11","Type":"ContainerDied","Data":"b6d3662fa19845800a152ecdeea3ea7941c5429e7066c355a98ada35eb7441c5"} Jan 09 13:36:06 crc kubenswrapper[4919]: I0109 13:36:06.494615 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6d3662fa19845800a152ecdeea3ea7941c5429e7066c355a98ada35eb7441c5" Jan 09 13:36:06 crc kubenswrapper[4919]: I0109 13:36:06.496141 4919 generic.go:334] "Generic (PLEG): container finished" podID="0de45222-183b-4c44-b26f-98238e65b4d5" containerID="cf6aefc7e41298d621d204ac30af71319ec9db84476b17bff6ed734fcfafde69" exitCode=0 Jan 09 13:36:06 crc kubenswrapper[4919]: I0109 13:36:06.496190 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c8bb87477-5xswc" event={"ID":"0de45222-183b-4c44-b26f-98238e65b4d5","Type":"ContainerDied","Data":"cf6aefc7e41298d621d204ac30af71319ec9db84476b17bff6ed734fcfafde69"} Jan 09 13:36:06 crc kubenswrapper[4919]: I0109 13:36:06.496237 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c8bb87477-5xswc" event={"ID":"0de45222-183b-4c44-b26f-98238e65b4d5","Type":"ContainerDied","Data":"39ca29dde4d9b254dcd331d17bd3e76d87bb24b3238a4c906500f3b630dbd707"} Jan 09 13:36:06 crc kubenswrapper[4919]: I0109 13:36:06.496249 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39ca29dde4d9b254dcd331d17bd3e76d87bb24b3238a4c906500f3b630dbd707" Jan 09 13:36:06 crc kubenswrapper[4919]: I0109 13:36:06.513995 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c8bb87477-5xswc" Jan 09 13:36:06 crc kubenswrapper[4919]: I0109 13:36:06.519505 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d8dccc797-qfx65" Jan 09 13:36:06 crc kubenswrapper[4919]: I0109 13:36:06.621430 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0de45222-183b-4c44-b26f-98238e65b4d5-client-ca\") pod \"0de45222-183b-4c44-b26f-98238e65b4d5\" (UID: \"0de45222-183b-4c44-b26f-98238e65b4d5\") " Jan 09 13:36:06 crc kubenswrapper[4919]: I0109 13:36:06.621859 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/03976141-148a-45d4-a7a0-acdd110d1e11-client-ca\") pod \"03976141-148a-45d4-a7a0-acdd110d1e11\" (UID: \"03976141-148a-45d4-a7a0-acdd110d1e11\") " Jan 09 13:36:06 crc kubenswrapper[4919]: I0109 13:36:06.621885 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0de45222-183b-4c44-b26f-98238e65b4d5-serving-cert\") pod \"0de45222-183b-4c44-b26f-98238e65b4d5\" (UID: \"0de45222-183b-4c44-b26f-98238e65b4d5\") " Jan 09 13:36:06 crc kubenswrapper[4919]: I0109 13:36:06.621903 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/03976141-148a-45d4-a7a0-acdd110d1e11-proxy-ca-bundles\") pod \"03976141-148a-45d4-a7a0-acdd110d1e11\" (UID: \"03976141-148a-45d4-a7a0-acdd110d1e11\") " Jan 09 13:36:06 crc kubenswrapper[4919]: I0109 13:36:06.621947 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xsqs\" (UniqueName: \"kubernetes.io/projected/03976141-148a-45d4-a7a0-acdd110d1e11-kube-api-access-5xsqs\") pod \"03976141-148a-45d4-a7a0-acdd110d1e11\" (UID: \"03976141-148a-45d4-a7a0-acdd110d1e11\") " Jan 09 13:36:06 crc kubenswrapper[4919]: I0109 13:36:06.621979 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03976141-148a-45d4-a7a0-acdd110d1e11-serving-cert\") pod \"03976141-148a-45d4-a7a0-acdd110d1e11\" (UID: \"03976141-148a-45d4-a7a0-acdd110d1e11\") " Jan 09 13:36:06 crc kubenswrapper[4919]: I0109 13:36:06.622054 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qt2r9\" (UniqueName: \"kubernetes.io/projected/0de45222-183b-4c44-b26f-98238e65b4d5-kube-api-access-qt2r9\") pod \"0de45222-183b-4c44-b26f-98238e65b4d5\" (UID: \"0de45222-183b-4c44-b26f-98238e65b4d5\") " Jan 09 13:36:06 crc kubenswrapper[4919]: I0109 13:36:06.622075 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0de45222-183b-4c44-b26f-98238e65b4d5-config\") pod \"0de45222-183b-4c44-b26f-98238e65b4d5\" (UID: \"0de45222-183b-4c44-b26f-98238e65b4d5\") " Jan 09 13:36:06 crc kubenswrapper[4919]: I0109 13:36:06.622128 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03976141-148a-45d4-a7a0-acdd110d1e11-config\") pod \"03976141-148a-45d4-a7a0-acdd110d1e11\" (UID: \"03976141-148a-45d4-a7a0-acdd110d1e11\") " Jan 09 13:36:06 crc kubenswrapper[4919]: I0109 13:36:06.622344 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0de45222-183b-4c44-b26f-98238e65b4d5-client-ca" (OuterVolumeSpecName: "client-ca") pod "0de45222-183b-4c44-b26f-98238e65b4d5" (UID: 
"0de45222-183b-4c44-b26f-98238e65b4d5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:36:06 crc kubenswrapper[4919]: I0109 13:36:06.622826 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03976141-148a-45d4-a7a0-acdd110d1e11-client-ca" (OuterVolumeSpecName: "client-ca") pod "03976141-148a-45d4-a7a0-acdd110d1e11" (UID: "03976141-148a-45d4-a7a0-acdd110d1e11"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:36:06 crc kubenswrapper[4919]: I0109 13:36:06.622857 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03976141-148a-45d4-a7a0-acdd110d1e11-config" (OuterVolumeSpecName: "config") pod "03976141-148a-45d4-a7a0-acdd110d1e11" (UID: "03976141-148a-45d4-a7a0-acdd110d1e11"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:36:06 crc kubenswrapper[4919]: I0109 13:36:06.623266 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03976141-148a-45d4-a7a0-acdd110d1e11-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "03976141-148a-45d4-a7a0-acdd110d1e11" (UID: "03976141-148a-45d4-a7a0-acdd110d1e11"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:36:06 crc kubenswrapper[4919]: I0109 13:36:06.623750 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0de45222-183b-4c44-b26f-98238e65b4d5-config" (OuterVolumeSpecName: "config") pod "0de45222-183b-4c44-b26f-98238e65b4d5" (UID: "0de45222-183b-4c44-b26f-98238e65b4d5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:36:06 crc kubenswrapper[4919]: I0109 13:36:06.629618 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0de45222-183b-4c44-b26f-98238e65b4d5-kube-api-access-qt2r9" (OuterVolumeSpecName: "kube-api-access-qt2r9") pod "0de45222-183b-4c44-b26f-98238e65b4d5" (UID: "0de45222-183b-4c44-b26f-98238e65b4d5"). InnerVolumeSpecName "kube-api-access-qt2r9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:36:06 crc kubenswrapper[4919]: I0109 13:36:06.634366 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03976141-148a-45d4-a7a0-acdd110d1e11-kube-api-access-5xsqs" (OuterVolumeSpecName: "kube-api-access-5xsqs") pod "03976141-148a-45d4-a7a0-acdd110d1e11" (UID: "03976141-148a-45d4-a7a0-acdd110d1e11"). InnerVolumeSpecName "kube-api-access-5xsqs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:36:06 crc kubenswrapper[4919]: I0109 13:36:06.634378 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0de45222-183b-4c44-b26f-98238e65b4d5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0de45222-183b-4c44-b26f-98238e65b4d5" (UID: "0de45222-183b-4c44-b26f-98238e65b4d5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:36:06 crc kubenswrapper[4919]: I0109 13:36:06.641355 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03976141-148a-45d4-a7a0-acdd110d1e11-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "03976141-148a-45d4-a7a0-acdd110d1e11" (UID: "03976141-148a-45d4-a7a0-acdd110d1e11"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:36:06 crc kubenswrapper[4919]: I0109 13:36:06.723532 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qt2r9\" (UniqueName: \"kubernetes.io/projected/0de45222-183b-4c44-b26f-98238e65b4d5-kube-api-access-qt2r9\") on node \"crc\" DevicePath \"\"" Jan 09 13:36:06 crc kubenswrapper[4919]: I0109 13:36:06.723577 4919 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0de45222-183b-4c44-b26f-98238e65b4d5-config\") on node \"crc\" DevicePath \"\"" Jan 09 13:36:06 crc kubenswrapper[4919]: I0109 13:36:06.723599 4919 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03976141-148a-45d4-a7a0-acdd110d1e11-config\") on node \"crc\" DevicePath \"\"" Jan 09 13:36:06 crc kubenswrapper[4919]: I0109 13:36:06.723615 4919 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0de45222-183b-4c44-b26f-98238e65b4d5-client-ca\") on node \"crc\" DevicePath \"\"" Jan 09 13:36:06 crc kubenswrapper[4919]: I0109 13:36:06.723632 4919 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/03976141-148a-45d4-a7a0-acdd110d1e11-client-ca\") on node \"crc\" DevicePath \"\"" Jan 09 13:36:06 crc kubenswrapper[4919]: I0109 13:36:06.723646 4919 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0de45222-183b-4c44-b26f-98238e65b4d5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 13:36:06 crc kubenswrapper[4919]: I0109 13:36:06.723661 4919 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/03976141-148a-45d4-a7a0-acdd110d1e11-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 09 13:36:06 crc kubenswrapper[4919]: I0109 13:36:06.723676 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xsqs\" (UniqueName: \"kubernetes.io/projected/03976141-148a-45d4-a7a0-acdd110d1e11-kube-api-access-5xsqs\") on node \"crc\" DevicePath \"\"" Jan 09 13:36:06 crc kubenswrapper[4919]: I0109 13:36:06.723691 4919 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03976141-148a-45d4-a7a0-acdd110d1e11-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 13:36:06 crc kubenswrapper[4919]: I0109 13:36:06.826165 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-78lhl"] Jan 09 13:36:06 crc kubenswrapper[4919]: E0109 13:36:06.826420 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03976141-148a-45d4-a7a0-acdd110d1e11" containerName="controller-manager" Jan 09 13:36:06 crc kubenswrapper[4919]: I0109 13:36:06.826458 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="03976141-148a-45d4-a7a0-acdd110d1e11" containerName="controller-manager" Jan 09 13:36:06 crc kubenswrapper[4919]: E0109 13:36:06.826478 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0de45222-183b-4c44-b26f-98238e65b4d5" containerName="route-controller-manager" Jan 09 13:36:06 crc kubenswrapper[4919]: I0109 13:36:06.826484 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="0de45222-183b-4c44-b26f-98238e65b4d5" containerName="route-controller-manager" Jan 09 13:36:06 crc kubenswrapper[4919]: I0109 13:36:06.826624 4919 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="03976141-148a-45d4-a7a0-acdd110d1e11" containerName="controller-manager" Jan 09 13:36:06 crc kubenswrapper[4919]: I0109 13:36:06.826635 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="0de45222-183b-4c44-b26f-98238e65b4d5" containerName="route-controller-manager" Jan 09 13:36:06 crc kubenswrapper[4919]: I0109 13:36:06.827100 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-78lhl" Jan 09 13:36:06 crc kubenswrapper[4919]: I0109 13:36:06.850640 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-78lhl"] Jan 09 13:36:06 crc kubenswrapper[4919]: I0109 13:36:06.927712 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7781cbd1-cf5e-45fb-a3a4-83578b934acb-trusted-ca\") pod \"image-registry-66df7c8f76-78lhl\" (UID: \"7781cbd1-cf5e-45fb-a3a4-83578b934acb\") " pod="openshift-image-registry/image-registry-66df7c8f76-78lhl" Jan 09 13:36:06 crc kubenswrapper[4919]: I0109 13:36:06.928063 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7781cbd1-cf5e-45fb-a3a4-83578b934acb-ca-trust-extracted\") pod \"image-registry-66df7c8f76-78lhl\" (UID: \"7781cbd1-cf5e-45fb-a3a4-83578b934acb\") " pod="openshift-image-registry/image-registry-66df7c8f76-78lhl" Jan 09 13:36:06 crc kubenswrapper[4919]: I0109 13:36:06.928165 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7781cbd1-cf5e-45fb-a3a4-83578b934acb-bound-sa-token\") pod \"image-registry-66df7c8f76-78lhl\" (UID: \"7781cbd1-cf5e-45fb-a3a4-83578b934acb\") " pod="openshift-image-registry/image-registry-66df7c8f76-78lhl" Jan 09 13:36:06 crc kubenswrapper[4919]: I0109 13:36:06.928317 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7781cbd1-cf5e-45fb-a3a4-83578b934acb-registry-tls\") pod \"image-registry-66df7c8f76-78lhl\" (UID: \"7781cbd1-cf5e-45fb-a3a4-83578b934acb\") " pod="openshift-image-registry/image-registry-66df7c8f76-78lhl" Jan 09 13:36:06 crc kubenswrapper[4919]: I0109 13:36:06.928426 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7781cbd1-cf5e-45fb-a3a4-83578b934acb-installation-pull-secrets\") pod \"image-registry-66df7c8f76-78lhl\" (UID: \"7781cbd1-cf5e-45fb-a3a4-83578b934acb\") " pod="openshift-image-registry/image-registry-66df7c8f76-78lhl" Jan 09 13:36:06 crc kubenswrapper[4919]: I0109 13:36:06.928572 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9np9\" (UniqueName: \"kubernetes.io/projected/7781cbd1-cf5e-45fb-a3a4-83578b934acb-kube-api-access-x9np9\") pod \"image-registry-66df7c8f76-78lhl\" (UID: \"7781cbd1-cf5e-45fb-a3a4-83578b934acb\") " pod="openshift-image-registry/image-registry-66df7c8f76-78lhl" Jan 09 13:36:06 crc kubenswrapper[4919]: I0109 13:36:06.928619 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/7781cbd1-cf5e-45fb-a3a4-83578b934acb-registry-certificates\") pod \"image-registry-66df7c8f76-78lhl\" (UID: \"7781cbd1-cf5e-45fb-a3a4-83578b934acb\") " pod="openshift-image-registry/image-registry-66df7c8f76-78lhl" Jan 09 13:36:06 crc kubenswrapper[4919]: I0109 13:36:06.928668 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-78lhl\" (UID: \"7781cbd1-cf5e-45fb-a3a4-83578b934acb\") " pod="openshift-image-registry/image-registry-66df7c8f76-78lhl" Jan 09 13:36:06 crc kubenswrapper[4919]: I0109 13:36:06.950720 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-78lhl\" (UID: \"7781cbd1-cf5e-45fb-a3a4-83578b934acb\") " pod="openshift-image-registry/image-registry-66df7c8f76-78lhl" Jan 09 13:36:07 crc kubenswrapper[4919]: I0109 13:36:07.030460 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7781cbd1-cf5e-45fb-a3a4-83578b934acb-trusted-ca\") pod \"image-registry-66df7c8f76-78lhl\" (UID: \"7781cbd1-cf5e-45fb-a3a4-83578b934acb\") " pod="openshift-image-registry/image-registry-66df7c8f76-78lhl" Jan 09 13:36:07 crc kubenswrapper[4919]: I0109 13:36:07.030538 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7781cbd1-cf5e-45fb-a3a4-83578b934acb-bound-sa-token\") pod \"image-registry-66df7c8f76-78lhl\" (UID: \"7781cbd1-cf5e-45fb-a3a4-83578b934acb\") " pod="openshift-image-registry/image-registry-66df7c8f76-78lhl" Jan 09 13:36:07 crc kubenswrapper[4919]: I0109 13:36:07.030565 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7781cbd1-cf5e-45fb-a3a4-83578b934acb-ca-trust-extracted\") pod \"image-registry-66df7c8f76-78lhl\" (UID: \"7781cbd1-cf5e-45fb-a3a4-83578b934acb\") " pod="openshift-image-registry/image-registry-66df7c8f76-78lhl" Jan 09 13:36:07 crc kubenswrapper[4919]: I0109 13:36:07.030585 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7781cbd1-cf5e-45fb-a3a4-83578b934acb-registry-tls\") pod \"image-registry-66df7c8f76-78lhl\" (UID: \"7781cbd1-cf5e-45fb-a3a4-83578b934acb\") " pod="openshift-image-registry/image-registry-66df7c8f76-78lhl" Jan 09 13:36:07 crc kubenswrapper[4919]: I0109 13:36:07.030610 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7781cbd1-cf5e-45fb-a3a4-83578b934acb-installation-pull-secrets\") pod \"image-registry-66df7c8f76-78lhl\" (UID: \"7781cbd1-cf5e-45fb-a3a4-83578b934acb\") " pod="openshift-image-registry/image-registry-66df7c8f76-78lhl" Jan 09 13:36:07 crc kubenswrapper[4919]: I0109 13:36:07.030651 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9np9\" (UniqueName: \"kubernetes.io/projected/7781cbd1-cf5e-45fb-a3a4-83578b934acb-kube-api-access-x9np9\") pod \"image-registry-66df7c8f76-78lhl\" (UID: 
\"7781cbd1-cf5e-45fb-a3a4-83578b934acb\") " pod="openshift-image-registry/image-registry-66df7c8f76-78lhl" Jan 09 13:36:07 crc kubenswrapper[4919]: I0109 13:36:07.030676 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7781cbd1-cf5e-45fb-a3a4-83578b934acb-registry-certificates\") pod \"image-registry-66df7c8f76-78lhl\" (UID: \"7781cbd1-cf5e-45fb-a3a4-83578b934acb\") " pod="openshift-image-registry/image-registry-66df7c8f76-78lhl" Jan 09 13:36:07 crc kubenswrapper[4919]: I0109 13:36:07.031781 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7781cbd1-cf5e-45fb-a3a4-83578b934acb-ca-trust-extracted\") pod \"image-registry-66df7c8f76-78lhl\" (UID: \"7781cbd1-cf5e-45fb-a3a4-83578b934acb\") " pod="openshift-image-registry/image-registry-66df7c8f76-78lhl" Jan 09 13:36:07 crc kubenswrapper[4919]: I0109 13:36:07.032191 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7781cbd1-cf5e-45fb-a3a4-83578b934acb-registry-certificates\") pod \"image-registry-66df7c8f76-78lhl\" (UID: \"7781cbd1-cf5e-45fb-a3a4-83578b934acb\") " pod="openshift-image-registry/image-registry-66df7c8f76-78lhl" Jan 09 13:36:07 crc kubenswrapper[4919]: I0109 13:36:07.032558 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7781cbd1-cf5e-45fb-a3a4-83578b934acb-trusted-ca\") pod \"image-registry-66df7c8f76-78lhl\" (UID: \"7781cbd1-cf5e-45fb-a3a4-83578b934acb\") " pod="openshift-image-registry/image-registry-66df7c8f76-78lhl" Jan 09 13:36:07 crc kubenswrapper[4919]: I0109 13:36:07.035879 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7781cbd1-cf5e-45fb-a3a4-83578b934acb-registry-tls\") pod \"image-registry-66df7c8f76-78lhl\" (UID: \"7781cbd1-cf5e-45fb-a3a4-83578b934acb\") " pod="openshift-image-registry/image-registry-66df7c8f76-78lhl" Jan 09 13:36:07 crc kubenswrapper[4919]: I0109 13:36:07.036713 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7781cbd1-cf5e-45fb-a3a4-83578b934acb-installation-pull-secrets\") pod \"image-registry-66df7c8f76-78lhl\" (UID: \"7781cbd1-cf5e-45fb-a3a4-83578b934acb\") " pod="openshift-image-registry/image-registry-66df7c8f76-78lhl" Jan 09 13:36:07 crc kubenswrapper[4919]: I0109 13:36:07.052263 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7781cbd1-cf5e-45fb-a3a4-83578b934acb-bound-sa-token\") pod \"image-registry-66df7c8f76-78lhl\" (UID: \"7781cbd1-cf5e-45fb-a3a4-83578b934acb\") " pod="openshift-image-registry/image-registry-66df7c8f76-78lhl" Jan 09 13:36:07 crc kubenswrapper[4919]: I0109 13:36:07.053286 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9np9\" (UniqueName: \"kubernetes.io/projected/7781cbd1-cf5e-45fb-a3a4-83578b934acb-kube-api-access-x9np9\") pod \"image-registry-66df7c8f76-78lhl\" (UID: \"7781cbd1-cf5e-45fb-a3a4-83578b934acb\") " pod="openshift-image-registry/image-registry-66df7c8f76-78lhl" Jan 09 13:36:07 crc kubenswrapper[4919]: I0109 13:36:07.143287 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-78lhl" Jan 09 13:36:07 crc kubenswrapper[4919]: I0109 13:36:07.500876 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c8bb87477-5xswc" Jan 09 13:36:07 crc kubenswrapper[4919]: I0109 13:36:07.500943 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d8dccc797-qfx65" Jan 09 13:36:07 crc kubenswrapper[4919]: I0109 13:36:07.524251 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c8bb87477-5xswc"] Jan 09 13:36:07 crc kubenswrapper[4919]: I0109 13:36:07.529189 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c8bb87477-5xswc"] Jan 09 13:36:07 crc kubenswrapper[4919]: I0109 13:36:07.537658 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d8dccc797-qfx65"] Jan 09 13:36:07 crc kubenswrapper[4919]: I0109 13:36:07.541010 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-d8dccc797-qfx65"] Jan 09 13:36:07 crc kubenswrapper[4919]: I0109 13:36:07.568902 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-78lhl"] Jan 09 13:36:07 crc kubenswrapper[4919]: I0109 13:36:07.925709 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6d7d68875d-h29n8"] Jan 09 13:36:07 crc kubenswrapper[4919]: I0109 13:36:07.926663 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6d7d68875d-h29n8" Jan 09 13:36:07 crc kubenswrapper[4919]: I0109 13:36:07.929059 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 09 13:36:07 crc kubenswrapper[4919]: I0109 13:36:07.929314 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 09 13:36:07 crc kubenswrapper[4919]: I0109 13:36:07.929555 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 09 13:36:07 crc kubenswrapper[4919]: I0109 13:36:07.929916 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 09 13:36:07 crc kubenswrapper[4919]: I0109 13:36:07.930617 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 09 13:36:07 crc kubenswrapper[4919]: I0109 13:36:07.930673 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fb495f6db-fm5zq"] Jan 09 13:36:07 crc kubenswrapper[4919]: I0109 13:36:07.930997 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 09 13:36:07 crc kubenswrapper[4919]: I0109 13:36:07.931468 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6fb495f6db-fm5zq" Jan 09 13:36:07 crc kubenswrapper[4919]: I0109 13:36:07.936354 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 09 13:36:07 crc kubenswrapper[4919]: I0109 13:36:07.936747 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 09 13:36:07 crc kubenswrapper[4919]: I0109 13:36:07.937011 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 09 13:36:07 crc kubenswrapper[4919]: I0109 13:36:07.937278 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 09 13:36:07 crc kubenswrapper[4919]: I0109 13:36:07.937509 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 09 13:36:07 crc kubenswrapper[4919]: I0109 13:36:07.938522 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 09 13:36:07 crc kubenswrapper[4919]: I0109 13:36:07.940286 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 09 13:36:07 crc kubenswrapper[4919]: I0109 13:36:07.948737 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fb495f6db-fm5zq"] Jan 09 13:36:07 crc kubenswrapper[4919]: I0109 13:36:07.962116 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6d7d68875d-h29n8"] Jan 09 13:36:08 crc kubenswrapper[4919]: I0109 13:36:08.045121 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7f5e88b-28aa-4cac-acd5-1042b130c214-serving-cert\") pod \"route-controller-manager-6fb495f6db-fm5zq\" (UID: \"c7f5e88b-28aa-4cac-acd5-1042b130c214\") " pod="openshift-route-controller-manager/route-controller-manager-6fb495f6db-fm5zq" Jan 09 13:36:08 crc kubenswrapper[4919]: I0109 13:36:08.045430 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njrv4\" (UniqueName: \"kubernetes.io/projected/c7f5e88b-28aa-4cac-acd5-1042b130c214-kube-api-access-njrv4\") pod \"route-controller-manager-6fb495f6db-fm5zq\" (UID: \"c7f5e88b-28aa-4cac-acd5-1042b130c214\") " pod="openshift-route-controller-manager/route-controller-manager-6fb495f6db-fm5zq" Jan 09 13:36:08 crc kubenswrapper[4919]: I0109 13:36:08.045540 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5bbcfb59-5838-4195-bf72-d395b3750b52-client-ca\") pod \"controller-manager-6d7d68875d-h29n8\" (UID: \"5bbcfb59-5838-4195-bf72-d395b3750b52\") " pod="openshift-controller-manager/controller-manager-6d7d68875d-h29n8" Jan 09 13:36:08 crc kubenswrapper[4919]: I0109 13:36:08.045644 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7f5e88b-28aa-4cac-acd5-1042b130c214-config\") pod \"route-controller-manager-6fb495f6db-fm5zq\" (UID: \"c7f5e88b-28aa-4cac-acd5-1042b130c214\") " 
pod="openshift-route-controller-manager/route-controller-manager-6fb495f6db-fm5zq" Jan 09 13:36:08 crc kubenswrapper[4919]: I0109 13:36:08.045746 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5bbcfb59-5838-4195-bf72-d395b3750b52-proxy-ca-bundles\") pod \"controller-manager-6d7d68875d-h29n8\" (UID: \"5bbcfb59-5838-4195-bf72-d395b3750b52\") " pod="openshift-controller-manager/controller-manager-6d7d68875d-h29n8" Jan 09 13:36:08 crc kubenswrapper[4919]: I0109 13:36:08.045829 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8jbn\" (UniqueName: \"kubernetes.io/projected/5bbcfb59-5838-4195-bf72-d395b3750b52-kube-api-access-h8jbn\") pod \"controller-manager-6d7d68875d-h29n8\" (UID: \"5bbcfb59-5838-4195-bf72-d395b3750b52\") " pod="openshift-controller-manager/controller-manager-6d7d68875d-h29n8" Jan 09 13:36:08 crc kubenswrapper[4919]: I0109 13:36:08.045905 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bbcfb59-5838-4195-bf72-d395b3750b52-config\") pod \"controller-manager-6d7d68875d-h29n8\" (UID: \"5bbcfb59-5838-4195-bf72-d395b3750b52\") " pod="openshift-controller-manager/controller-manager-6d7d68875d-h29n8" Jan 09 13:36:08 crc kubenswrapper[4919]: I0109 13:36:08.045979 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c7f5e88b-28aa-4cac-acd5-1042b130c214-client-ca\") pod \"route-controller-manager-6fb495f6db-fm5zq\" (UID: \"c7f5e88b-28aa-4cac-acd5-1042b130c214\") " pod="openshift-route-controller-manager/route-controller-manager-6fb495f6db-fm5zq" Jan 09 13:36:08 crc kubenswrapper[4919]: I0109 13:36:08.046072 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bbcfb59-5838-4195-bf72-d395b3750b52-serving-cert\") pod \"controller-manager-6d7d68875d-h29n8\" (UID: \"5bbcfb59-5838-4195-bf72-d395b3750b52\") " pod="openshift-controller-manager/controller-manager-6d7d68875d-h29n8" Jan 09 13:36:08 crc kubenswrapper[4919]: I0109 13:36:08.147417 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5bbcfb59-5838-4195-bf72-d395b3750b52-client-ca\") pod \"controller-manager-6d7d68875d-h29n8\" (UID: \"5bbcfb59-5838-4195-bf72-d395b3750b52\") " pod="openshift-controller-manager/controller-manager-6d7d68875d-h29n8" Jan 09 13:36:08 crc kubenswrapper[4919]: I0109 13:36:08.147466 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7f5e88b-28aa-4cac-acd5-1042b130c214-config\") pod \"route-controller-manager-6fb495f6db-fm5zq\" (UID: \"c7f5e88b-28aa-4cac-acd5-1042b130c214\") " pod="openshift-route-controller-manager/route-controller-manager-6fb495f6db-fm5zq" Jan 09 13:36:08 crc kubenswrapper[4919]: I0109 13:36:08.147493 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5bbcfb59-5838-4195-bf72-d395b3750b52-proxy-ca-bundles\") pod \"controller-manager-6d7d68875d-h29n8\" (UID: \"5bbcfb59-5838-4195-bf72-d395b3750b52\") " pod="openshift-controller-manager/controller-manager-6d7d68875d-h29n8" 
Jan 09 13:36:08 crc kubenswrapper[4919]: I0109 13:36:08.147514 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8jbn\" (UniqueName: \"kubernetes.io/projected/5bbcfb59-5838-4195-bf72-d395b3750b52-kube-api-access-h8jbn\") pod \"controller-manager-6d7d68875d-h29n8\" (UID: \"5bbcfb59-5838-4195-bf72-d395b3750b52\") " pod="openshift-controller-manager/controller-manager-6d7d68875d-h29n8"
Jan 09 13:36:08 crc kubenswrapper[4919]: I0109 13:36:08.147536 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bbcfb59-5838-4195-bf72-d395b3750b52-config\") pod \"controller-manager-6d7d68875d-h29n8\" (UID: \"5bbcfb59-5838-4195-bf72-d395b3750b52\") " pod="openshift-controller-manager/controller-manager-6d7d68875d-h29n8"
Jan 09 13:36:08 crc kubenswrapper[4919]: I0109 13:36:08.147556 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c7f5e88b-28aa-4cac-acd5-1042b130c214-client-ca\") pod \"route-controller-manager-6fb495f6db-fm5zq\" (UID: \"c7f5e88b-28aa-4cac-acd5-1042b130c214\") " pod="openshift-route-controller-manager/route-controller-manager-6fb495f6db-fm5zq"
Jan 09 13:36:08 crc kubenswrapper[4919]: I0109 13:36:08.147576 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bbcfb59-5838-4195-bf72-d395b3750b52-serving-cert\") pod \"controller-manager-6d7d68875d-h29n8\" (UID: \"5bbcfb59-5838-4195-bf72-d395b3750b52\") " pod="openshift-controller-manager/controller-manager-6d7d68875d-h29n8"
Jan 09 13:36:08 crc kubenswrapper[4919]: I0109 13:36:08.147613 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7f5e88b-28aa-4cac-acd5-1042b130c214-serving-cert\") pod \"route-controller-manager-6fb495f6db-fm5zq\" (UID: \"c7f5e88b-28aa-4cac-acd5-1042b130c214\") " pod="openshift-route-controller-manager/route-controller-manager-6fb495f6db-fm5zq"
Jan 09 13:36:08 crc kubenswrapper[4919]: I0109 13:36:08.147633 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njrv4\" (UniqueName: \"kubernetes.io/projected/c7f5e88b-28aa-4cac-acd5-1042b130c214-kube-api-access-njrv4\") pod \"route-controller-manager-6fb495f6db-fm5zq\" (UID: \"c7f5e88b-28aa-4cac-acd5-1042b130c214\") " pod="openshift-route-controller-manager/route-controller-manager-6fb495f6db-fm5zq"
Jan 09 13:36:08 crc kubenswrapper[4919]: I0109 13:36:08.148389 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5bbcfb59-5838-4195-bf72-d395b3750b52-client-ca\") pod \"controller-manager-6d7d68875d-h29n8\" (UID: \"5bbcfb59-5838-4195-bf72-d395b3750b52\") " pod="openshift-controller-manager/controller-manager-6d7d68875d-h29n8"
Jan 09 13:36:08 crc kubenswrapper[4919]: I0109 13:36:08.148822 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5bbcfb59-5838-4195-bf72-d395b3750b52-proxy-ca-bundles\") pod \"controller-manager-6d7d68875d-h29n8\" (UID: \"5bbcfb59-5838-4195-bf72-d395b3750b52\") " pod="openshift-controller-manager/controller-manager-6d7d68875d-h29n8"
Jan 09 13:36:08 crc kubenswrapper[4919]: I0109 13:36:08.149026 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c7f5e88b-28aa-4cac-acd5-1042b130c214-client-ca\") pod \"route-controller-manager-6fb495f6db-fm5zq\" (UID: \"c7f5e88b-28aa-4cac-acd5-1042b130c214\") " pod="openshift-route-controller-manager/route-controller-manager-6fb495f6db-fm5zq"
Jan 09 13:36:08 crc kubenswrapper[4919]: I0109 13:36:08.149332 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7f5e88b-28aa-4cac-acd5-1042b130c214-config\") pod \"route-controller-manager-6fb495f6db-fm5zq\" (UID: \"c7f5e88b-28aa-4cac-acd5-1042b130c214\") " pod="openshift-route-controller-manager/route-controller-manager-6fb495f6db-fm5zq"
Jan 09 13:36:08 crc kubenswrapper[4919]: I0109 13:36:08.150279 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bbcfb59-5838-4195-bf72-d395b3750b52-config\") pod \"controller-manager-6d7d68875d-h29n8\" (UID: \"5bbcfb59-5838-4195-bf72-d395b3750b52\") " pod="openshift-controller-manager/controller-manager-6d7d68875d-h29n8"
Jan 09 13:36:08 crc kubenswrapper[4919]: I0109 13:36:08.157360 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7f5e88b-28aa-4cac-acd5-1042b130c214-serving-cert\") pod \"route-controller-manager-6fb495f6db-fm5zq\" (UID: \"c7f5e88b-28aa-4cac-acd5-1042b130c214\") " pod="openshift-route-controller-manager/route-controller-manager-6fb495f6db-fm5zq"
Jan 09 13:36:08 crc kubenswrapper[4919]: I0109 13:36:08.167620 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njrv4\" (UniqueName: \"kubernetes.io/projected/c7f5e88b-28aa-4cac-acd5-1042b130c214-kube-api-access-njrv4\") pod \"route-controller-manager-6fb495f6db-fm5zq\" (UID: \"c7f5e88b-28aa-4cac-acd5-1042b130c214\") " pod="openshift-route-controller-manager/route-controller-manager-6fb495f6db-fm5zq"
Jan 09 13:36:08 crc kubenswrapper[4919]: I0109 13:36:08.172846 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bbcfb59-5838-4195-bf72-d395b3750b52-serving-cert\") pod \"controller-manager-6d7d68875d-h29n8\" (UID: \"5bbcfb59-5838-4195-bf72-d395b3750b52\") " pod="openshift-controller-manager/controller-manager-6d7d68875d-h29n8"
Jan 09 13:36:08 crc kubenswrapper[4919]: I0109 13:36:08.173596 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8jbn\" (UniqueName: \"kubernetes.io/projected/5bbcfb59-5838-4195-bf72-d395b3750b52-kube-api-access-h8jbn\") pod \"controller-manager-6d7d68875d-h29n8\" (UID: \"5bbcfb59-5838-4195-bf72-d395b3750b52\") " pod="openshift-controller-manager/controller-manager-6d7d68875d-h29n8"
Jan 09 13:36:08 crc kubenswrapper[4919]: I0109 13:36:08.255890 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6d7d68875d-h29n8"
Jan 09 13:36:08 crc kubenswrapper[4919]: I0109 13:36:08.268241 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6fb495f6db-fm5zq"
Jan 09 13:36:08 crc kubenswrapper[4919]: I0109 13:36:08.455990 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6d7d68875d-h29n8"]
Jan 09 13:36:08 crc kubenswrapper[4919]: I0109 13:36:08.508415 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-78lhl" event={"ID":"7781cbd1-cf5e-45fb-a3a4-83578b934acb","Type":"ContainerStarted","Data":"7b76a7cf273cc957385b11303a97e6e30dab6f0e716c8684acbf26bc1c219993"}
Jan 09 13:36:08 crc kubenswrapper[4919]: I0109 13:36:08.508480 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-78lhl" event={"ID":"7781cbd1-cf5e-45fb-a3a4-83578b934acb","Type":"ContainerStarted","Data":"0103c2ac6745c029fe560773b6685046bd36a3d5158172812e557a282463cfb4"}
Jan 09 13:36:08 crc kubenswrapper[4919]: I0109 13:36:08.508518 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-78lhl"
Jan 09 13:36:08 crc kubenswrapper[4919]: I0109 13:36:08.509654 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6d7d68875d-h29n8" event={"ID":"5bbcfb59-5838-4195-bf72-d395b3750b52","Type":"ContainerStarted","Data":"a90ff4829a0b628bb03337c965307d28549cc57dc680e8f3ec2b675095aba8b4"}
Jan 09 13:36:08 crc kubenswrapper[4919]: I0109 13:36:08.726362 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-78lhl" podStartSLOduration=2.726337391 podStartE2EDuration="2.726337391s" podCreationTimestamp="2026-01-09 13:36:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:36:08.52599097 +0000 UTC m=+348.073830490" watchObservedRunningTime="2026-01-09 13:36:08.726337391 +0000 UTC m=+348.274176861"
Jan 09 13:36:08 crc kubenswrapper[4919]: I0109 13:36:08.730179 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fb495f6db-fm5zq"]
Jan 09 13:36:08 crc kubenswrapper[4919]: W0109 13:36:08.736675 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc7f5e88b_28aa_4cac_acd5_1042b130c214.slice/crio-8ddb1409287a677a7ee38af6fd1106fdb737bc254bfbc79288a1bdeb61c31ccd WatchSource:0}: Error finding container 8ddb1409287a677a7ee38af6fd1106fdb737bc254bfbc79288a1bdeb61c31ccd: Status 404 returned error can't find the container with id 8ddb1409287a677a7ee38af6fd1106fdb737bc254bfbc79288a1bdeb61c31ccd
Jan 09 13:36:08 crc kubenswrapper[4919]: I0109 13:36:08.760445 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03976141-148a-45d4-a7a0-acdd110d1e11" path="/var/lib/kubelet/pods/03976141-148a-45d4-a7a0-acdd110d1e11/volumes"
Jan 09 13:36:08 crc kubenswrapper[4919]: I0109 13:36:08.762109 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0de45222-183b-4c44-b26f-98238e65b4d5" path="/var/lib/kubelet/pods/0de45222-183b-4c44-b26f-98238e65b4d5/volumes"
Jan 09 13:36:09 crc kubenswrapper[4919]: I0109 13:36:09.516322 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6fb495f6db-fm5zq" event={"ID":"c7f5e88b-28aa-4cac-acd5-1042b130c214","Type":"ContainerStarted","Data":"4021eaf8f1704d49e8c4d5048ac7a36757cee946e24370d089768f7b36bca873"}
Jan 09 13:36:09 crc kubenswrapper[4919]: I0109 13:36:09.518020 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6fb495f6db-fm5zq"
Jan 09 13:36:09 crc kubenswrapper[4919]: I0109 13:36:09.518337 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6fb495f6db-fm5zq" event={"ID":"c7f5e88b-28aa-4cac-acd5-1042b130c214","Type":"ContainerStarted","Data":"8ddb1409287a677a7ee38af6fd1106fdb737bc254bfbc79288a1bdeb61c31ccd"}
Jan 09 13:36:09 crc kubenswrapper[4919]: I0109 13:36:09.519731 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6d7d68875d-h29n8" event={"ID":"5bbcfb59-5838-4195-bf72-d395b3750b52","Type":"ContainerStarted","Data":"5d0f58afb1028d06f9d0e286da6435444ff68b78c946e7cc084dcfd5bb0c4771"}
Jan 09 13:36:09 crc kubenswrapper[4919]: I0109 13:36:09.519794 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6d7d68875d-h29n8"
Jan 09 13:36:09 crc kubenswrapper[4919]: I0109 13:36:09.522396 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6fb495f6db-fm5zq"
Jan 09 13:36:09 crc kubenswrapper[4919]: I0109 13:36:09.524225 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6d7d68875d-h29n8"
Jan 09 13:36:09 crc kubenswrapper[4919]: I0109 13:36:09.538245 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6fb495f6db-fm5zq" podStartSLOduration=3.5381915299999998 podStartE2EDuration="3.53819153s" podCreationTimestamp="2026-01-09 13:36:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:36:09.532108297 +0000 UTC m=+349.079947747" watchObservedRunningTime="2026-01-09 13:36:09.53819153 +0000 UTC m=+349.086031000"
Jan 09 13:36:09 crc kubenswrapper[4919]: I0109 13:36:09.559636 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6d7d68875d-h29n8" podStartSLOduration=3.559602638 podStartE2EDuration="3.559602638s" podCreationTimestamp="2026-01-09 13:36:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:36:09.558009448 +0000 UTC m=+349.105848928" watchObservedRunningTime="2026-01-09 13:36:09.559602638 +0000 UTC m=+349.107442088"
Jan 09 13:36:21 crc kubenswrapper[4919]: I0109 13:36:21.246539 4919 patch_prober.go:28] interesting pod/machine-config-daemon-9m5lv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 09 13:36:21 crc kubenswrapper[4919]: I0109 13:36:21.248476 4919 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 09 13:36:24 crc kubenswrapper[4919]: I0109 13:36:24.148918 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tf6wk"]
Jan 09 13:36:24 crc kubenswrapper[4919]: I0109 13:36:24.149712 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tf6wk" podUID="18b90207-0827-4db3-b0ca-e622b58ed504" containerName="registry-server" containerID="cri-o://bca9b19484feb458da710cb66e1f5719f17ae62b3f275870e52e3a5a465fbde7" gracePeriod=30
Jan 09 13:36:24 crc kubenswrapper[4919]: I0109 13:36:24.155702 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xvr9v"]
Jan 09 13:36:24 crc kubenswrapper[4919]: I0109 13:36:24.161056 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xvr9v" podUID="691c6d86-b150-4576-872d-004862dcbd22" containerName="registry-server" containerID="cri-o://5aa0e405a0e9a962dc34bb62238a982e277577db22fe26419780420e7db19630" gracePeriod=30
Jan 09 13:36:24 crc kubenswrapper[4919]: I0109 13:36:24.164654 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-66425"]
Jan 09 13:36:24 crc kubenswrapper[4919]: I0109 13:36:24.166263 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-66425" podUID="73f4afd2-691f-4749-b361-d99c9482a35b" containerName="marketplace-operator" containerID="cri-o://8e4a7f4c5b308d4576d04e5760e2b30965b715f55a48bcc01dcef6902f526f93" gracePeriod=30
Jan 09 13:36:24 crc kubenswrapper[4919]: I0109 13:36:24.173899 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bj7bg"]
Jan 09 13:36:24 crc kubenswrapper[4919]: I0109 13:36:24.174192 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bj7bg" podUID="3bdb482c-0d44-43b3-b74f-d0ba01a861b0" containerName="registry-server" containerID="cri-o://ce894f8334796fdbd85d158a44a057da3822ed76d9d4f803b57cd61d80aa3072" gracePeriod=30
Jan 09 13:36:24 crc kubenswrapper[4919]: I0109 13:36:24.183752 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qx45q"]
Jan 09 13:36:24 crc kubenswrapper[4919]: I0109 13:36:24.184061 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qx45q" podUID="1ce56338-b322-46a4-b02c-2ae2b1bb5149" containerName="registry-server" containerID="cri-o://20569914b88b8dffb208f8d743645f26ec49cb7f5ad5daf956087ce43e69dc76" gracePeriod=30
Jan 09 13:36:24 crc kubenswrapper[4919]: I0109 13:36:24.198251 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-46q7s"]
Jan 09 13:36:24 crc kubenswrapper[4919]: I0109 13:36:24.199470 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-46q7s" Jan 09 13:36:24 crc kubenswrapper[4919]: I0109 13:36:24.213167 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-46q7s"] Jan 09 13:36:24 crc kubenswrapper[4919]: I0109 13:36:24.298316 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c1290e54-d4c8-4911-a121-762fffa39a66-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-46q7s\" (UID: \"c1290e54-d4c8-4911-a121-762fffa39a66\") " pod="openshift-marketplace/marketplace-operator-79b997595-46q7s" Jan 09 13:36:24 crc kubenswrapper[4919]: I0109 13:36:24.298437 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c1290e54-d4c8-4911-a121-762fffa39a66-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-46q7s\" (UID: \"c1290e54-d4c8-4911-a121-762fffa39a66\") " pod="openshift-marketplace/marketplace-operator-79b997595-46q7s" Jan 09 13:36:24 crc kubenswrapper[4919]: I0109 13:36:24.298467 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs2r6\" (UniqueName: \"kubernetes.io/projected/c1290e54-d4c8-4911-a121-762fffa39a66-kube-api-access-rs2r6\") pod \"marketplace-operator-79b997595-46q7s\" (UID: \"c1290e54-d4c8-4911-a121-762fffa39a66\") " pod="openshift-marketplace/marketplace-operator-79b997595-46q7s" Jan 09 13:36:24 crc kubenswrapper[4919]: I0109 13:36:24.399294 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rs2r6\" (UniqueName: \"kubernetes.io/projected/c1290e54-d4c8-4911-a121-762fffa39a66-kube-api-access-rs2r6\") pod \"marketplace-operator-79b997595-46q7s\" (UID: \"c1290e54-d4c8-4911-a121-762fffa39a66\") " pod="openshift-marketplace/marketplace-operator-79b997595-46q7s" Jan 09 13:36:24 crc kubenswrapper[4919]: I0109 13:36:24.399867 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c1290e54-d4c8-4911-a121-762fffa39a66-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-46q7s\" (UID: \"c1290e54-d4c8-4911-a121-762fffa39a66\") " pod="openshift-marketplace/marketplace-operator-79b997595-46q7s" Jan 09 13:36:24 crc kubenswrapper[4919]: I0109 13:36:24.400063 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c1290e54-d4c8-4911-a121-762fffa39a66-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-46q7s\" (UID: \"c1290e54-d4c8-4911-a121-762fffa39a66\") " pod="openshift-marketplace/marketplace-operator-79b997595-46q7s" Jan 09 13:36:24 crc kubenswrapper[4919]: I0109 13:36:24.401429 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c1290e54-d4c8-4911-a121-762fffa39a66-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-46q7s\" (UID: \"c1290e54-d4c8-4911-a121-762fffa39a66\") " pod="openshift-marketplace/marketplace-operator-79b997595-46q7s" Jan 09 13:36:24 crc kubenswrapper[4919]: I0109 13:36:24.416846 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/c1290e54-d4c8-4911-a121-762fffa39a66-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-46q7s\" (UID: \"c1290e54-d4c8-4911-a121-762fffa39a66\") " pod="openshift-marketplace/marketplace-operator-79b997595-46q7s" Jan 09 13:36:24 crc kubenswrapper[4919]: I0109 13:36:24.419464 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rs2r6\" (UniqueName: \"kubernetes.io/projected/c1290e54-d4c8-4911-a121-762fffa39a66-kube-api-access-rs2r6\") pod \"marketplace-operator-79b997595-46q7s\" (UID: \"c1290e54-d4c8-4911-a121-762fffa39a66\") " pod="openshift-marketplace/marketplace-operator-79b997595-46q7s" Jan 09 13:36:24 crc kubenswrapper[4919]: I0109 13:36:24.511193 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-46q7s" Jan 09 13:36:24 crc kubenswrapper[4919]: I0109 13:36:24.616573 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tf6wk" Jan 09 13:36:24 crc kubenswrapper[4919]: I0109 13:36:24.634241 4919 generic.go:334] "Generic (PLEG): container finished" podID="691c6d86-b150-4576-872d-004862dcbd22" containerID="5aa0e405a0e9a962dc34bb62238a982e277577db22fe26419780420e7db19630" exitCode=0 Jan 09 13:36:24 crc kubenswrapper[4919]: I0109 13:36:24.634277 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xvr9v" event={"ID":"691c6d86-b150-4576-872d-004862dcbd22","Type":"ContainerDied","Data":"5aa0e405a0e9a962dc34bb62238a982e277577db22fe26419780420e7db19630"} Jan 09 13:36:24 crc kubenswrapper[4919]: I0109 13:36:24.661560 4919 generic.go:334] "Generic (PLEG): container finished" podID="3bdb482c-0d44-43b3-b74f-d0ba01a861b0" containerID="ce894f8334796fdbd85d158a44a057da3822ed76d9d4f803b57cd61d80aa3072" exitCode=0 Jan 09 13:36:24 crc kubenswrapper[4919]: I0109 13:36:24.661633 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bj7bg" event={"ID":"3bdb482c-0d44-43b3-b74f-d0ba01a861b0","Type":"ContainerDied","Data":"ce894f8334796fdbd85d158a44a057da3822ed76d9d4f803b57cd61d80aa3072"} Jan 09 13:36:24 crc kubenswrapper[4919]: I0109 13:36:24.694490 4919 generic.go:334] "Generic (PLEG): container finished" podID="73f4afd2-691f-4749-b361-d99c9482a35b" containerID="8e4a7f4c5b308d4576d04e5760e2b30965b715f55a48bcc01dcef6902f526f93" exitCode=0 Jan 09 13:36:24 crc kubenswrapper[4919]: I0109 13:36:24.694588 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-66425" event={"ID":"73f4afd2-691f-4749-b361-d99c9482a35b","Type":"ContainerDied","Data":"8e4a7f4c5b308d4576d04e5760e2b30965b715f55a48bcc01dcef6902f526f93"} Jan 09 13:36:24 crc kubenswrapper[4919]: I0109 13:36:24.694624 4919 scope.go:117] "RemoveContainer" containerID="e1a0c27c14757895b9f45718d0f9ff65a5755adbd5a99ea7fb4ae689244a039d" Jan 09 13:36:24 crc kubenswrapper[4919]: I0109 13:36:24.703472 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18b90207-0827-4db3-b0ca-e622b58ed504-catalog-content\") pod \"18b90207-0827-4db3-b0ca-e622b58ed504\" (UID: \"18b90207-0827-4db3-b0ca-e622b58ed504\") " Jan 09 13:36:24 crc kubenswrapper[4919]: I0109 13:36:24.703539 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/18b90207-0827-4db3-b0ca-e622b58ed504-utilities\") pod \"18b90207-0827-4db3-b0ca-e622b58ed504\" (UID: \"18b90207-0827-4db3-b0ca-e622b58ed504\") " Jan 09 13:36:24 crc kubenswrapper[4919]: I0109 13:36:24.703606 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bw6nc\" (UniqueName: \"kubernetes.io/projected/18b90207-0827-4db3-b0ca-e622b58ed504-kube-api-access-bw6nc\") pod \"18b90207-0827-4db3-b0ca-e622b58ed504\" (UID: \"18b90207-0827-4db3-b0ca-e622b58ed504\") " Jan 09 13:36:24 crc kubenswrapper[4919]: I0109 13:36:24.705113 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18b90207-0827-4db3-b0ca-e622b58ed504-utilities" (OuterVolumeSpecName: "utilities") pod "18b90207-0827-4db3-b0ca-e622b58ed504" (UID: "18b90207-0827-4db3-b0ca-e622b58ed504"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:36:24 crc kubenswrapper[4919]: I0109 13:36:24.710481 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18b90207-0827-4db3-b0ca-e622b58ed504-kube-api-access-bw6nc" (OuterVolumeSpecName: "kube-api-access-bw6nc") pod "18b90207-0827-4db3-b0ca-e622b58ed504" (UID: "18b90207-0827-4db3-b0ca-e622b58ed504"). InnerVolumeSpecName "kube-api-access-bw6nc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:36:24 crc kubenswrapper[4919]: I0109 13:36:24.723428 4919 generic.go:334] "Generic (PLEG): container finished" podID="18b90207-0827-4db3-b0ca-e622b58ed504" containerID="bca9b19484feb458da710cb66e1f5719f17ae62b3f275870e52e3a5a465fbde7" exitCode=0 Jan 09 13:36:24 crc kubenswrapper[4919]: I0109 13:36:24.723473 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tf6wk" event={"ID":"18b90207-0827-4db3-b0ca-e622b58ed504","Type":"ContainerDied","Data":"bca9b19484feb458da710cb66e1f5719f17ae62b3f275870e52e3a5a465fbde7"} Jan 09 13:36:24 crc kubenswrapper[4919]: I0109 13:36:24.723520 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tf6wk" Jan 09 13:36:24 crc kubenswrapper[4919]: I0109 13:36:24.723519 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tf6wk" event={"ID":"18b90207-0827-4db3-b0ca-e622b58ed504","Type":"ContainerDied","Data":"0081ef18fd339fc3467d7a8da728dd0d69676520a0646cb80dbf43293864d1dc"} Jan 09 13:36:24 crc kubenswrapper[4919]: I0109 13:36:24.786883 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18b90207-0827-4db3-b0ca-e622b58ed504-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "18b90207-0827-4db3-b0ca-e622b58ed504" (UID: "18b90207-0827-4db3-b0ca-e622b58ed504"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:36:24 crc kubenswrapper[4919]: I0109 13:36:24.804739 4919 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18b90207-0827-4db3-b0ca-e622b58ed504-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 13:36:24 crc kubenswrapper[4919]: I0109 13:36:24.804780 4919 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18b90207-0827-4db3-b0ca-e622b58ed504-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 13:36:24 crc kubenswrapper[4919]: I0109 13:36:24.804797 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bw6nc\" (UniqueName: \"kubernetes.io/projected/18b90207-0827-4db3-b0ca-e622b58ed504-kube-api-access-bw6nc\") on node \"crc\" DevicePath \"\"" Jan 09 13:36:24 crc kubenswrapper[4919]: I0109 13:36:24.900845 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xvr9v" Jan 09 13:36:24 crc kubenswrapper[4919]: I0109 13:36:24.981361 4919 scope.go:117] "RemoveContainer" containerID="bca9b19484feb458da710cb66e1f5719f17ae62b3f275870e52e3a5a465fbde7" Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.004490 4919 scope.go:117] "RemoveContainer" containerID="ba66c717efef0f74a13774a8cd8d5f615dd5caf50e19da6c10c5c98de9faa3f2" Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.007137 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/691c6d86-b150-4576-872d-004862dcbd22-utilities\") pod \"691c6d86-b150-4576-872d-004862dcbd22\" (UID: \"691c6d86-b150-4576-872d-004862dcbd22\") " Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.007229 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4x6wj\" (UniqueName: \"kubernetes.io/projected/691c6d86-b150-4576-872d-004862dcbd22-kube-api-access-4x6wj\") pod \"691c6d86-b150-4576-872d-004862dcbd22\" (UID: \"691c6d86-b150-4576-872d-004862dcbd22\") " Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.007314 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/691c6d86-b150-4576-872d-004862dcbd22-catalog-content\") pod \"691c6d86-b150-4576-872d-004862dcbd22\" (UID: \"691c6d86-b150-4576-872d-004862dcbd22\") " Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.008661 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/691c6d86-b150-4576-872d-004862dcbd22-utilities" (OuterVolumeSpecName: "utilities") pod "691c6d86-b150-4576-872d-004862dcbd22" (UID: "691c6d86-b150-4576-872d-004862dcbd22"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.015378 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/691c6d86-b150-4576-872d-004862dcbd22-kube-api-access-4x6wj" (OuterVolumeSpecName: "kube-api-access-4x6wj") pod "691c6d86-b150-4576-872d-004862dcbd22" (UID: "691c6d86-b150-4576-872d-004862dcbd22"). InnerVolumeSpecName "kube-api-access-4x6wj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.027701 4919 scope.go:117] "RemoveContainer" containerID="cdd367965aaf5eaec588265b2955359992c1848f7c6d6daa152fe5101fbf3980" Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.048115 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-66425" Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.064484 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bj7bg" Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.065382 4919 scope.go:117] "RemoveContainer" containerID="bca9b19484feb458da710cb66e1f5719f17ae62b3f275870e52e3a5a465fbde7" Jan 09 13:36:25 crc kubenswrapper[4919]: E0109 13:36:25.065737 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bca9b19484feb458da710cb66e1f5719f17ae62b3f275870e52e3a5a465fbde7\": container with ID starting with bca9b19484feb458da710cb66e1f5719f17ae62b3f275870e52e3a5a465fbde7 not found: ID does not exist" containerID="bca9b19484feb458da710cb66e1f5719f17ae62b3f275870e52e3a5a465fbde7" Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.065771 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bca9b19484feb458da710cb66e1f5719f17ae62b3f275870e52e3a5a465fbde7"} err="failed to get container status \"bca9b19484feb458da710cb66e1f5719f17ae62b3f275870e52e3a5a465fbde7\": rpc error: code = NotFound desc = could not find container \"bca9b19484feb458da710cb66e1f5719f17ae62b3f275870e52e3a5a465fbde7\": container with ID starting with bca9b19484feb458da710cb66e1f5719f17ae62b3f275870e52e3a5a465fbde7 not found: ID does not exist" Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.065793 4919 scope.go:117] "RemoveContainer" containerID="ba66c717efef0f74a13774a8cd8d5f615dd5caf50e19da6c10c5c98de9faa3f2" Jan 09 13:36:25 crc kubenswrapper[4919]: E0109 13:36:25.066108 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba66c717efef0f74a13774a8cd8d5f615dd5caf50e19da6c10c5c98de9faa3f2\": container with ID starting with ba66c717efef0f74a13774a8cd8d5f615dd5caf50e19da6c10c5c98de9faa3f2 not found: ID does not exist" containerID="ba66c717efef0f74a13774a8cd8d5f615dd5caf50e19da6c10c5c98de9faa3f2" Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.066136 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba66c717efef0f74a13774a8cd8d5f615dd5caf50e19da6c10c5c98de9faa3f2"} err="failed to get container status \"ba66c717efef0f74a13774a8cd8d5f615dd5caf50e19da6c10c5c98de9faa3f2\": rpc error: code = NotFound desc = could not find container \"ba66c717efef0f74a13774a8cd8d5f615dd5caf50e19da6c10c5c98de9faa3f2\": container with ID starting with ba66c717efef0f74a13774a8cd8d5f615dd5caf50e19da6c10c5c98de9faa3f2 not found: ID does not exist" Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.066154 4919 scope.go:117] "RemoveContainer" containerID="cdd367965aaf5eaec588265b2955359992c1848f7c6d6daa152fe5101fbf3980" Jan 09 13:36:25 crc kubenswrapper[4919]: E0109 13:36:25.066430 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdd367965aaf5eaec588265b2955359992c1848f7c6d6daa152fe5101fbf3980\": 
container with ID starting with cdd367965aaf5eaec588265b2955359992c1848f7c6d6daa152fe5101fbf3980 not found: ID does not exist" containerID="cdd367965aaf5eaec588265b2955359992c1848f7c6d6daa152fe5101fbf3980" Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.066953 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdd367965aaf5eaec588265b2955359992c1848f7c6d6daa152fe5101fbf3980"} err="failed to get container status \"cdd367965aaf5eaec588265b2955359992c1848f7c6d6daa152fe5101fbf3980\": rpc error: code = NotFound desc = could not find container \"cdd367965aaf5eaec588265b2955359992c1848f7c6d6daa152fe5101fbf3980\": container with ID starting with cdd367965aaf5eaec588265b2955359992c1848f7c6d6daa152fe5101fbf3980 not found: ID does not exist" Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.067991 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tf6wk"] Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.091054 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tf6wk"] Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.104672 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/691c6d86-b150-4576-872d-004862dcbd22-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "691c6d86-b150-4576-872d-004862dcbd22" (UID: "691c6d86-b150-4576-872d-004862dcbd22"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.109680 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73f4afd2-691f-4749-b361-d99c9482a35b-marketplace-trusted-ca\") pod \"73f4afd2-691f-4749-b361-d99c9482a35b\" (UID: \"73f4afd2-691f-4749-b361-d99c9482a35b\") " Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.109799 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/73f4afd2-691f-4749-b361-d99c9482a35b-marketplace-operator-metrics\") pod \"73f4afd2-691f-4749-b361-d99c9482a35b\" (UID: \"73f4afd2-691f-4749-b361-d99c9482a35b\") " Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.109865 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4h7gm\" (UniqueName: \"kubernetes.io/projected/73f4afd2-691f-4749-b361-d99c9482a35b-kube-api-access-4h7gm\") pod \"73f4afd2-691f-4749-b361-d99c9482a35b\" (UID: \"73f4afd2-691f-4749-b361-d99c9482a35b\") " Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.110323 4919 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/691c6d86-b150-4576-872d-004862dcbd22-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.110349 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4x6wj\" (UniqueName: \"kubernetes.io/projected/691c6d86-b150-4576-872d-004862dcbd22-kube-api-access-4x6wj\") on node \"crc\" DevicePath \"\"" Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.110366 4919 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/691c6d86-b150-4576-872d-004862dcbd22-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 13:36:25 crc 
kubenswrapper[4919]: I0109 13:36:25.110540 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73f4afd2-691f-4749-b361-d99c9482a35b-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "73f4afd2-691f-4749-b361-d99c9482a35b" (UID: "73f4afd2-691f-4749-b361-d99c9482a35b"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.116548 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73f4afd2-691f-4749-b361-d99c9482a35b-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "73f4afd2-691f-4749-b361-d99c9482a35b" (UID: "73f4afd2-691f-4749-b361-d99c9482a35b"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.118764 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73f4afd2-691f-4749-b361-d99c9482a35b-kube-api-access-4h7gm" (OuterVolumeSpecName: "kube-api-access-4h7gm") pod "73f4afd2-691f-4749-b361-d99c9482a35b" (UID: "73f4afd2-691f-4749-b361-d99c9482a35b"). InnerVolumeSpecName "kube-api-access-4h7gm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.211074 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bdb482c-0d44-43b3-b74f-d0ba01a861b0-catalog-content\") pod \"3bdb482c-0d44-43b3-b74f-d0ba01a861b0\" (UID: \"3bdb482c-0d44-43b3-b74f-d0ba01a861b0\") " Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.211149 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bdb482c-0d44-43b3-b74f-d0ba01a861b0-utilities\") pod \"3bdb482c-0d44-43b3-b74f-d0ba01a861b0\" (UID: \"3bdb482c-0d44-43b3-b74f-d0ba01a861b0\") " Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.211237 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8c827\" (UniqueName: \"kubernetes.io/projected/3bdb482c-0d44-43b3-b74f-d0ba01a861b0-kube-api-access-8c827\") pod \"3bdb482c-0d44-43b3-b74f-d0ba01a861b0\" (UID: \"3bdb482c-0d44-43b3-b74f-d0ba01a861b0\") " Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.212084 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bdb482c-0d44-43b3-b74f-d0ba01a861b0-utilities" (OuterVolumeSpecName: "utilities") pod "3bdb482c-0d44-43b3-b74f-d0ba01a861b0" (UID: "3bdb482c-0d44-43b3-b74f-d0ba01a861b0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.212163 4919 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73f4afd2-691f-4749-b361-d99c9482a35b-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.212189 4919 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/73f4afd2-691f-4749-b361-d99c9482a35b-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.212201 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4h7gm\" (UniqueName: \"kubernetes.io/projected/73f4afd2-691f-4749-b361-d99c9482a35b-kube-api-access-4h7gm\") on node \"crc\" DevicePath \"\"" Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.215408 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bdb482c-0d44-43b3-b74f-d0ba01a861b0-kube-api-access-8c827" (OuterVolumeSpecName: "kube-api-access-8c827") pod "3bdb482c-0d44-43b3-b74f-d0ba01a861b0" (UID: "3bdb482c-0d44-43b3-b74f-d0ba01a861b0"). InnerVolumeSpecName "kube-api-access-8c827". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.236625 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bdb482c-0d44-43b3-b74f-d0ba01a861b0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3bdb482c-0d44-43b3-b74f-d0ba01a861b0" (UID: "3bdb482c-0d44-43b3-b74f-d0ba01a861b0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.243073 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-46q7s"] Jan 09 13:36:25 crc kubenswrapper[4919]: W0109 13:36:25.258982 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1290e54_d4c8_4911_a121_762fffa39a66.slice/crio-99e2e7f60c351f8c1ea8f782b3fc9c30fb7fe6018b33ab05a6d96983daf9e853 WatchSource:0}: Error finding container 99e2e7f60c351f8c1ea8f782b3fc9c30fb7fe6018b33ab05a6d96983daf9e853: Status 404 returned error can't find the container with id 99e2e7f60c351f8c1ea8f782b3fc9c30fb7fe6018b33ab05a6d96983daf9e853 Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.316078 4919 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bdb482c-0d44-43b3-b74f-d0ba01a861b0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.316123 4919 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bdb482c-0d44-43b3-b74f-d0ba01a861b0-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.316137 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8c827\" (UniqueName: \"kubernetes.io/projected/3bdb482c-0d44-43b3-b74f-d0ba01a861b0-kube-api-access-8c827\") on node \"crc\" DevicePath \"\"" Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.732844 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bj7bg" 
event={"ID":"3bdb482c-0d44-43b3-b74f-d0ba01a861b0","Type":"ContainerDied","Data":"8b5e9b16a497b0be5aabb5f4fb0285fa1d3db691dde692d4733152f7292fe2c9"} Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.732869 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bj7bg" Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.733360 4919 scope.go:117] "RemoveContainer" containerID="ce894f8334796fdbd85d158a44a057da3822ed76d9d4f803b57cd61d80aa3072" Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.735290 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-66425" event={"ID":"73f4afd2-691f-4749-b361-d99c9482a35b","Type":"ContainerDied","Data":"7df54f6227f15b52f9e4267ec772b2578bd1504091bc88e0429ee94bd0f69e66"} Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.735322 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-66425" Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.738356 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-46q7s" event={"ID":"c1290e54-d4c8-4911-a121-762fffa39a66","Type":"ContainerStarted","Data":"e9da0ccc79fbad61f261ed9640d43c548bcd9fede9c7a1305ddaa13e0b629935"} Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.738398 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-46q7s" event={"ID":"c1290e54-d4c8-4911-a121-762fffa39a66","Type":"ContainerStarted","Data":"99e2e7f60c351f8c1ea8f782b3fc9c30fb7fe6018b33ab05a6d96983daf9e853"} Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.738586 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-46q7s" Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.743090 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-46q7s" Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.747996 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xvr9v" event={"ID":"691c6d86-b150-4576-872d-004862dcbd22","Type":"ContainerDied","Data":"68df81d59326066e2f6879ebe673d832e7c6eb2834f4160867b87ecdc5973c27"} Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.748195 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xvr9v" Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.753329 4919 generic.go:334] "Generic (PLEG): container finished" podID="1ce56338-b322-46a4-b02c-2ae2b1bb5149" containerID="20569914b88b8dffb208f8d743645f26ec49cb7f5ad5daf956087ce43e69dc76" exitCode=0 Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.753376 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qx45q" event={"ID":"1ce56338-b322-46a4-b02c-2ae2b1bb5149","Type":"ContainerDied","Data":"20569914b88b8dffb208f8d743645f26ec49cb7f5ad5daf956087ce43e69dc76"} Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.754285 4919 scope.go:117] "RemoveContainer" containerID="adaea68e367263aa61cc347ceaede1e277478c4cba4fb116fe255889cbd9dd49" Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.769870 4919 scope.go:117] "RemoveContainer" containerID="2af8b4fc83afa54c3df14d7350fb5fc00803269ea6194b0b1d3e889612603c63" Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.790449 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-46q7s" podStartSLOduration=1.7904276000000001 podStartE2EDuration="1.7904276s" podCreationTimestamp="2026-01-09 13:36:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:36:25.76974086 +0000 UTC m=+365.317580310" watchObservedRunningTime="2026-01-09 13:36:25.7904276 +0000 UTC m=+365.338267050" Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.794552 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bj7bg"] Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.796904 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bj7bg"] Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.800325 4919 scope.go:117] "RemoveContainer" containerID="8e4a7f4c5b308d4576d04e5760e2b30965b715f55a48bcc01dcef6902f526f93" Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.832562 4919 scope.go:117] "RemoveContainer" containerID="5aa0e405a0e9a962dc34bb62238a982e277577db22fe26419780420e7db19630" Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.838254 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-66425"] Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.841935 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-66425"] Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.845932 4919 scope.go:117] "RemoveContainer" containerID="8126e5f1479e8f4d893c01ae9917d30a35e74befe2a19c7444efa00f29783554" Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.855781 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xvr9v"] Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.860762 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xvr9v"] Jan 09 13:36:25 crc kubenswrapper[4919]: I0109 13:36:25.866926 4919 scope.go:117] "RemoveContainer" containerID="44becb4d954ccf4f665c325cc948283db62c12647e6e12814d994579541fe866" Jan 09 13:36:26 crc kubenswrapper[4919]: I0109 13:36:26.277183 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qx45q" Jan 09 13:36:26 crc kubenswrapper[4919]: I0109 13:36:26.437283 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ce56338-b322-46a4-b02c-2ae2b1bb5149-catalog-content\") pod \"1ce56338-b322-46a4-b02c-2ae2b1bb5149\" (UID: \"1ce56338-b322-46a4-b02c-2ae2b1bb5149\") " Jan 09 13:36:26 crc kubenswrapper[4919]: I0109 13:36:26.437405 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ce56338-b322-46a4-b02c-2ae2b1bb5149-utilities\") pod \"1ce56338-b322-46a4-b02c-2ae2b1bb5149\" (UID: \"1ce56338-b322-46a4-b02c-2ae2b1bb5149\") " Jan 09 13:36:26 crc kubenswrapper[4919]: I0109 13:36:26.437500 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfzzn\" (UniqueName: \"kubernetes.io/projected/1ce56338-b322-46a4-b02c-2ae2b1bb5149-kube-api-access-lfzzn\") pod \"1ce56338-b322-46a4-b02c-2ae2b1bb5149\" (UID: \"1ce56338-b322-46a4-b02c-2ae2b1bb5149\") " Jan 09 13:36:26 crc kubenswrapper[4919]: I0109 13:36:26.439951 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ce56338-b322-46a4-b02c-2ae2b1bb5149-utilities" (OuterVolumeSpecName: "utilities") pod "1ce56338-b322-46a4-b02c-2ae2b1bb5149" (UID: "1ce56338-b322-46a4-b02c-2ae2b1bb5149"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:36:26 crc kubenswrapper[4919]: I0109 13:36:26.445943 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ce56338-b322-46a4-b02c-2ae2b1bb5149-kube-api-access-lfzzn" (OuterVolumeSpecName: "kube-api-access-lfzzn") pod "1ce56338-b322-46a4-b02c-2ae2b1bb5149" (UID: "1ce56338-b322-46a4-b02c-2ae2b1bb5149"). InnerVolumeSpecName "kube-api-access-lfzzn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:36:26 crc kubenswrapper[4919]: I0109 13:36:26.539248 4919 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ce56338-b322-46a4-b02c-2ae2b1bb5149-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 13:36:26 crc kubenswrapper[4919]: I0109 13:36:26.539343 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfzzn\" (UniqueName: \"kubernetes.io/projected/1ce56338-b322-46a4-b02c-2ae2b1bb5149-kube-api-access-lfzzn\") on node \"crc\" DevicePath \"\"" Jan 09 13:36:26 crc kubenswrapper[4919]: I0109 13:36:26.591286 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ce56338-b322-46a4-b02c-2ae2b1bb5149-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1ce56338-b322-46a4-b02c-2ae2b1bb5149" (UID: "1ce56338-b322-46a4-b02c-2ae2b1bb5149"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:36:26 crc kubenswrapper[4919]: I0109 13:36:26.640746 4919 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ce56338-b322-46a4-b02c-2ae2b1bb5149-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 13:36:26 crc kubenswrapper[4919]: I0109 13:36:26.758414 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18b90207-0827-4db3-b0ca-e622b58ed504" path="/var/lib/kubelet/pods/18b90207-0827-4db3-b0ca-e622b58ed504/volumes" Jan 09 13:36:26 crc kubenswrapper[4919]: I0109 13:36:26.759006 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bdb482c-0d44-43b3-b74f-d0ba01a861b0" path="/var/lib/kubelet/pods/3bdb482c-0d44-43b3-b74f-d0ba01a861b0/volumes" Jan 09 13:36:26 crc kubenswrapper[4919]: I0109 13:36:26.759632 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="691c6d86-b150-4576-872d-004862dcbd22" path="/var/lib/kubelet/pods/691c6d86-b150-4576-872d-004862dcbd22/volumes" Jan 09 13:36:26 crc kubenswrapper[4919]: I0109 13:36:26.760677 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73f4afd2-691f-4749-b361-d99c9482a35b" path="/var/lib/kubelet/pods/73f4afd2-691f-4749-b361-d99c9482a35b/volumes" Jan 09 13:36:26 crc kubenswrapper[4919]: I0109 13:36:26.764688 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qx45q" event={"ID":"1ce56338-b322-46a4-b02c-2ae2b1bb5149","Type":"ContainerDied","Data":"1eb668111ef4606bc172dff2cc3a4ce919fa19b634efbbb11ded33e9036ab463"} Jan 09 13:36:26 crc kubenswrapper[4919]: I0109 13:36:26.764720 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qx45q" Jan 09 13:36:26 crc kubenswrapper[4919]: I0109 13:36:26.764736 4919 scope.go:117] "RemoveContainer" containerID="20569914b88b8dffb208f8d743645f26ec49cb7f5ad5daf956087ce43e69dc76" Jan 09 13:36:26 crc kubenswrapper[4919]: I0109 13:36:26.798890 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qx45q"] Jan 09 13:36:26 crc kubenswrapper[4919]: I0109 13:36:26.801821 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qx45q"] Jan 09 13:36:26 crc kubenswrapper[4919]: I0109 13:36:26.802090 4919 scope.go:117] "RemoveContainer" containerID="70c4800f54b9cea09c32e486f293d325207195eaee1db1117bfac6ad89c5a551" Jan 09 13:36:26 crc kubenswrapper[4919]: I0109 13:36:26.819557 4919 scope.go:117] "RemoveContainer" containerID="0a5dd65feb99b9dee93abebdf65fafaa2eac727ee8729c94986e69272873098f" Jan 09 13:36:27 crc kubenswrapper[4919]: I0109 13:36:27.154159 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-78lhl" Jan 09 13:36:27 crc kubenswrapper[4919]: I0109 13:36:27.197566 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ttgps"] Jan 09 13:36:28 crc kubenswrapper[4919]: I0109 13:36:28.759013 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ce56338-b322-46a4-b02c-2ae2b1bb5149" path="/var/lib/kubelet/pods/1ce56338-b322-46a4-b02c-2ae2b1bb5149/volumes" Jan 09 13:36:32 crc kubenswrapper[4919]: I0109 13:36:32.581313 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9lhnw"] Jan 09 13:36:32 crc 
kubenswrapper[4919]: E0109 13:36:32.582732 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ce56338-b322-46a4-b02c-2ae2b1bb5149" containerName="extract-utilities" Jan 09 13:36:32 crc kubenswrapper[4919]: I0109 13:36:32.582748 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ce56338-b322-46a4-b02c-2ae2b1bb5149" containerName="extract-utilities" Jan 09 13:36:32 crc kubenswrapper[4919]: E0109 13:36:32.582757 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="691c6d86-b150-4576-872d-004862dcbd22" containerName="registry-server" Jan 09 13:36:32 crc kubenswrapper[4919]: I0109 13:36:32.582764 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="691c6d86-b150-4576-872d-004862dcbd22" containerName="registry-server" Jan 09 13:36:32 crc kubenswrapper[4919]: E0109 13:36:32.582775 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bdb482c-0d44-43b3-b74f-d0ba01a861b0" containerName="extract-content" Jan 09 13:36:32 crc kubenswrapper[4919]: I0109 13:36:32.582782 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bdb482c-0d44-43b3-b74f-d0ba01a861b0" containerName="extract-content" Jan 09 13:36:32 crc kubenswrapper[4919]: E0109 13:36:32.582793 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ce56338-b322-46a4-b02c-2ae2b1bb5149" containerName="extract-content" Jan 09 13:36:32 crc kubenswrapper[4919]: I0109 13:36:32.582799 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ce56338-b322-46a4-b02c-2ae2b1bb5149" containerName="extract-content" Jan 09 13:36:32 crc kubenswrapper[4919]: E0109 13:36:32.582808 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="691c6d86-b150-4576-872d-004862dcbd22" containerName="extract-content" Jan 09 13:36:32 crc kubenswrapper[4919]: I0109 13:36:32.582815 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="691c6d86-b150-4576-872d-004862dcbd22" containerName="extract-content" Jan 09 13:36:32 crc kubenswrapper[4919]: E0109 13:36:32.582824 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bdb482c-0d44-43b3-b74f-d0ba01a861b0" containerName="registry-server" Jan 09 13:36:32 crc kubenswrapper[4919]: I0109 13:36:32.582831 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bdb482c-0d44-43b3-b74f-d0ba01a861b0" containerName="registry-server" Jan 09 13:36:32 crc kubenswrapper[4919]: E0109 13:36:32.582840 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18b90207-0827-4db3-b0ca-e622b58ed504" containerName="registry-server" Jan 09 13:36:32 crc kubenswrapper[4919]: I0109 13:36:32.582846 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="18b90207-0827-4db3-b0ca-e622b58ed504" containerName="registry-server" Jan 09 13:36:32 crc kubenswrapper[4919]: E0109 13:36:32.582859 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bdb482c-0d44-43b3-b74f-d0ba01a861b0" containerName="extract-utilities" Jan 09 13:36:32 crc kubenswrapper[4919]: I0109 13:36:32.582867 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bdb482c-0d44-43b3-b74f-d0ba01a861b0" containerName="extract-utilities" Jan 09 13:36:32 crc kubenswrapper[4919]: E0109 13:36:32.582875 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73f4afd2-691f-4749-b361-d99c9482a35b" containerName="marketplace-operator" Jan 09 13:36:32 crc kubenswrapper[4919]: I0109 13:36:32.582884 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="73f4afd2-691f-4749-b361-d99c9482a35b" 
containerName="marketplace-operator" Jan 09 13:36:32 crc kubenswrapper[4919]: E0109 13:36:32.582895 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="691c6d86-b150-4576-872d-004862dcbd22" containerName="extract-utilities" Jan 09 13:36:32 crc kubenswrapper[4919]: I0109 13:36:32.582902 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="691c6d86-b150-4576-872d-004862dcbd22" containerName="extract-utilities" Jan 09 13:36:32 crc kubenswrapper[4919]: E0109 13:36:32.582908 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ce56338-b322-46a4-b02c-2ae2b1bb5149" containerName="registry-server" Jan 09 13:36:32 crc kubenswrapper[4919]: I0109 13:36:32.582914 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ce56338-b322-46a4-b02c-2ae2b1bb5149" containerName="registry-server" Jan 09 13:36:32 crc kubenswrapper[4919]: E0109 13:36:32.582921 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18b90207-0827-4db3-b0ca-e622b58ed504" containerName="extract-utilities" Jan 09 13:36:32 crc kubenswrapper[4919]: I0109 13:36:32.582926 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="18b90207-0827-4db3-b0ca-e622b58ed504" containerName="extract-utilities" Jan 09 13:36:32 crc kubenswrapper[4919]: E0109 13:36:32.582935 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18b90207-0827-4db3-b0ca-e622b58ed504" containerName="extract-content" Jan 09 13:36:32 crc kubenswrapper[4919]: I0109 13:36:32.582942 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="18b90207-0827-4db3-b0ca-e622b58ed504" containerName="extract-content" Jan 09 13:36:32 crc kubenswrapper[4919]: E0109 13:36:32.582952 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73f4afd2-691f-4749-b361-d99c9482a35b" containerName="marketplace-operator" Jan 09 13:36:32 crc kubenswrapper[4919]: I0109 13:36:32.582959 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="73f4afd2-691f-4749-b361-d99c9482a35b" containerName="marketplace-operator" Jan 09 13:36:32 crc kubenswrapper[4919]: I0109 13:36:32.583068 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="691c6d86-b150-4576-872d-004862dcbd22" containerName="registry-server" Jan 09 13:36:32 crc kubenswrapper[4919]: I0109 13:36:32.583082 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="73f4afd2-691f-4749-b361-d99c9482a35b" containerName="marketplace-operator" Jan 09 13:36:32 crc kubenswrapper[4919]: I0109 13:36:32.583090 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="73f4afd2-691f-4749-b361-d99c9482a35b" containerName="marketplace-operator" Jan 09 13:36:32 crc kubenswrapper[4919]: I0109 13:36:32.583096 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bdb482c-0d44-43b3-b74f-d0ba01a861b0" containerName="registry-server" Jan 09 13:36:32 crc kubenswrapper[4919]: I0109 13:36:32.583106 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="18b90207-0827-4db3-b0ca-e622b58ed504" containerName="registry-server" Jan 09 13:36:32 crc kubenswrapper[4919]: I0109 13:36:32.583125 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ce56338-b322-46a4-b02c-2ae2b1bb5149" containerName="registry-server" Jan 09 13:36:32 crc kubenswrapper[4919]: I0109 13:36:32.584057 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9lhnw"
Jan 09 13:36:32 crc kubenswrapper[4919]: I0109 13:36:32.586741 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 09 13:36:32 crc kubenswrapper[4919]: I0109 13:36:32.588226 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9lhnw"]
Jan 09 13:36:32 crc kubenswrapper[4919]: I0109 13:36:32.737222 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92de8a52-6be3-4b9d-9f02-337282f2cc79-catalog-content\") pod \"certified-operators-9lhnw\" (UID: \"92de8a52-6be3-4b9d-9f02-337282f2cc79\") " pod="openshift-marketplace/certified-operators-9lhnw"
Jan 09 13:36:32 crc kubenswrapper[4919]: I0109 13:36:32.737592 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92de8a52-6be3-4b9d-9f02-337282f2cc79-utilities\") pod \"certified-operators-9lhnw\" (UID: \"92de8a52-6be3-4b9d-9f02-337282f2cc79\") " pod="openshift-marketplace/certified-operators-9lhnw"
Jan 09 13:36:32 crc kubenswrapper[4919]: I0109 13:36:32.737741 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb4hl\" (UniqueName: \"kubernetes.io/projected/92de8a52-6be3-4b9d-9f02-337282f2cc79-kube-api-access-tb4hl\") pod \"certified-operators-9lhnw\" (UID: \"92de8a52-6be3-4b9d-9f02-337282f2cc79\") " pod="openshift-marketplace/certified-operators-9lhnw"
Jan 09 13:36:32 crc kubenswrapper[4919]: I0109 13:36:32.778736 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-crxrx"]
Jan 09 13:36:32 crc kubenswrapper[4919]: I0109 13:36:32.781324 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-crxrx"
Jan 09 13:36:32 crc kubenswrapper[4919]: I0109 13:36:32.784957 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 09 13:36:32 crc kubenswrapper[4919]: I0109 13:36:32.793737 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-crxrx"]
Jan 09 13:36:32 crc kubenswrapper[4919]: I0109 13:36:32.839270 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92de8a52-6be3-4b9d-9f02-337282f2cc79-catalog-content\") pod \"certified-operators-9lhnw\" (UID: \"92de8a52-6be3-4b9d-9f02-337282f2cc79\") " pod="openshift-marketplace/certified-operators-9lhnw"
Jan 09 13:36:32 crc kubenswrapper[4919]: I0109 13:36:32.839314 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92de8a52-6be3-4b9d-9f02-337282f2cc79-utilities\") pod \"certified-operators-9lhnw\" (UID: \"92de8a52-6be3-4b9d-9f02-337282f2cc79\") " pod="openshift-marketplace/certified-operators-9lhnw"
Jan 09 13:36:32 crc kubenswrapper[4919]: I0109 13:36:32.839350 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb4hl\" (UniqueName: \"kubernetes.io/projected/92de8a52-6be3-4b9d-9f02-337282f2cc79-kube-api-access-tb4hl\") pod \"certified-operators-9lhnw\" (UID: \"92de8a52-6be3-4b9d-9f02-337282f2cc79\") " pod="openshift-marketplace/certified-operators-9lhnw"
Jan 09 13:36:32 crc kubenswrapper[4919]: I0109 13:36:32.839835 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92de8a52-6be3-4b9d-9f02-337282f2cc79-catalog-content\") pod \"certified-operators-9lhnw\" (UID: \"92de8a52-6be3-4b9d-9f02-337282f2cc79\") " pod="openshift-marketplace/certified-operators-9lhnw"
Jan 09 13:36:32 crc kubenswrapper[4919]: I0109 13:36:32.839835 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92de8a52-6be3-4b9d-9f02-337282f2cc79-utilities\") pod \"certified-operators-9lhnw\" (UID: \"92de8a52-6be3-4b9d-9f02-337282f2cc79\") " pod="openshift-marketplace/certified-operators-9lhnw"
Jan 09 13:36:32 crc kubenswrapper[4919]: I0109 13:36:32.859030 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb4hl\" (UniqueName: \"kubernetes.io/projected/92de8a52-6be3-4b9d-9f02-337282f2cc79-kube-api-access-tb4hl\") pod \"certified-operators-9lhnw\" (UID: \"92de8a52-6be3-4b9d-9f02-337282f2cc79\") " pod="openshift-marketplace/certified-operators-9lhnw"
Jan 09 13:36:32 crc kubenswrapper[4919]: I0109 13:36:32.921813 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9lhnw"
Jan 09 13:36:32 crc kubenswrapper[4919]: I0109 13:36:32.940733 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slhdc\" (UniqueName: \"kubernetes.io/projected/1969176c-2e40-4b30-9364-994e7f6d99e2-kube-api-access-slhdc\") pod \"community-operators-crxrx\" (UID: \"1969176c-2e40-4b30-9364-994e7f6d99e2\") " pod="openshift-marketplace/community-operators-crxrx"
Jan 09 13:36:32 crc kubenswrapper[4919]: I0109 13:36:32.941027 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1969176c-2e40-4b30-9364-994e7f6d99e2-catalog-content\") pod \"community-operators-crxrx\" (UID: \"1969176c-2e40-4b30-9364-994e7f6d99e2\") " pod="openshift-marketplace/community-operators-crxrx"
Jan 09 13:36:32 crc kubenswrapper[4919]: I0109 13:36:32.941059 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1969176c-2e40-4b30-9364-994e7f6d99e2-utilities\") pod \"community-operators-crxrx\" (UID: \"1969176c-2e40-4b30-9364-994e7f6d99e2\") " pod="openshift-marketplace/community-operators-crxrx"
Jan 09 13:36:33 crc kubenswrapper[4919]: I0109 13:36:33.042110 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1969176c-2e40-4b30-9364-994e7f6d99e2-utilities\") pod \"community-operators-crxrx\" (UID: \"1969176c-2e40-4b30-9364-994e7f6d99e2\") " pod="openshift-marketplace/community-operators-crxrx"
Jan 09 13:36:33 crc kubenswrapper[4919]: I0109 13:36:33.042204 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slhdc\" (UniqueName: \"kubernetes.io/projected/1969176c-2e40-4b30-9364-994e7f6d99e2-kube-api-access-slhdc\") pod \"community-operators-crxrx\" (UID: \"1969176c-2e40-4b30-9364-994e7f6d99e2\") " pod="openshift-marketplace/community-operators-crxrx"
Jan 09 13:36:33 crc kubenswrapper[4919]: I0109 13:36:33.042278 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1969176c-2e40-4b30-9364-994e7f6d99e2-catalog-content\") pod \"community-operators-crxrx\" (UID: \"1969176c-2e40-4b30-9364-994e7f6d99e2\") " pod="openshift-marketplace/community-operators-crxrx"
Jan 09 13:36:33 crc kubenswrapper[4919]: I0109 13:36:33.044575 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1969176c-2e40-4b30-9364-994e7f6d99e2-catalog-content\") pod \"community-operators-crxrx\" (UID: \"1969176c-2e40-4b30-9364-994e7f6d99e2\") " pod="openshift-marketplace/community-operators-crxrx"
Jan 09 13:36:33 crc kubenswrapper[4919]: I0109 13:36:33.044611 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1969176c-2e40-4b30-9364-994e7f6d99e2-utilities\") pod \"community-operators-crxrx\" (UID: \"1969176c-2e40-4b30-9364-994e7f6d99e2\") " pod="openshift-marketplace/community-operators-crxrx"
Jan 09 13:36:33 crc kubenswrapper[4919]: I0109 13:36:33.064534 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slhdc\" (UniqueName: \"kubernetes.io/projected/1969176c-2e40-4b30-9364-994e7f6d99e2-kube-api-access-slhdc\") pod \"community-operators-crxrx\" (UID: \"1969176c-2e40-4b30-9364-994e7f6d99e2\") " pod="openshift-marketplace/community-operators-crxrx"
Jan 09 13:36:33 crc kubenswrapper[4919]: I0109 13:36:33.096536 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-crxrx"
Jan 09 13:36:33 crc kubenswrapper[4919]: I0109 13:36:33.308141 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9lhnw"]
Jan 09 13:36:33 crc kubenswrapper[4919]: I0109 13:36:33.490774 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-crxrx"]
Jan 09 13:36:33 crc kubenswrapper[4919]: W0109 13:36:33.515791 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1969176c_2e40_4b30_9364_994e7f6d99e2.slice/crio-642ab87587b342f2f9e963ccc600685e73d2a546ce5cef82fbfbe6a4ebadc3da WatchSource:0}: Error finding container 642ab87587b342f2f9e963ccc600685e73d2a546ce5cef82fbfbe6a4ebadc3da: Status 404 returned error can't find the container with id 642ab87587b342f2f9e963ccc600685e73d2a546ce5cef82fbfbe6a4ebadc3da
Jan 09 13:36:33 crc kubenswrapper[4919]: I0109 13:36:33.820018 4919 generic.go:334] "Generic (PLEG): container finished" podID="92de8a52-6be3-4b9d-9f02-337282f2cc79" containerID="00482b13a57a3312f5b45648c23cd9b0a7adce6ca16db87e8ab03c57200fa0c4" exitCode=0
Jan 09 13:36:33 crc kubenswrapper[4919]: I0109 13:36:33.820075 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9lhnw" event={"ID":"92de8a52-6be3-4b9d-9f02-337282f2cc79","Type":"ContainerDied","Data":"00482b13a57a3312f5b45648c23cd9b0a7adce6ca16db87e8ab03c57200fa0c4"}
Jan 09 13:36:33 crc kubenswrapper[4919]: I0109 13:36:33.820115 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9lhnw" event={"ID":"92de8a52-6be3-4b9d-9f02-337282f2cc79","Type":"ContainerStarted","Data":"a34f1af65f1e8dee37f2f91a7987063b7349b3b82f1293382694917f9d01bd2a"}
Jan 09 13:36:33 crc kubenswrapper[4919]: I0109 13:36:33.822496 4919 generic.go:334] "Generic (PLEG): container finished" podID="1969176c-2e40-4b30-9364-994e7f6d99e2" containerID="1bc90cb7f258bca0e94560b385fd292dd4ee92b24a3ddf4d03e9eed58e62c7a2" exitCode=0
Jan 09 13:36:33 crc kubenswrapper[4919]: I0109 13:36:33.822622 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-crxrx" event={"ID":"1969176c-2e40-4b30-9364-994e7f6d99e2","Type":"ContainerDied","Data":"1bc90cb7f258bca0e94560b385fd292dd4ee92b24a3ddf4d03e9eed58e62c7a2"}
Jan 09 13:36:33 crc kubenswrapper[4919]: I0109 13:36:33.822660 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-crxrx" event={"ID":"1969176c-2e40-4b30-9364-994e7f6d99e2","Type":"ContainerStarted","Data":"642ab87587b342f2f9e963ccc600685e73d2a546ce5cef82fbfbe6a4ebadc3da"}
Jan 09 13:36:34 crc kubenswrapper[4919]: I0109 13:36:34.830057 4919 generic.go:334] "Generic (PLEG): container finished" podID="92de8a52-6be3-4b9d-9f02-337282f2cc79" containerID="a013ee6b7fba99f8a37cb1da46e50592022e6608936be996c75ea175d6474305" exitCode=0
Jan 09 13:36:34 crc kubenswrapper[4919]: I0109 13:36:34.830515 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9lhnw" event={"ID":"92de8a52-6be3-4b9d-9f02-337282f2cc79","Type":"ContainerDied","Data":"a013ee6b7fba99f8a37cb1da46e50592022e6608936be996c75ea175d6474305"}
Jan 09 13:36:34 crc kubenswrapper[4919]: I0109 13:36:34.832566 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-crxrx" event={"ID":"1969176c-2e40-4b30-9364-994e7f6d99e2","Type":"ContainerStarted","Data":"e6116e44a7eb8b6e2814f3b7ef6e29ffc3d24213552c3e5eb3666df0ccaea9ec"}
Jan 09 13:36:34 crc kubenswrapper[4919]: E0109 13:36:34.946507 4919 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1969176c_2e40_4b30_9364_994e7f6d99e2.slice/crio-conmon-e6116e44a7eb8b6e2814f3b7ef6e29ffc3d24213552c3e5eb3666df0ccaea9ec.scope\": RecentStats: unable to find data in memory cache]"
Jan 09 13:36:34 crc kubenswrapper[4919]: I0109 13:36:34.977930 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ktjzh"]
Jan 09 13:36:34 crc kubenswrapper[4919]: I0109 13:36:34.979202 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ktjzh"
Jan 09 13:36:34 crc kubenswrapper[4919]: I0109 13:36:34.981576 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 09 13:36:34 crc kubenswrapper[4919]: I0109 13:36:34.988570 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ktjzh"]
Jan 09 13:36:35 crc kubenswrapper[4919]: I0109 13:36:35.071859 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgvfj\" (UniqueName: \"kubernetes.io/projected/d97889ab-f1bb-4d3c-bf02-c037c00ae3e6-kube-api-access-tgvfj\") pod \"redhat-marketplace-ktjzh\" (UID: \"d97889ab-f1bb-4d3c-bf02-c037c00ae3e6\") " pod="openshift-marketplace/redhat-marketplace-ktjzh"
Jan 09 13:36:35 crc kubenswrapper[4919]: I0109 13:36:35.072017 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d97889ab-f1bb-4d3c-bf02-c037c00ae3e6-catalog-content\") pod \"redhat-marketplace-ktjzh\" (UID: \"d97889ab-f1bb-4d3c-bf02-c037c00ae3e6\") " pod="openshift-marketplace/redhat-marketplace-ktjzh"
Jan 09 13:36:35 crc kubenswrapper[4919]: I0109 13:36:35.072043 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d97889ab-f1bb-4d3c-bf02-c037c00ae3e6-utilities\") pod \"redhat-marketplace-ktjzh\" (UID: \"d97889ab-f1bb-4d3c-bf02-c037c00ae3e6\") " pod="openshift-marketplace/redhat-marketplace-ktjzh"
Jan 09 13:36:35 crc kubenswrapper[4919]: I0109 13:36:35.179857 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d97889ab-f1bb-4d3c-bf02-c037c00ae3e6-catalog-content\") pod \"redhat-marketplace-ktjzh\" (UID: \"d97889ab-f1bb-4d3c-bf02-c037c00ae3e6\") " pod="openshift-marketplace/redhat-marketplace-ktjzh"
Jan 09 13:36:35 crc kubenswrapper[4919]: I0109 13:36:35.179998 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d97889ab-f1bb-4d3c-bf02-c037c00ae3e6-utilities\") pod \"redhat-marketplace-ktjzh\" (UID: \"d97889ab-f1bb-4d3c-bf02-c037c00ae3e6\") " pod="openshift-marketplace/redhat-marketplace-ktjzh"
Jan 09 13:36:35 crc kubenswrapper[4919]: I0109 13:36:35.180309 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgvfj\" (UniqueName: \"kubernetes.io/projected/d97889ab-f1bb-4d3c-bf02-c037c00ae3e6-kube-api-access-tgvfj\") pod \"redhat-marketplace-ktjzh\" (UID: \"d97889ab-f1bb-4d3c-bf02-c037c00ae3e6\") " pod="openshift-marketplace/redhat-marketplace-ktjzh"
Jan 09 13:36:35 crc kubenswrapper[4919]: I0109 13:36:35.180559 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d97889ab-f1bb-4d3c-bf02-c037c00ae3e6-catalog-content\") pod \"redhat-marketplace-ktjzh\" (UID: \"d97889ab-f1bb-4d3c-bf02-c037c00ae3e6\") " pod="openshift-marketplace/redhat-marketplace-ktjzh"
Jan 09 13:36:35 crc kubenswrapper[4919]: I0109 13:36:35.180659 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d97889ab-f1bb-4d3c-bf02-c037c00ae3e6-utilities\") pod \"redhat-marketplace-ktjzh\" (UID: \"d97889ab-f1bb-4d3c-bf02-c037c00ae3e6\") " pod="openshift-marketplace/redhat-marketplace-ktjzh"
Jan 09 13:36:35 crc kubenswrapper[4919]: I0109 13:36:35.200263 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zp794"]
Jan 09 13:36:35 crc kubenswrapper[4919]: I0109 13:36:35.201947 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zp794"
Jan 09 13:36:35 crc kubenswrapper[4919]: I0109 13:36:35.204796 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 09 13:36:35 crc kubenswrapper[4919]: I0109 13:36:35.208469 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgvfj\" (UniqueName: \"kubernetes.io/projected/d97889ab-f1bb-4d3c-bf02-c037c00ae3e6-kube-api-access-tgvfj\") pod \"redhat-marketplace-ktjzh\" (UID: \"d97889ab-f1bb-4d3c-bf02-c037c00ae3e6\") " pod="openshift-marketplace/redhat-marketplace-ktjzh"
Jan 09 13:36:35 crc kubenswrapper[4919]: I0109 13:36:35.213166 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zp794"]
Jan 09 13:36:35 crc kubenswrapper[4919]: I0109 13:36:35.281451 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfk9z\" (UniqueName: \"kubernetes.io/projected/03961396-0471-4105-a027-ac6ae244d150-kube-api-access-sfk9z\") pod \"redhat-operators-zp794\" (UID: \"03961396-0471-4105-a027-ac6ae244d150\") " pod="openshift-marketplace/redhat-operators-zp794"
Jan 09 13:36:35 crc kubenswrapper[4919]: I0109 13:36:35.281529 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03961396-0471-4105-a027-ac6ae244d150-catalog-content\") pod \"redhat-operators-zp794\" (UID: \"03961396-0471-4105-a027-ac6ae244d150\") " pod="openshift-marketplace/redhat-operators-zp794"
Jan 09 13:36:35 crc kubenswrapper[4919]: I0109 13:36:35.281655 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03961396-0471-4105-a027-ac6ae244d150-utilities\") pod \"redhat-operators-zp794\" (UID: \"03961396-0471-4105-a027-ac6ae244d150\") " pod="openshift-marketplace/redhat-operators-zp794"
Jan 09 13:36:35 crc kubenswrapper[4919]: I0109 13:36:35.383417 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfk9z\" (UniqueName: \"kubernetes.io/projected/03961396-0471-4105-a027-ac6ae244d150-kube-api-access-sfk9z\") pod \"redhat-operators-zp794\" (UID: \"03961396-0471-4105-a027-ac6ae244d150\") " pod="openshift-marketplace/redhat-operators-zp794"
Jan 09 13:36:35 crc kubenswrapper[4919]: I0109 13:36:35.383512 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03961396-0471-4105-a027-ac6ae244d150-catalog-content\") pod \"redhat-operators-zp794\" (UID: \"03961396-0471-4105-a027-ac6ae244d150\") " pod="openshift-marketplace/redhat-operators-zp794"
Jan 09 13:36:35 crc kubenswrapper[4919]: I0109 13:36:35.383602 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03961396-0471-4105-a027-ac6ae244d150-utilities\") pod \"redhat-operators-zp794\" (UID: \"03961396-0471-4105-a027-ac6ae244d150\") " pod="openshift-marketplace/redhat-operators-zp794"
Jan 09 13:36:35 crc kubenswrapper[4919]: I0109 13:36:35.384406 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03961396-0471-4105-a027-ac6ae244d150-utilities\") pod \"redhat-operators-zp794\" (UID: \"03961396-0471-4105-a027-ac6ae244d150\") " pod="openshift-marketplace/redhat-operators-zp794"
Jan 09 13:36:35 crc kubenswrapper[4919]: I0109 13:36:35.384451 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03961396-0471-4105-a027-ac6ae244d150-catalog-content\") pod \"redhat-operators-zp794\" (UID: \"03961396-0471-4105-a027-ac6ae244d150\") " pod="openshift-marketplace/redhat-operators-zp794"
Jan 09 13:36:35 crc kubenswrapper[4919]: I0109 13:36:35.402915 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfk9z\" (UniqueName: \"kubernetes.io/projected/03961396-0471-4105-a027-ac6ae244d150-kube-api-access-sfk9z\") pod \"redhat-operators-zp794\" (UID: \"03961396-0471-4105-a027-ac6ae244d150\") " pod="openshift-marketplace/redhat-operators-zp794"
Jan 09 13:36:35 crc kubenswrapper[4919]: I0109 13:36:35.850364 4919 generic.go:334] "Generic (PLEG): container finished" podID="1969176c-2e40-4b30-9364-994e7f6d99e2" containerID="e6116e44a7eb8b6e2814f3b7ef6e29ffc3d24213552c3e5eb3666df0ccaea9ec" exitCode=0
Jan 09 13:36:35 crc kubenswrapper[4919]: I0109 13:36:35.850402 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-crxrx" event={"ID":"1969176c-2e40-4b30-9364-994e7f6d99e2","Type":"ContainerDied","Data":"e6116e44a7eb8b6e2814f3b7ef6e29ffc3d24213552c3e5eb3666df0ccaea9ec"}
Jan 09 13:36:36 crc kubenswrapper[4919]: I0109 13:36:36.329318 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ktjzh"
Jan 09 13:36:36 crc kubenswrapper[4919]: I0109 13:36:36.374562 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zp794"
Jan 09 13:36:36 crc kubenswrapper[4919]: I0109 13:36:36.728221 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ktjzh"]
Jan 09 13:36:36 crc kubenswrapper[4919]: W0109 13:36:36.737542 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd97889ab_f1bb_4d3c_bf02_c037c00ae3e6.slice/crio-0c8bc5d2cd0753fc9b45f4fc754f72b729dc25a54116051944f3560d7503ba1a WatchSource:0}: Error finding container 0c8bc5d2cd0753fc9b45f4fc754f72b729dc25a54116051944f3560d7503ba1a: Status 404 returned error can't find the container with id 0c8bc5d2cd0753fc9b45f4fc754f72b729dc25a54116051944f3560d7503ba1a
Jan 09 13:36:36 crc kubenswrapper[4919]: I0109 13:36:36.859511 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-crxrx" event={"ID":"1969176c-2e40-4b30-9364-994e7f6d99e2","Type":"ContainerStarted","Data":"931ac0034b628e0797fa1e0345aafea2cb48f06495f87c6475c0bf36f242ad42"}
Jan 09 13:36:36 crc kubenswrapper[4919]: I0109 13:36:36.860451 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ktjzh" event={"ID":"d97889ab-f1bb-4d3c-bf02-c037c00ae3e6","Type":"ContainerStarted","Data":"0c8bc5d2cd0753fc9b45f4fc754f72b729dc25a54116051944f3560d7503ba1a"}
Jan 09 13:36:36 crc kubenswrapper[4919]: I0109 13:36:36.863024 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9lhnw" event={"ID":"92de8a52-6be3-4b9d-9f02-337282f2cc79","Type":"ContainerStarted","Data":"19a21d8aa5df56b0a3b4cfe2fea2f42183fdd500f8321177d33f49594c9530ae"}
Jan 09 13:36:36 crc kubenswrapper[4919]: I0109 13:36:36.878935 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-crxrx" podStartSLOduration=2.093839137 podStartE2EDuration="4.878918032s" podCreationTimestamp="2026-01-09 13:36:32 +0000 UTC" firstStartedPulling="2026-01-09 13:36:33.823880607 +0000 UTC m=+373.371720057" lastFinishedPulling="2026-01-09 13:36:36.608959502 +0000 UTC m=+376.156798952" observedRunningTime="2026-01-09 13:36:36.875939188 +0000 UTC m=+376.423778638" watchObservedRunningTime="2026-01-09 13:36:36.878918032 +0000 UTC m=+376.426757482"
Jan 09 13:36:36 crc kubenswrapper[4919]: I0109 13:36:36.886397 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zp794"]
Jan 09 13:36:36 crc kubenswrapper[4919]: W0109 13:36:36.890402 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03961396_0471_4105_a027_ac6ae244d150.slice/crio-cf845c1cfc971713244cec547bb71c62ab4a03547358b9090c6b3dd2df926af9 WatchSource:0}: Error finding container cf845c1cfc971713244cec547bb71c62ab4a03547358b9090c6b3dd2df926af9: Status 404 returned error can't find the container with id cf845c1cfc971713244cec547bb71c62ab4a03547358b9090c6b3dd2df926af9
Jan 09 13:36:36 crc kubenswrapper[4919]: I0109 13:36:36.896364 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9lhnw" podStartSLOduration=3.3853783330000002 podStartE2EDuration="4.89634928s" podCreationTimestamp="2026-01-09 13:36:32 +0000 UTC" firstStartedPulling="2026-01-09 13:36:33.821465636 +0000 UTC m=+373.369305086" lastFinishedPulling="2026-01-09 13:36:35.332436583 +0000 UTC m=+374.880276033" observedRunningTime="2026-01-09 13:36:36.89632643 +0000 UTC m=+376.444165880" watchObservedRunningTime="2026-01-09 13:36:36.89634928 +0000 UTC m=+376.444188730"
Jan 09 13:36:37 crc kubenswrapper[4919]: I0109 13:36:37.876051 4919 generic.go:334] "Generic (PLEG): container finished" podID="d97889ab-f1bb-4d3c-bf02-c037c00ae3e6" containerID="27d77c626cd7c4027132f26dff0bd5cc5e2726a1746080b5dfb95b4a3e6a056f" exitCode=0
Jan 09 13:36:37 crc kubenswrapper[4919]: I0109 13:36:37.876153 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ktjzh" event={"ID":"d97889ab-f1bb-4d3c-bf02-c037c00ae3e6","Type":"ContainerDied","Data":"27d77c626cd7c4027132f26dff0bd5cc5e2726a1746080b5dfb95b4a3e6a056f"}
Jan 09 13:36:37 crc kubenswrapper[4919]: I0109 13:36:37.879199 4919 generic.go:334] "Generic (PLEG): container finished" podID="03961396-0471-4105-a027-ac6ae244d150" containerID="fdd888e5c3e2bfabe8cc0bc9c676a67f7ee2847b9207256daa968550b3f3e60c" exitCode=0
Jan 09 13:36:37 crc kubenswrapper[4919]: I0109 13:36:37.879233 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zp794" event={"ID":"03961396-0471-4105-a027-ac6ae244d150","Type":"ContainerDied","Data":"fdd888e5c3e2bfabe8cc0bc9c676a67f7ee2847b9207256daa968550b3f3e60c"}
Jan 09 13:36:37 crc kubenswrapper[4919]: I0109 13:36:37.879293 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zp794" event={"ID":"03961396-0471-4105-a027-ac6ae244d150","Type":"ContainerStarted","Data":"cf845c1cfc971713244cec547bb71c62ab4a03547358b9090c6b3dd2df926af9"}
Jan 09 13:36:39 crc kubenswrapper[4919]: I0109 13:36:39.899157 4919 generic.go:334] "Generic (PLEG): container finished" podID="03961396-0471-4105-a027-ac6ae244d150" containerID="d10516ebdc5cc0a295e0330e54078609d8152a91e5a5e51411cd584bd9255a9c" exitCode=0
Jan 09 13:36:39 crc kubenswrapper[4919]: I0109 13:36:39.899251 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zp794" event={"ID":"03961396-0471-4105-a027-ac6ae244d150","Type":"ContainerDied","Data":"d10516ebdc5cc0a295e0330e54078609d8152a91e5a5e51411cd584bd9255a9c"}
Jan 09 13:36:39 crc kubenswrapper[4919]: I0109 13:36:39.905285 4919 generic.go:334] "Generic (PLEG): container finished" podID="d97889ab-f1bb-4d3c-bf02-c037c00ae3e6" containerID="551ce45e791e527ac609a1fb706ec9de9d3e40e0fd9776331cfe56b403c1724b" exitCode=0
Jan 09 13:36:39 crc kubenswrapper[4919]: I0109 13:36:39.905314 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ktjzh" event={"ID":"d97889ab-f1bb-4d3c-bf02-c037c00ae3e6","Type":"ContainerDied","Data":"551ce45e791e527ac609a1fb706ec9de9d3e40e0fd9776331cfe56b403c1724b"}
Jan 09 13:36:41 crc kubenswrapper[4919]: I0109 13:36:41.922402 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zp794" event={"ID":"03961396-0471-4105-a027-ac6ae244d150","Type":"ContainerStarted","Data":"3b99b781f4e39a8fcdd9f33a1a3cbc91192af99711d03256da60594dd9498e5e"}
Jan 09 13:36:41 crc kubenswrapper[4919]: I0109 13:36:41.926393 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ktjzh" event={"ID":"d97889ab-f1bb-4d3c-bf02-c037c00ae3e6","Type":"ContainerStarted","Data":"dd32daeb84de1ed7c5317e610784b4a61296a3df218965ebcf6314aab6a6a7b9"}
Jan 09 13:36:41 crc kubenswrapper[4919]: I0109 13:36:41.945166 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zp794" podStartSLOduration=3.888991523 podStartE2EDuration="6.945149009s" podCreationTimestamp="2026-01-09 13:36:35 +0000 UTC" firstStartedPulling="2026-01-09 13:36:37.880135017 +0000 UTC m=+377.427974467" lastFinishedPulling="2026-01-09 13:36:40.936292503 +0000 UTC m=+380.484131953" observedRunningTime="2026-01-09 13:36:41.943302682 +0000 UTC m=+381.491142132" watchObservedRunningTime="2026-01-09 13:36:41.945149009 +0000 UTC m=+381.492988449"
Jan 09 13:36:41 crc kubenswrapper[4919]: I0109 13:36:41.967183 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ktjzh" podStartSLOduration=4.872351666 podStartE2EDuration="7.967164512s" podCreationTimestamp="2026-01-09 13:36:34 +0000 UTC" firstStartedPulling="2026-01-09 13:36:37.877511461 +0000 UTC m=+377.425350911" lastFinishedPulling="2026-01-09 13:36:40.972324307 +0000 UTC m=+380.520163757" observedRunningTime="2026-01-09 13:36:41.963776837 +0000 UTC m=+381.511616317" watchObservedRunningTime="2026-01-09 13:36:41.967164512 +0000 UTC m=+381.515003962"
Jan 09 13:36:42 crc kubenswrapper[4919]: I0109 13:36:42.922228 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9lhnw"
Jan 09 13:36:42 crc kubenswrapper[4919]: I0109 13:36:42.922546 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9lhnw"
Jan 09 13:36:42 crc kubenswrapper[4919]: I0109 13:36:42.992104 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9lhnw"
Jan 09 13:36:43 crc kubenswrapper[4919]: I0109 13:36:43.028566 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9lhnw"
Jan 09 13:36:43 crc kubenswrapper[4919]: I0109 13:36:43.096817 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-crxrx"
Jan 09 13:36:43 crc kubenswrapper[4919]: I0109 13:36:43.096858 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-crxrx"
Jan 09 13:36:43 crc kubenswrapper[4919]: I0109 13:36:43.130233 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-crxrx"
Jan 09 13:36:43 crc kubenswrapper[4919]: I0109 13:36:43.981663 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-crxrx"
Jan 09 13:36:46 crc kubenswrapper[4919]: I0109 13:36:46.330283 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ktjzh"
Jan 09 13:36:46 crc kubenswrapper[4919]: I0109 13:36:46.330661 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ktjzh"
Jan 09 13:36:46 crc kubenswrapper[4919]: I0109 13:36:46.368691 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ktjzh"
Jan 09 13:36:46 crc kubenswrapper[4919]: I0109 13:36:46.375649 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zp794"
Jan 09 13:36:46 crc kubenswrapper[4919]: I0109 13:36:46.375705 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zp794"
Jan 09 13:36:47 crc kubenswrapper[4919]: I0109 13:36:47.001306 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ktjzh"
Jan 09 13:36:47 crc kubenswrapper[4919]: I0109 13:36:47.417045 4919 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zp794" podUID="03961396-0471-4105-a027-ac6ae244d150" containerName="registry-server" probeResult="failure" output=<
Jan 09 13:36:47 crc kubenswrapper[4919]: timeout: failed to connect service ":50051" within 1s
Jan 09 13:36:47 crc kubenswrapper[4919]: >
Jan 09 13:36:51 crc kubenswrapper[4919]: I0109 13:36:51.246993 4919 patch_prober.go:28] interesting pod/machine-config-daemon-9m5lv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 09 13:36:51 crc kubenswrapper[4919]: I0109 13:36:51.247320 4919 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 09 13:36:52 crc kubenswrapper[4919]: I0109 13:36:52.274425 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" podUID="d283d70b-0dbe-4059-aa3a-f05d029cb3ab" containerName="registry" containerID="cri-o://f2c73087f3dc12c4832c1da12fd7fe5274b5680013f5bebc30dd06b8c762cc9d" gracePeriod=30
Jan 09 13:36:52 crc kubenswrapper[4919]: I0109 13:36:52.649136 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ttgps"
Jan 09 13:36:52 crc kubenswrapper[4919]: I0109 13:36:52.681212 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") "
Jan 09 13:36:52 crc kubenswrapper[4919]: I0109 13:36:52.681267 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d283d70b-0dbe-4059-aa3a-f05d029cb3ab-bound-sa-token\") pod \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") "
Jan 09 13:36:52 crc kubenswrapper[4919]: I0109 13:36:52.681345 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d283d70b-0dbe-4059-aa3a-f05d029cb3ab-ca-trust-extracted\") pod \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") "
Jan 09 13:36:52 crc kubenswrapper[4919]: I0109 13:36:52.681386 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d283d70b-0dbe-4059-aa3a-f05d029cb3ab-registry-certificates\") pod \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") "
Jan 09 13:36:52 crc kubenswrapper[4919]: I0109 13:36:52.681428 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d283d70b-0dbe-4059-aa3a-f05d029cb3ab-trusted-ca\") pod \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") "
Jan 09 13:36:52 crc kubenswrapper[4919]: I0109 13:36:52.681447 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d283d70b-0dbe-4059-aa3a-f05d029cb3ab-registry-tls\") pod \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") "
Jan 09 13:36:52 crc kubenswrapper[4919]: I0109 13:36:52.681466 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4rj4\" (UniqueName: \"kubernetes.io/projected/d283d70b-0dbe-4059-aa3a-f05d029cb3ab-kube-api-access-h4rj4\") pod \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") "
Jan 09 13:36:52 crc kubenswrapper[4919]: I0109 13:36:52.681494 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d283d70b-0dbe-4059-aa3a-f05d029cb3ab-installation-pull-secrets\") pod \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\" (UID: \"d283d70b-0dbe-4059-aa3a-f05d029cb3ab\") "
Jan 09 13:36:52 crc kubenswrapper[4919]: I0109 13:36:52.682451 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d283d70b-0dbe-4059-aa3a-f05d029cb3ab-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "d283d70b-0dbe-4059-aa3a-f05d029cb3ab" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 13:36:52 crc kubenswrapper[4919]: I0109 13:36:52.682877 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d283d70b-0dbe-4059-aa3a-f05d029cb3ab-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "d283d70b-0dbe-4059-aa3a-f05d029cb3ab" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 13:36:52 crc kubenswrapper[4919]: I0109 13:36:52.691840 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d283d70b-0dbe-4059-aa3a-f05d029cb3ab-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "d283d70b-0dbe-4059-aa3a-f05d029cb3ab" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 13:36:52 crc kubenswrapper[4919]: I0109 13:36:52.692131 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d283d70b-0dbe-4059-aa3a-f05d029cb3ab-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "d283d70b-0dbe-4059-aa3a-f05d029cb3ab" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 13:36:52 crc kubenswrapper[4919]: I0109 13:36:52.693471 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d283d70b-0dbe-4059-aa3a-f05d029cb3ab-kube-api-access-h4rj4" (OuterVolumeSpecName: "kube-api-access-h4rj4") pod "d283d70b-0dbe-4059-aa3a-f05d029cb3ab" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab"). InnerVolumeSpecName "kube-api-access-h4rj4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 13:36:52 crc kubenswrapper[4919]: I0109 13:36:52.693738 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d283d70b-0dbe-4059-aa3a-f05d029cb3ab-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "d283d70b-0dbe-4059-aa3a-f05d029cb3ab" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 13:36:52 crc kubenswrapper[4919]: I0109 13:36:52.702505 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "d283d70b-0dbe-4059-aa3a-f05d029cb3ab" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 09 13:36:52 crc kubenswrapper[4919]: I0109 13:36:52.704543 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d283d70b-0dbe-4059-aa3a-f05d029cb3ab-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "d283d70b-0dbe-4059-aa3a-f05d029cb3ab" (UID: "d283d70b-0dbe-4059-aa3a-f05d029cb3ab"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 13:36:52 crc kubenswrapper[4919]: I0109 13:36:52.782992 4919 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d283d70b-0dbe-4059-aa3a-f05d029cb3ab-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Jan 09 13:36:52 crc kubenswrapper[4919]: I0109 13:36:52.783020 4919 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d283d70b-0dbe-4059-aa3a-f05d029cb3ab-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 09 13:36:52 crc kubenswrapper[4919]: I0109 13:36:52.783030 4919 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d283d70b-0dbe-4059-aa3a-f05d029cb3ab-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Jan 09 13:36:52 crc kubenswrapper[4919]: I0109 13:36:52.783038 4919 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d283d70b-0dbe-4059-aa3a-f05d029cb3ab-registry-certificates\") on node \"crc\" DevicePath \"\""
Jan 09 13:36:52 crc kubenswrapper[4919]: I0109 13:36:52.783047 4919 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d283d70b-0dbe-4059-aa3a-f05d029cb3ab-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 09 13:36:52 crc kubenswrapper[4919]: I0109 13:36:52.783056 4919 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d283d70b-0dbe-4059-aa3a-f05d029cb3ab-registry-tls\") on node \"crc\" DevicePath \"\""
Jan 09 13:36:52 crc kubenswrapper[4919]: I0109 13:36:52.783064 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4rj4\" (UniqueName: \"kubernetes.io/projected/d283d70b-0dbe-4059-aa3a-f05d029cb3ab-kube-api-access-h4rj4\") on node \"crc\" DevicePath \"\""
Jan 09 13:36:53 crc kubenswrapper[4919]: I0109 13:36:53.000884 4919 generic.go:334] "Generic (PLEG): container finished" podID="d283d70b-0dbe-4059-aa3a-f05d029cb3ab" containerID="f2c73087f3dc12c4832c1da12fd7fe5274b5680013f5bebc30dd06b8c762cc9d" exitCode=0
Jan 09 13:36:53 crc kubenswrapper[4919]: I0109 13:36:53.000930 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" event={"ID":"d283d70b-0dbe-4059-aa3a-f05d029cb3ab","Type":"ContainerDied","Data":"f2c73087f3dc12c4832c1da12fd7fe5274b5680013f5bebc30dd06b8c762cc9d"}
Jan 09 13:36:53 crc kubenswrapper[4919]: I0109 13:36:53.000966 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ttgps" event={"ID":"d283d70b-0dbe-4059-aa3a-f05d029cb3ab","Type":"ContainerDied","Data":"f42c2d8f41ebbc1931edb59d152eea7bc01950371a0654dc7544e54c403c3463"}
Jan 09 13:36:53 crc kubenswrapper[4919]: I0109 13:36:53.000999 4919 scope.go:117] "RemoveContainer" containerID="f2c73087f3dc12c4832c1da12fd7fe5274b5680013f5bebc30dd06b8c762cc9d"
Jan 09 13:36:53 crc kubenswrapper[4919]: I0109 13:36:53.001034 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ttgps"
Jan 09 13:36:53 crc kubenswrapper[4919]: I0109 13:36:53.024736 4919 scope.go:117] "RemoveContainer" containerID="f2c73087f3dc12c4832c1da12fd7fe5274b5680013f5bebc30dd06b8c762cc9d"
Jan 09 13:36:53 crc kubenswrapper[4919]: E0109 13:36:53.026866 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2c73087f3dc12c4832c1da12fd7fe5274b5680013f5bebc30dd06b8c762cc9d\": container with ID starting with f2c73087f3dc12c4832c1da12fd7fe5274b5680013f5bebc30dd06b8c762cc9d not found: ID does not exist" containerID="f2c73087f3dc12c4832c1da12fd7fe5274b5680013f5bebc30dd06b8c762cc9d"
Jan 09 13:36:53 crc kubenswrapper[4919]: I0109 13:36:53.026923 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2c73087f3dc12c4832c1da12fd7fe5274b5680013f5bebc30dd06b8c762cc9d"} err="failed to get container status \"f2c73087f3dc12c4832c1da12fd7fe5274b5680013f5bebc30dd06b8c762cc9d\": rpc error: code = NotFound desc = could not find container \"f2c73087f3dc12c4832c1da12fd7fe5274b5680013f5bebc30dd06b8c762cc9d\": container with ID starting with f2c73087f3dc12c4832c1da12fd7fe5274b5680013f5bebc30dd06b8c762cc9d not found: ID does not exist"
Jan 09 13:36:53 crc kubenswrapper[4919]: I0109 13:36:53.029637 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ttgps"]
Jan 09 13:36:53 crc kubenswrapper[4919]: I0109 13:36:53.034404 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ttgps"]
Jan 09 13:36:54 crc kubenswrapper[4919]: I0109 13:36:54.762478 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d283d70b-0dbe-4059-aa3a-f05d029cb3ab" path="/var/lib/kubelet/pods/d283d70b-0dbe-4059-aa3a-f05d029cb3ab/volumes"
Jan 09 13:36:56 crc kubenswrapper[4919]: I0109 13:36:56.412157 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zp794"
Jan 09 13:36:56 crc kubenswrapper[4919]: I0109 13:36:56.466158 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zp794"
Jan 09 13:37:21 crc kubenswrapper[4919]: I0109 13:37:21.246583 4919 patch_prober.go:28] interesting pod/machine-config-daemon-9m5lv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 09 13:37:21 crc kubenswrapper[4919]: I0109 13:37:21.247163 4919 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 09 13:37:21 crc kubenswrapper[4919]: I0109 13:37:21.247246 4919 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv"
Jan 09 13:37:21 crc kubenswrapper[4919]: I0109 13:37:21.247930 4919 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"37d9c7803cd79faa7ac0a37f20abf614a5efbd31913cca12e52b150e758b14ec"} pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 09 13:37:21 crc kubenswrapper[4919]: I0109 13:37:21.248003 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" containerID="cri-o://37d9c7803cd79faa7ac0a37f20abf614a5efbd31913cca12e52b150e758b14ec" gracePeriod=600
Jan 09 13:37:22 crc kubenswrapper[4919]: I0109 13:37:22.182924 4919 generic.go:334] "Generic (PLEG): container finished" podID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerID="37d9c7803cd79faa7ac0a37f20abf614a5efbd31913cca12e52b150e758b14ec" exitCode=0
Jan 09 13:37:22 crc kubenswrapper[4919]: I0109 13:37:22.183000 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" event={"ID":"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c","Type":"ContainerDied","Data":"37d9c7803cd79faa7ac0a37f20abf614a5efbd31913cca12e52b150e758b14ec"}
Jan 09 13:37:22 crc kubenswrapper[4919]: I0109 13:37:22.183280 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" event={"ID":"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c","Type":"ContainerStarted","Data":"51f2b467bad1d9860ef540627b99d2e5678ea709090f17043cdb577fdb4e1708"}
Jan 09 13:37:22 crc kubenswrapper[4919]: I0109 13:37:22.183321 4919 scope.go:117] "RemoveContainer" containerID="5b529d70339efbd46e30e992ceb5afe2db97bdde02f50798c9eb61dfa23d7f7e"
Jan 09 13:39:21 crc kubenswrapper[4919]: I0109 13:39:21.044345 4919 scope.go:117] "RemoveContainer" containerID="9726e9eee7703ac50b2c6cc82874afa5de3794a3663471f10d996033d6231e2f"
Jan 09 13:39:21 crc kubenswrapper[4919]: I0109 13:39:21.061109 4919 scope.go:117] "RemoveContainer" containerID="2b5f9a0384810e48712eb27a6d7178a64c8a39901cb8674a7ea90dc51729cea8"
Jan 09 13:39:21 crc kubenswrapper[4919]: I0109 13:39:21.247105 4919 patch_prober.go:28] interesting pod/machine-config-daemon-9m5lv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 09 13:39:21 crc kubenswrapper[4919]: I0109 13:39:21.247160 4919 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 09 13:39:51 crc kubenswrapper[4919]: I0109 13:39:51.246603 4919 patch_prober.go:28] interesting pod/machine-config-daemon-9m5lv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 09 13:39:51 crc kubenswrapper[4919]: I0109 13:39:51.247195 4919 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 09 13:40:21 crc kubenswrapper[4919]: I0109 13:40:21.095974 4919 scope.go:117] "RemoveContainer" containerID="d1d64207abd9195331feab345729908ba8fd3a4370f7ea74b73f339c6b065729"
Jan 09 13:40:21 crc kubenswrapper[4919]: I0109 13:40:21.247002 4919 patch_prober.go:28] interesting pod/machine-config-daemon-9m5lv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 09 13:40:21 crc kubenswrapper[4919]: I0109 13:40:21.247057 4919 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 09 13:40:21 crc kubenswrapper[4919]: I0109 13:40:21.247104 4919 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv"
Jan 09 13:40:21 crc kubenswrapper[4919]: I0109 13:40:21.247827 4919 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"51f2b467bad1d9860ef540627b99d2e5678ea709090f17043cdb577fdb4e1708"} pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 09 13:40:21 crc kubenswrapper[4919]: I0109 13:40:21.247884 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" containerID="cri-o://51f2b467bad1d9860ef540627b99d2e5678ea709090f17043cdb577fdb4e1708" gracePeriod=600
Jan 09 13:40:22 crc kubenswrapper[4919]: I0109 13:40:22.245070 4919 generic.go:334] "Generic (PLEG): container finished" podID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerID="51f2b467bad1d9860ef540627b99d2e5678ea709090f17043cdb577fdb4e1708" exitCode=0
Jan 09 13:40:22 crc kubenswrapper[4919]: I0109 13:40:22.245155 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" event={"ID":"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c","Type":"ContainerDied","Data":"51f2b467bad1d9860ef540627b99d2e5678ea709090f17043cdb577fdb4e1708"}
Jan 09 13:40:22 crc kubenswrapper[4919]: I0109 13:40:22.245711 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" event={"ID":"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c","Type":"ContainerStarted","Data":"e3fae3f1f51df5d9026154c14d04831020e0e9d6f7bf4af54d35cedb600d3044"}
Jan 09 13:40:22 crc kubenswrapper[4919]: I0109 13:40:22.245739 4919 scope.go:117] "RemoveContainer" containerID="37d9c7803cd79faa7ac0a37f20abf614a5efbd31913cca12e52b150e758b14ec"
Jan 09 13:41:38 crc kubenswrapper[4919]: I0109 13:41:38.057899 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-29hn2"]
Jan 09 13:41:38 crc kubenswrapper[4919]: E0109 13:41:38.058668 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d283d70b-0dbe-4059-aa3a-f05d029cb3ab" containerName="registry"
Jan 09 13:41:38 crc kubenswrapper[4919]: I0109 13:41:38.058682 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="d283d70b-0dbe-4059-aa3a-f05d029cb3ab" containerName="registry"
Jan 09 13:41:38 crc kubenswrapper[4919]: I0109 13:41:38.058812 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="d283d70b-0dbe-4059-aa3a-f05d029cb3ab" containerName="registry"
Jan 09 13:41:38 crc kubenswrapper[4919]: I0109 13:41:38.060445 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-29hn2"
Jan 09 13:41:38 crc kubenswrapper[4919]: I0109 13:41:38.065670 4919 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-2l9qb"
Jan 09 13:41:38 crc kubenswrapper[4919]: I0109 13:41:38.065867 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Jan 09 13:41:38 crc kubenswrapper[4919]: I0109 13:41:38.066040 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Jan 09 13:41:38 crc kubenswrapper[4919]: I0109 13:41:38.072573 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-29hn2"]
Jan 09 13:41:38 crc kubenswrapper[4919]: I0109 13:41:38.076988 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-ptg84"]
Jan 09 13:41:38 crc kubenswrapper[4919]: I0109 13:41:38.077863 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-ptg84"
Jan 09 13:41:38 crc kubenswrapper[4919]: I0109 13:41:38.081010 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-pnfgs"]
Jan 09 13:41:38 crc kubenswrapper[4919]: I0109 13:41:38.082073 4919 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-cjbgp"
Jan 09 13:41:38 crc kubenswrapper[4919]: I0109 13:41:38.082518 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-pnfgs"
Jan 09 13:41:38 crc kubenswrapper[4919]: I0109 13:41:38.085528 4919 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-4vj2d"
Jan 09 13:41:38 crc kubenswrapper[4919]: I0109 13:41:38.097327 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-ptg84"]
Jan 09 13:41:38 crc kubenswrapper[4919]: I0109 13:41:38.102033 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-pnfgs"]
Jan 09 13:41:38 crc kubenswrapper[4919]: I0109 13:41:38.158656 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bmmv\" (UniqueName: \"kubernetes.io/projected/64fd850e-9282-4070-8467-aa5b8c498787-kube-api-access-8bmmv\") pod \"cert-manager-858654f9db-ptg84\" (UID: \"64fd850e-9282-4070-8467-aa5b8c498787\") " pod="cert-manager/cert-manager-858654f9db-ptg84"
Jan 09 13:41:38 crc kubenswrapper[4919]: I0109 13:41:38.159014 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dcds\" (UniqueName: \"kubernetes.io/projected/51952aec-f115-4d09-a7f4-56dcc9f6222c-kube-api-access-8dcds\") pod \"cert-manager-cainjector-cf98fcc89-29hn2\" (UID: \"51952aec-f115-4d09-a7f4-56dcc9f6222c\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-29hn2"
Jan 09 13:41:38 crc kubenswrapper[4919]: I0109 13:41:38.159081 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxt5g\" (UniqueName: \"kubernetes.io/projected/6afdfa72-d547-4051-9c95-fd83fd88ff93-kube-api-access-zxt5g\") pod \"cert-manager-webhook-687f57d79b-pnfgs\" (UID: \"6afdfa72-d547-4051-9c95-fd83fd88ff93\") " pod="cert-manager/cert-manager-webhook-687f57d79b-pnfgs"
Jan 09 13:41:38 crc kubenswrapper[4919]: I0109 13:41:38.260143 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bmmv\" (UniqueName: \"kubernetes.io/projected/64fd850e-9282-4070-8467-aa5b8c498787-kube-api-access-8bmmv\") pod \"cert-manager-858654f9db-ptg84\" (UID: \"64fd850e-9282-4070-8467-aa5b8c498787\") " pod="cert-manager/cert-manager-858654f9db-ptg84"
Jan 09 13:41:38 crc kubenswrapper[4919]: I0109 13:41:38.260190 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcds\" (UniqueName: \"kubernetes.io/projected/51952aec-f115-4d09-a7f4-56dcc9f6222c-kube-api-access-8dcds\") pod \"cert-manager-cainjector-cf98fcc89-29hn2\" (UID: \"51952aec-f115-4d09-a7f4-56dcc9f6222c\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-29hn2"
Jan 09 13:41:38 crc kubenswrapper[4919]: I0109 13:41:38.260287 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxt5g\" (UniqueName: \"kubernetes.io/projected/6afdfa72-d547-4051-9c95-fd83fd88ff93-kube-api-access-zxt5g\") pod \"cert-manager-webhook-687f57d79b-pnfgs\" (UID: \"6afdfa72-d547-4051-9c95-fd83fd88ff93\") " pod="cert-manager/cert-manager-webhook-687f57d79b-pnfgs"
Jan 09 13:41:38 crc kubenswrapper[4919]: I0109 13:41:38.279965 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bmmv\" (UniqueName: \"kubernetes.io/projected/64fd850e-9282-4070-8467-aa5b8c498787-kube-api-access-8bmmv\") pod \"cert-manager-858654f9db-ptg84\" (UID: \"64fd850e-9282-4070-8467-aa5b8c498787\") " pod="cert-manager/cert-manager-858654f9db-ptg84"
Jan 09 13:41:38 crc kubenswrapper[4919]: I0109 13:41:38.279973 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dcds\" (UniqueName: \"kubernetes.io/projected/51952aec-f115-4d09-a7f4-56dcc9f6222c-kube-api-access-8dcds\") pod \"cert-manager-cainjector-cf98fcc89-29hn2\" (UID: \"51952aec-f115-4d09-a7f4-56dcc9f6222c\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-29hn2"
Jan 09 13:41:38 crc kubenswrapper[4919]: I0109 13:41:38.280006 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxt5g\" (UniqueName: \"kubernetes.io/projected/6afdfa72-d547-4051-9c95-fd83fd88ff93-kube-api-access-zxt5g\") pod \"cert-manager-webhook-687f57d79b-pnfgs\" (UID: \"6afdfa72-d547-4051-9c95-fd83fd88ff93\") " pod="cert-manager/cert-manager-webhook-687f57d79b-pnfgs"
Jan 09 13:41:38 crc kubenswrapper[4919]: I0109 13:41:38.383626 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-29hn2"
Jan 09 13:41:38 crc kubenswrapper[4919]: I0109 13:41:38.405574 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-ptg84"
Jan 09 13:41:38 crc kubenswrapper[4919]: I0109 13:41:38.414743 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-pnfgs"
Jan 09 13:41:38 crc kubenswrapper[4919]: I0109 13:41:38.606571 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-29hn2"]
Jan 09 13:41:38 crc kubenswrapper[4919]: I0109 13:41:38.616473 4919 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 09 13:41:38 crc kubenswrapper[4919]: I0109 13:41:38.653033 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-29hn2" event={"ID":"51952aec-f115-4d09-a7f4-56dcc9f6222c","Type":"ContainerStarted","Data":"c22e6174cabf62a74faff8d8f7fd6975fc637972be1e8b3f7cbda1f54995ce38"}
Jan 09 13:41:38 crc kubenswrapper[4919]: I0109 13:41:38.845676 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-pnfgs"]
Jan 09 13:41:38 crc kubenswrapper[4919]: W0109 13:41:38.849434 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6afdfa72_d547_4051_9c95_fd83fd88ff93.slice/crio-c61c98387e87d5d6efbc36888cbaf7f4fd3ea2d1bb368fa37c1f77df7aefe78a WatchSource:0}: Error finding container c61c98387e87d5d6efbc36888cbaf7f4fd3ea2d1bb368fa37c1f77df7aefe78a: Status 404 returned error can't find the container with id c61c98387e87d5d6efbc36888cbaf7f4fd3ea2d1bb368fa37c1f77df7aefe78a
Jan 09 13:41:38 crc kubenswrapper[4919]: I0109 13:41:38.850479 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-ptg84"]
Jan 09 13:41:38 crc kubenswrapper[4919]: W0109 13:41:38.856563 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64fd850e_9282_4070_8467_aa5b8c498787.slice/crio-9834957f23cde0bbee17fa61319417647c03627939b45d43592fa839399ad41b WatchSource:0}: Error finding container 9834957f23cde0bbee17fa61319417647c03627939b45d43592fa839399ad41b: Status 404 returned error can't find the container with id 9834957f23cde0bbee17fa61319417647c03627939b45d43592fa839399ad41b
Jan 09 13:41:39 crc kubenswrapper[4919]: I0109 13:41:39.665274 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-ptg84" event={"ID":"64fd850e-9282-4070-8467-aa5b8c498787","Type":"ContainerStarted","Data":"9834957f23cde0bbee17fa61319417647c03627939b45d43592fa839399ad41b"}
Jan 09 13:41:39 crc kubenswrapper[4919]: I0109 13:41:39.667360 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-pnfgs" event={"ID":"6afdfa72-d547-4051-9c95-fd83fd88ff93","Type":"ContainerStarted","Data":"c61c98387e87d5d6efbc36888cbaf7f4fd3ea2d1bb368fa37c1f77df7aefe78a"}
Jan 09 13:41:43 crc kubenswrapper[4919]: I0109 13:41:43.691515 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-ptg84" event={"ID":"64fd850e-9282-4070-8467-aa5b8c498787","Type":"ContainerStarted","Data":"d45ff755ba6362b84859b40d668cc5c0bb9feabbd65d6e412d225cd38617fd82"}
Jan 09 13:41:43 crc kubenswrapper[4919]: I0109 13:41:43.693119 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-29hn2" event={"ID":"51952aec-f115-4d09-a7f4-56dcc9f6222c","Type":"ContainerStarted","Data":"04a214f3d41a890c549fff3c09360fb81e5080fc96154debef8a8e9ffc4bf587"}
Jan 09 13:41:43 crc kubenswrapper[4919]: I0109 13:41:43.694531 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-pnfgs" event={"ID":"6afdfa72-d547-4051-9c95-fd83fd88ff93","Type":"ContainerStarted","Data":"b33dc23080db79075bbcfdb4d58740366ce34c7dbfa75a3b8d870e17ef9d4ee8"}
Jan 09 13:41:43 crc kubenswrapper[4919]: I0109 13:41:43.694684 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-pnfgs"
Jan 09 13:41:43 crc kubenswrapper[4919]: I0109 13:41:43.708126 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-ptg84" podStartSLOduration=1.422144018 podStartE2EDuration="5.708098639s" podCreationTimestamp="2026-01-09 13:41:38 +0000 UTC" firstStartedPulling="2026-01-09 13:41:38.857934779 +0000 UTC m=+678.405774229" lastFinishedPulling="2026-01-09 13:41:43.1438894 +0000 UTC m=+682.691728850" observedRunningTime="2026-01-09 13:41:43.707028533 +0000 UTC m=+683.254867983" watchObservedRunningTime="2026-01-09 13:41:43.708098639 +0000 UTC m=+683.255938099"
Jan 09 13:41:43 crc kubenswrapper[4919]: I0109 13:41:43.729719 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-pnfgs" podStartSLOduration=1.5180539610000001 podStartE2EDuration="5.729692456s" podCreationTimestamp="2026-01-09 13:41:38 +0000 UTC" firstStartedPulling="2026-01-09 13:41:38.852596019 +0000 UTC m=+678.400435469" lastFinishedPulling="2026-01-09 13:41:43.064234524 +0000 UTC m=+682.612073964" observedRunningTime="2026-01-09 13:41:43.725298329 +0000 UTC m=+683.273137769" watchObservedRunningTime="2026-01-09 13:41:43.729692456 +0000 UTC m=+683.277531906"
Jan 09 13:41:43 crc kubenswrapper[4919]: I0109 13:41:43.749664 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-29hn2" podStartSLOduration=1.300092587 podStartE2EDuration="5.749629993s" podCreationTimestamp="2026-01-09 13:41:38 +0000 UTC" firstStartedPulling="2026-01-09 13:41:38.616116463 +0000 UTC m=+678.163955913" lastFinishedPulling="2026-01-09 13:41:43.065653869 +0000 UTC m=+682.613493319" observedRunningTime="2026-01-09 13:41:43.745353108 +0000 UTC m=+683.293192578" watchObservedRunningTime="2026-01-09 13:41:43.749629993 +0000 UTC m=+683.297469443"
Jan 09 13:41:48 crc kubenswrapper[4919]: I0109 13:41:48.418030 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-pnfgs"
Jan 09 13:42:06 crc kubenswrapper[4919]: I0109 13:42:06.711979 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-w74hl"]
Jan 09 13:42:06 crc kubenswrapper[4919]: I0109 13:42:06.712937 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" podUID="4a11a9b6-2419-4f04-b35e-ba296d70b705" containerName="ovn-controller" containerID="cri-o://15885cccea72f90cf3317dca9a65041611aa4b8de069779e15d43ad39112f1d4" gracePeriod=30
Jan 09 13:42:06 crc kubenswrapper[4919]: I0109 13:42:06.713024 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" podUID="4a11a9b6-2419-4f04-b35e-ba296d70b705" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://eecea89afe25c537de66df075217db267c79f45aecda1f91183fad854aa80c17" gracePeriod=30
Jan 09 13:42:06 crc kubenswrapper[4919]: I0109 13:42:06.713028 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" podUID="4a11a9b6-2419-4f04-b35e-ba296d70b705" containerName="nbdb" containerID="cri-o://ec868806819a987b7e96dbe674677a084fd3c8bbb3bdae6dd584d62df43b78f0" gracePeriod=30
Jan 09 13:42:06 crc kubenswrapper[4919]: I0109 13:42:06.713081 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" podUID="4a11a9b6-2419-4f04-b35e-ba296d70b705" containerName="kube-rbac-proxy-node" containerID="cri-o://4f97e00dbff0bc7856400366616b2dfea1022f2dc151e8f1d7cbc5d377639b05" gracePeriod=30
Jan 09 13:42:06 crc kubenswrapper[4919]: I0109 13:42:06.713100 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" podUID="4a11a9b6-2419-4f04-b35e-ba296d70b705" containerName="northd" containerID="cri-o://ac368c1c0799b3fe1830e01a310247ed45949ab217cf83d98b487a12e482157e" gracePeriod=30
Jan 09 13:42:06 crc kubenswrapper[4919]: I0109 13:42:06.713135 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" podUID="4a11a9b6-2419-4f04-b35e-ba296d70b705" containerName="ovn-acl-logging" containerID="cri-o://95d6b21d85be97a07f83c996deca344aba3e2bd0b249d1ce02f15db105649bcc" gracePeriod=30
Jan 09 13:42:06 crc kubenswrapper[4919]: I0109 13:42:06.713167 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" podUID="4a11a9b6-2419-4f04-b35e-ba296d70b705" containerName="sbdb" containerID="cri-o://1618a6491a924e7cd28340bea333585a7c7a634ca063183f21574ed23df002cc" gracePeriod=30
Jan 09 13:42:06 crc kubenswrapper[4919]: I0109 13:42:06.751294 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" podUID="4a11a9b6-2419-4f04-b35e-ba296d70b705" containerName="ovnkube-controller" containerID="cri-o://112a16191d57c6ee6be6fca0acf118455b5cb7e5e70ae064c891221cc18e537b" gracePeriod=30
Jan 09 13:42:08
crc kubenswrapper[4919]: I0109 13:42:08.810165 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w74hl_4a11a9b6-2419-4f04-b35e-ba296d70b705/ovnkube-controller/3.log" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.814244 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w74hl_4a11a9b6-2419-4f04-b35e-ba296d70b705/ovn-acl-logging/0.log" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.814894 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w74hl_4a11a9b6-2419-4f04-b35e-ba296d70b705/ovn-controller/0.log" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.815418 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.817469 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kgw8v_11e19b4a-0888-460f-bf97-5dd0ddda6e8c/kube-multus/2.log" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.818282 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kgw8v_11e19b4a-0888-460f-bf97-5dd0ddda6e8c/kube-multus/1.log" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.818333 4919 generic.go:334] "Generic (PLEG): container finished" podID="11e19b4a-0888-460f-bf97-5dd0ddda6e8c" containerID="d5dedf26e5ff4665f09eceaa03a030632058e239d6a30d55b68dc35f2529731a" exitCode=2 Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.818416 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kgw8v" event={"ID":"11e19b4a-0888-460f-bf97-5dd0ddda6e8c","Type":"ContainerDied","Data":"d5dedf26e5ff4665f09eceaa03a030632058e239d6a30d55b68dc35f2529731a"} Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.818463 4919 scope.go:117] "RemoveContainer" containerID="6dd4aa1459db1d095dd8a4d538ce3dc77e934eaaa815c7b700de8ee6ae8cc25a" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.821467 4919 scope.go:117] "RemoveContainer" containerID="d5dedf26e5ff4665f09eceaa03a030632058e239d6a30d55b68dc35f2529731a" Jan 09 13:42:08 crc kubenswrapper[4919]: E0109 13:42:08.821853 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-kgw8v_openshift-multus(11e19b4a-0888-460f-bf97-5dd0ddda6e8c)\"" pod="openshift-multus/multus-kgw8v" podUID="11e19b4a-0888-460f-bf97-5dd0ddda6e8c" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.822737 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w74hl_4a11a9b6-2419-4f04-b35e-ba296d70b705/ovnkube-controller/3.log" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.825896 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w74hl_4a11a9b6-2419-4f04-b35e-ba296d70b705/ovn-acl-logging/0.log" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.826523 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w74hl_4a11a9b6-2419-4f04-b35e-ba296d70b705/ovn-controller/0.log" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.827103 4919 generic.go:334] "Generic (PLEG): container finished" podID="4a11a9b6-2419-4f04-b35e-ba296d70b705" 
containerID="112a16191d57c6ee6be6fca0acf118455b5cb7e5e70ae064c891221cc18e537b" exitCode=0 Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.827133 4919 generic.go:334] "Generic (PLEG): container finished" podID="4a11a9b6-2419-4f04-b35e-ba296d70b705" containerID="1618a6491a924e7cd28340bea333585a7c7a634ca063183f21574ed23df002cc" exitCode=0 Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.827142 4919 generic.go:334] "Generic (PLEG): container finished" podID="4a11a9b6-2419-4f04-b35e-ba296d70b705" containerID="ec868806819a987b7e96dbe674677a084fd3c8bbb3bdae6dd584d62df43b78f0" exitCode=0 Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.827151 4919 generic.go:334] "Generic (PLEG): container finished" podID="4a11a9b6-2419-4f04-b35e-ba296d70b705" containerID="ac368c1c0799b3fe1830e01a310247ed45949ab217cf83d98b487a12e482157e" exitCode=0 Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.827161 4919 generic.go:334] "Generic (PLEG): container finished" podID="4a11a9b6-2419-4f04-b35e-ba296d70b705" containerID="eecea89afe25c537de66df075217db267c79f45aecda1f91183fad854aa80c17" exitCode=0 Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.827169 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.827185 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" event={"ID":"4a11a9b6-2419-4f04-b35e-ba296d70b705","Type":"ContainerDied","Data":"112a16191d57c6ee6be6fca0acf118455b5cb7e5e70ae064c891221cc18e537b"} Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.827235 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" event={"ID":"4a11a9b6-2419-4f04-b35e-ba296d70b705","Type":"ContainerDied","Data":"1618a6491a924e7cd28340bea333585a7c7a634ca063183f21574ed23df002cc"} Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.827246 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" event={"ID":"4a11a9b6-2419-4f04-b35e-ba296d70b705","Type":"ContainerDied","Data":"ec868806819a987b7e96dbe674677a084fd3c8bbb3bdae6dd584d62df43b78f0"} Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.827258 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" event={"ID":"4a11a9b6-2419-4f04-b35e-ba296d70b705","Type":"ContainerDied","Data":"ac368c1c0799b3fe1830e01a310247ed45949ab217cf83d98b487a12e482157e"} Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.827269 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" event={"ID":"4a11a9b6-2419-4f04-b35e-ba296d70b705","Type":"ContainerDied","Data":"eecea89afe25c537de66df075217db267c79f45aecda1f91183fad854aa80c17"} Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.827278 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" event={"ID":"4a11a9b6-2419-4f04-b35e-ba296d70b705","Type":"ContainerDied","Data":"4f97e00dbff0bc7856400366616b2dfea1022f2dc151e8f1d7cbc5d377639b05"} Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.827289 4919 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"112a16191d57c6ee6be6fca0acf118455b5cb7e5e70ae064c891221cc18e537b"} Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.827300 4919 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"af9d1f7638ecbd19ef127f9dedcf9c618013f2e6cbd661173a0eead07c7023a9"} Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.827307 4919 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1618a6491a924e7cd28340bea333585a7c7a634ca063183f21574ed23df002cc"} Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.827313 4919 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ec868806819a987b7e96dbe674677a084fd3c8bbb3bdae6dd584d62df43b78f0"} Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.827319 4919 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ac368c1c0799b3fe1830e01a310247ed45949ab217cf83d98b487a12e482157e"} Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.827324 4919 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eecea89afe25c537de66df075217db267c79f45aecda1f91183fad854aa80c17"} Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.827329 4919 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4f97e00dbff0bc7856400366616b2dfea1022f2dc151e8f1d7cbc5d377639b05"} Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.827334 4919 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"95d6b21d85be97a07f83c996deca344aba3e2bd0b249d1ce02f15db105649bcc"} Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.827339 4919 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"15885cccea72f90cf3317dca9a65041611aa4b8de069779e15d43ad39112f1d4"} Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.827344 4919 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f"} Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.827170 4919 generic.go:334] "Generic (PLEG): container finished" podID="4a11a9b6-2419-4f04-b35e-ba296d70b705" containerID="4f97e00dbff0bc7856400366616b2dfea1022f2dc151e8f1d7cbc5d377639b05" exitCode=0 Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.827360 4919 generic.go:334] "Generic (PLEG): container finished" podID="4a11a9b6-2419-4f04-b35e-ba296d70b705" containerID="95d6b21d85be97a07f83c996deca344aba3e2bd0b249d1ce02f15db105649bcc" exitCode=143 Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.827368 4919 generic.go:334] "Generic (PLEG): container finished" podID="4a11a9b6-2419-4f04-b35e-ba296d70b705" containerID="15885cccea72f90cf3317dca9a65041611aa4b8de069779e15d43ad39112f1d4" exitCode=143 Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.827383 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" event={"ID":"4a11a9b6-2419-4f04-b35e-ba296d70b705","Type":"ContainerDied","Data":"95d6b21d85be97a07f83c996deca344aba3e2bd0b249d1ce02f15db105649bcc"} Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.827393 4919 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"112a16191d57c6ee6be6fca0acf118455b5cb7e5e70ae064c891221cc18e537b"} Jan 09 13:42:08 crc kubenswrapper[4919]: 
I0109 13:42:08.827401 4919 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"af9d1f7638ecbd19ef127f9dedcf9c618013f2e6cbd661173a0eead07c7023a9"} Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.827407 4919 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1618a6491a924e7cd28340bea333585a7c7a634ca063183f21574ed23df002cc"} Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.827414 4919 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ec868806819a987b7e96dbe674677a084fd3c8bbb3bdae6dd584d62df43b78f0"} Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.827421 4919 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ac368c1c0799b3fe1830e01a310247ed45949ab217cf83d98b487a12e482157e"} Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.827428 4919 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eecea89afe25c537de66df075217db267c79f45aecda1f91183fad854aa80c17"} Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.827436 4919 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4f97e00dbff0bc7856400366616b2dfea1022f2dc151e8f1d7cbc5d377639b05"} Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.827444 4919 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"95d6b21d85be97a07f83c996deca344aba3e2bd0b249d1ce02f15db105649bcc"} Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.827450 4919 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"15885cccea72f90cf3317dca9a65041611aa4b8de069779e15d43ad39112f1d4"} Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.827457 4919 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f"} Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.827468 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" event={"ID":"4a11a9b6-2419-4f04-b35e-ba296d70b705","Type":"ContainerDied","Data":"15885cccea72f90cf3317dca9a65041611aa4b8de069779e15d43ad39112f1d4"} Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.827479 4919 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"112a16191d57c6ee6be6fca0acf118455b5cb7e5e70ae064c891221cc18e537b"} Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.827487 4919 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"af9d1f7638ecbd19ef127f9dedcf9c618013f2e6cbd661173a0eead07c7023a9"} Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.827495 4919 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1618a6491a924e7cd28340bea333585a7c7a634ca063183f21574ed23df002cc"} Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.827501 4919 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ec868806819a987b7e96dbe674677a084fd3c8bbb3bdae6dd584d62df43b78f0"} Jan 09 13:42:08 crc kubenswrapper[4919]: 
I0109 13:42:08.827509 4919 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ac368c1c0799b3fe1830e01a310247ed45949ab217cf83d98b487a12e482157e"} Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.827516 4919 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eecea89afe25c537de66df075217db267c79f45aecda1f91183fad854aa80c17"} Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.827521 4919 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4f97e00dbff0bc7856400366616b2dfea1022f2dc151e8f1d7cbc5d377639b05"} Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.827526 4919 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"95d6b21d85be97a07f83c996deca344aba3e2bd0b249d1ce02f15db105649bcc"} Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.827531 4919 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"15885cccea72f90cf3317dca9a65041611aa4b8de069779e15d43ad39112f1d4"} Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.827536 4919 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f"} Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.827543 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w74hl" event={"ID":"4a11a9b6-2419-4f04-b35e-ba296d70b705","Type":"ContainerDied","Data":"dc66ccfb1667d3e0c668f7bdf2a6d268828f7f4ca9f23f61f44b5e91066afa4c"} Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.827553 4919 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"112a16191d57c6ee6be6fca0acf118455b5cb7e5e70ae064c891221cc18e537b"} Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.827559 4919 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"af9d1f7638ecbd19ef127f9dedcf9c618013f2e6cbd661173a0eead07c7023a9"} Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.827565 4919 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1618a6491a924e7cd28340bea333585a7c7a634ca063183f21574ed23df002cc"} Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.827570 4919 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ec868806819a987b7e96dbe674677a084fd3c8bbb3bdae6dd584d62df43b78f0"} Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.827577 4919 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ac368c1c0799b3fe1830e01a310247ed45949ab217cf83d98b487a12e482157e"} Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.827582 4919 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eecea89afe25c537de66df075217db267c79f45aecda1f91183fad854aa80c17"} Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.827587 4919 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4f97e00dbff0bc7856400366616b2dfea1022f2dc151e8f1d7cbc5d377639b05"} Jan 09 13:42:08 crc kubenswrapper[4919]: 
I0109 13:42:08.827593 4919 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"95d6b21d85be97a07f83c996deca344aba3e2bd0b249d1ce02f15db105649bcc"} Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.827598 4919 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"15885cccea72f90cf3317dca9a65041611aa4b8de069779e15d43ad39112f1d4"} Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.827603 4919 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f"} Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.867820 4919 scope.go:117] "RemoveContainer" containerID="112a16191d57c6ee6be6fca0acf118455b5cb7e5e70ae064c891221cc18e537b" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.891471 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-4mnrn"] Jan 09 13:42:08 crc kubenswrapper[4919]: E0109 13:42:08.891831 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a11a9b6-2419-4f04-b35e-ba296d70b705" containerName="ovnkube-controller" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.891850 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a11a9b6-2419-4f04-b35e-ba296d70b705" containerName="ovnkube-controller" Jan 09 13:42:08 crc kubenswrapper[4919]: E0109 13:42:08.891863 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a11a9b6-2419-4f04-b35e-ba296d70b705" containerName="ovn-acl-logging" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.891870 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a11a9b6-2419-4f04-b35e-ba296d70b705" containerName="ovn-acl-logging" Jan 09 13:42:08 crc kubenswrapper[4919]: E0109 13:42:08.891922 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a11a9b6-2419-4f04-b35e-ba296d70b705" containerName="ovnkube-controller" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.891933 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a11a9b6-2419-4f04-b35e-ba296d70b705" containerName="ovnkube-controller" Jan 09 13:42:08 crc kubenswrapper[4919]: E0109 13:42:08.891947 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a11a9b6-2419-4f04-b35e-ba296d70b705" containerName="ovn-controller" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.891954 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a11a9b6-2419-4f04-b35e-ba296d70b705" containerName="ovn-controller" Jan 09 13:42:08 crc kubenswrapper[4919]: E0109 13:42:08.892017 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a11a9b6-2419-4f04-b35e-ba296d70b705" containerName="kube-rbac-proxy-node" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.892026 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a11a9b6-2419-4f04-b35e-ba296d70b705" containerName="kube-rbac-proxy-node" Jan 09 13:42:08 crc kubenswrapper[4919]: E0109 13:42:08.892036 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a11a9b6-2419-4f04-b35e-ba296d70b705" containerName="ovnkube-controller" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.892045 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a11a9b6-2419-4f04-b35e-ba296d70b705" containerName="ovnkube-controller" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.892048 4919 scope.go:117] 
"RemoveContainer" containerID="af9d1f7638ecbd19ef127f9dedcf9c618013f2e6cbd661173a0eead07c7023a9" Jan 09 13:42:08 crc kubenswrapper[4919]: E0109 13:42:08.892057 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a11a9b6-2419-4f04-b35e-ba296d70b705" containerName="northd" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.892194 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a11a9b6-2419-4f04-b35e-ba296d70b705" containerName="northd" Jan 09 13:42:08 crc kubenswrapper[4919]: E0109 13:42:08.892233 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a11a9b6-2419-4f04-b35e-ba296d70b705" containerName="ovnkube-controller" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.892244 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a11a9b6-2419-4f04-b35e-ba296d70b705" containerName="ovnkube-controller" Jan 09 13:42:08 crc kubenswrapper[4919]: E0109 13:42:08.892296 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a11a9b6-2419-4f04-b35e-ba296d70b705" containerName="kube-rbac-proxy-ovn-metrics" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.892306 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a11a9b6-2419-4f04-b35e-ba296d70b705" containerName="kube-rbac-proxy-ovn-metrics" Jan 09 13:42:08 crc kubenswrapper[4919]: E0109 13:42:08.892332 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a11a9b6-2419-4f04-b35e-ba296d70b705" containerName="sbdb" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.892341 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a11a9b6-2419-4f04-b35e-ba296d70b705" containerName="sbdb" Jan 09 13:42:08 crc kubenswrapper[4919]: E0109 13:42:08.892370 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a11a9b6-2419-4f04-b35e-ba296d70b705" containerName="nbdb" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.892378 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a11a9b6-2419-4f04-b35e-ba296d70b705" containerName="nbdb" Jan 09 13:42:08 crc kubenswrapper[4919]: E0109 13:42:08.892393 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a11a9b6-2419-4f04-b35e-ba296d70b705" containerName="kubecfg-setup" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.892400 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a11a9b6-2419-4f04-b35e-ba296d70b705" containerName="kubecfg-setup" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.892678 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a11a9b6-2419-4f04-b35e-ba296d70b705" containerName="sbdb" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.892691 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a11a9b6-2419-4f04-b35e-ba296d70b705" containerName="nbdb" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.892702 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a11a9b6-2419-4f04-b35e-ba296d70b705" containerName="ovnkube-controller" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.892709 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a11a9b6-2419-4f04-b35e-ba296d70b705" containerName="ovnkube-controller" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.892715 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a11a9b6-2419-4f04-b35e-ba296d70b705" containerName="ovn-acl-logging" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.892723 4919 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="4a11a9b6-2419-4f04-b35e-ba296d70b705" containerName="ovnkube-controller" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.892730 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a11a9b6-2419-4f04-b35e-ba296d70b705" containerName="kube-rbac-proxy-ovn-metrics" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.892738 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a11a9b6-2419-4f04-b35e-ba296d70b705" containerName="ovn-controller" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.892746 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a11a9b6-2419-4f04-b35e-ba296d70b705" containerName="kube-rbac-proxy-node" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.892754 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a11a9b6-2419-4f04-b35e-ba296d70b705" containerName="northd" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.892765 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a11a9b6-2419-4f04-b35e-ba296d70b705" containerName="ovnkube-controller" Jan 09 13:42:08 crc kubenswrapper[4919]: E0109 13:42:08.892875 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a11a9b6-2419-4f04-b35e-ba296d70b705" containerName="ovnkube-controller" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.892884 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a11a9b6-2419-4f04-b35e-ba296d70b705" containerName="ovnkube-controller" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.893007 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a11a9b6-2419-4f04-b35e-ba296d70b705" containerName="ovnkube-controller" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.894880 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.951095 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6jvn\" (UniqueName: \"kubernetes.io/projected/4a11a9b6-2419-4f04-b35e-ba296d70b705-kube-api-access-h6jvn\") pod \"4a11a9b6-2419-4f04-b35e-ba296d70b705\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.951158 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4a11a9b6-2419-4f04-b35e-ba296d70b705-ovn-node-metrics-cert\") pod \"4a11a9b6-2419-4f04-b35e-ba296d70b705\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.951197 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-node-log\") pod \"4a11a9b6-2419-4f04-b35e-ba296d70b705\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.951252 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-var-lib-openvswitch\") pod \"4a11a9b6-2419-4f04-b35e-ba296d70b705\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.951279 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4a11a9b6-2419-4f04-b35e-ba296d70b705-env-overrides\") pod \"4a11a9b6-2419-4f04-b35e-ba296d70b705\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.951306 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-host-cni-netd\") pod \"4a11a9b6-2419-4f04-b35e-ba296d70b705\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.951326 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-log-socket\") pod \"4a11a9b6-2419-4f04-b35e-ba296d70b705\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.951361 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-host-var-lib-cni-networks-ovn-kubernetes\") pod \"4a11a9b6-2419-4f04-b35e-ba296d70b705\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.951386 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-run-openvswitch\") pod \"4a11a9b6-2419-4f04-b35e-ba296d70b705\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.951415 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-etc-openvswitch\") pod \"4a11a9b6-2419-4f04-b35e-ba296d70b705\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.951427 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "4a11a9b6-2419-4f04-b35e-ba296d70b705" (UID: "4a11a9b6-2419-4f04-b35e-ba296d70b705"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.951458 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4a11a9b6-2419-4f04-b35e-ba296d70b705-ovnkube-config\") pod \"4a11a9b6-2419-4f04-b35e-ba296d70b705\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.951577 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-host-cni-bin\") pod \"4a11a9b6-2419-4f04-b35e-ba296d70b705\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.951617 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-run-systemd\") pod \"4a11a9b6-2419-4f04-b35e-ba296d70b705\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.951644 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-systemd-units\") pod \"4a11a9b6-2419-4f04-b35e-ba296d70b705\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.951724 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-host-run-ovn-kubernetes\") pod \"4a11a9b6-2419-4f04-b35e-ba296d70b705\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.951751 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-host-run-netns\") pod \"4a11a9b6-2419-4f04-b35e-ba296d70b705\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.951800 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4a11a9b6-2419-4f04-b35e-ba296d70b705-ovnkube-script-lib\") pod \"4a11a9b6-2419-4f04-b35e-ba296d70b705\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.951835 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-run-ovn\") pod \"4a11a9b6-2419-4f04-b35e-ba296d70b705\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.951836 4919 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a11a9b6-2419-4f04-b35e-ba296d70b705-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "4a11a9b6-2419-4f04-b35e-ba296d70b705" (UID: "4a11a9b6-2419-4f04-b35e-ba296d70b705"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.951873 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-host-slash\") pod \"4a11a9b6-2419-4f04-b35e-ba296d70b705\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.951892 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "4a11a9b6-2419-4f04-b35e-ba296d70b705" (UID: "4a11a9b6-2419-4f04-b35e-ba296d70b705"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.951900 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-host-kubelet\") pod \"4a11a9b6-2419-4f04-b35e-ba296d70b705\" (UID: \"4a11a9b6-2419-4f04-b35e-ba296d70b705\") " Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.952541 4919 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.952568 4919 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4a11a9b6-2419-4f04-b35e-ba296d70b705-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.952581 4919 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.951923 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a11a9b6-2419-4f04-b35e-ba296d70b705-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "4a11a9b6-2419-4f04-b35e-ba296d70b705" (UID: "4a11a9b6-2419-4f04-b35e-ba296d70b705"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.951949 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-log-socket" (OuterVolumeSpecName: "log-socket") pod "4a11a9b6-2419-4f04-b35e-ba296d70b705" (UID: "4a11a9b6-2419-4f04-b35e-ba296d70b705"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.951968 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "4a11a9b6-2419-4f04-b35e-ba296d70b705" (UID: "4a11a9b6-2419-4f04-b35e-ba296d70b705"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.951985 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "4a11a9b6-2419-4f04-b35e-ba296d70b705" (UID: "4a11a9b6-2419-4f04-b35e-ba296d70b705"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.952003 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "4a11a9b6-2419-4f04-b35e-ba296d70b705" (UID: "4a11a9b6-2419-4f04-b35e-ba296d70b705"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.952015 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "4a11a9b6-2419-4f04-b35e-ba296d70b705" (UID: "4a11a9b6-2419-4f04-b35e-ba296d70b705"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.952032 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "4a11a9b6-2419-4f04-b35e-ba296d70b705" (UID: "4a11a9b6-2419-4f04-b35e-ba296d70b705"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.952033 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "4a11a9b6-2419-4f04-b35e-ba296d70b705" (UID: "4a11a9b6-2419-4f04-b35e-ba296d70b705"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.952632 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a11a9b6-2419-4f04-b35e-ba296d70b705-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "4a11a9b6-2419-4f04-b35e-ba296d70b705" (UID: "4a11a9b6-2419-4f04-b35e-ba296d70b705"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.952781 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-host-slash" (OuterVolumeSpecName: "host-slash") pod "4a11a9b6-2419-4f04-b35e-ba296d70b705" (UID: "4a11a9b6-2419-4f04-b35e-ba296d70b705"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.952805 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "4a11a9b6-2419-4f04-b35e-ba296d70b705" (UID: "4a11a9b6-2419-4f04-b35e-ba296d70b705"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.952835 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "4a11a9b6-2419-4f04-b35e-ba296d70b705" (UID: "4a11a9b6-2419-4f04-b35e-ba296d70b705"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.953165 4919 scope.go:117] "RemoveContainer" containerID="1618a6491a924e7cd28340bea333585a7c7a634ca063183f21574ed23df002cc" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.953306 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "4a11a9b6-2419-4f04-b35e-ba296d70b705" (UID: "4a11a9b6-2419-4f04-b35e-ba296d70b705"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.955486 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-node-log" (OuterVolumeSpecName: "node-log") pod "4a11a9b6-2419-4f04-b35e-ba296d70b705" (UID: "4a11a9b6-2419-4f04-b35e-ba296d70b705"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.961409 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a11a9b6-2419-4f04-b35e-ba296d70b705-kube-api-access-h6jvn" (OuterVolumeSpecName: "kube-api-access-h6jvn") pod "4a11a9b6-2419-4f04-b35e-ba296d70b705" (UID: "4a11a9b6-2419-4f04-b35e-ba296d70b705"). InnerVolumeSpecName "kube-api-access-h6jvn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.961994 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a11a9b6-2419-4f04-b35e-ba296d70b705-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "4a11a9b6-2419-4f04-b35e-ba296d70b705" (UID: "4a11a9b6-2419-4f04-b35e-ba296d70b705"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.968664 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "4a11a9b6-2419-4f04-b35e-ba296d70b705" (UID: "4a11a9b6-2419-4f04-b35e-ba296d70b705"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 13:42:08 crc kubenswrapper[4919]: I0109 13:42:08.975799 4919 scope.go:117] "RemoveContainer" containerID="ec868806819a987b7e96dbe674677a084fd3c8bbb3bdae6dd584d62df43b78f0" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.001675 4919 scope.go:117] "RemoveContainer" containerID="ac368c1c0799b3fe1830e01a310247ed45949ab217cf83d98b487a12e482157e" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.017491 4919 scope.go:117] "RemoveContainer" containerID="eecea89afe25c537de66df075217db267c79f45aecda1f91183fad854aa80c17" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.033944 4919 scope.go:117] "RemoveContainer" containerID="4f97e00dbff0bc7856400366616b2dfea1022f2dc151e8f1d7cbc5d377639b05" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.053472 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c1723b89-94d5-42b4-a122-a4ec41e15ede-host-run-ovn-kubernetes\") pod \"ovnkube-node-4mnrn\" (UID: \"c1723b89-94d5-42b4-a122-a4ec41e15ede\") " pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.053524 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c1723b89-94d5-42b4-a122-a4ec41e15ede-host-cni-netd\") pod \"ovnkube-node-4mnrn\" (UID: \"c1723b89-94d5-42b4-a122-a4ec41e15ede\") " pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.053551 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c1723b89-94d5-42b4-a122-a4ec41e15ede-log-socket\") pod \"ovnkube-node-4mnrn\" (UID: \"c1723b89-94d5-42b4-a122-a4ec41e15ede\") " pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.053579 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c1723b89-94d5-42b4-a122-a4ec41e15ede-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4mnrn\" (UID: \"c1723b89-94d5-42b4-a122-a4ec41e15ede\") " pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.053605 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c1723b89-94d5-42b4-a122-a4ec41e15ede-run-ovn\") pod \"ovnkube-node-4mnrn\" (UID: \"c1723b89-94d5-42b4-a122-a4ec41e15ede\") " pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.053619 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c1723b89-94d5-42b4-a122-a4ec41e15ede-host-run-netns\") pod 
\"ovnkube-node-4mnrn\" (UID: \"c1723b89-94d5-42b4-a122-a4ec41e15ede\") " pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.053636 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c1723b89-94d5-42b4-a122-a4ec41e15ede-host-kubelet\") pod \"ovnkube-node-4mnrn\" (UID: \"c1723b89-94d5-42b4-a122-a4ec41e15ede\") " pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.053654 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c1723b89-94d5-42b4-a122-a4ec41e15ede-ovnkube-script-lib\") pod \"ovnkube-node-4mnrn\" (UID: \"c1723b89-94d5-42b4-a122-a4ec41e15ede\") " pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.053671 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c1723b89-94d5-42b4-a122-a4ec41e15ede-host-slash\") pod \"ovnkube-node-4mnrn\" (UID: \"c1723b89-94d5-42b4-a122-a4ec41e15ede\") " pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.053690 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c1723b89-94d5-42b4-a122-a4ec41e15ede-host-cni-bin\") pod \"ovnkube-node-4mnrn\" (UID: \"c1723b89-94d5-42b4-a122-a4ec41e15ede\") " pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.053706 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c1723b89-94d5-42b4-a122-a4ec41e15ede-run-openvswitch\") pod \"ovnkube-node-4mnrn\" (UID: \"c1723b89-94d5-42b4-a122-a4ec41e15ede\") " pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.053719 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c1723b89-94d5-42b4-a122-a4ec41e15ede-ovnkube-config\") pod \"ovnkube-node-4mnrn\" (UID: \"c1723b89-94d5-42b4-a122-a4ec41e15ede\") " pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.053768 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c1723b89-94d5-42b4-a122-a4ec41e15ede-run-systemd\") pod \"ovnkube-node-4mnrn\" (UID: \"c1723b89-94d5-42b4-a122-a4ec41e15ede\") " pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.053785 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c1723b89-94d5-42b4-a122-a4ec41e15ede-var-lib-openvswitch\") pod \"ovnkube-node-4mnrn\" (UID: \"c1723b89-94d5-42b4-a122-a4ec41e15ede\") " pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.053812 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/c1723b89-94d5-42b4-a122-a4ec41e15ede-ovn-node-metrics-cert\") pod \"ovnkube-node-4mnrn\" (UID: \"c1723b89-94d5-42b4-a122-a4ec41e15ede\") " pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.053829 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c1723b89-94d5-42b4-a122-a4ec41e15ede-env-overrides\") pod \"ovnkube-node-4mnrn\" (UID: \"c1723b89-94d5-42b4-a122-a4ec41e15ede\") " pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.053851 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8jvp\" (UniqueName: \"kubernetes.io/projected/c1723b89-94d5-42b4-a122-a4ec41e15ede-kube-api-access-j8jvp\") pod \"ovnkube-node-4mnrn\" (UID: \"c1723b89-94d5-42b4-a122-a4ec41e15ede\") " pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.053874 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c1723b89-94d5-42b4-a122-a4ec41e15ede-node-log\") pod \"ovnkube-node-4mnrn\" (UID: \"c1723b89-94d5-42b4-a122-a4ec41e15ede\") " pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.053895 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c1723b89-94d5-42b4-a122-a4ec41e15ede-etc-openvswitch\") pod \"ovnkube-node-4mnrn\" (UID: \"c1723b89-94d5-42b4-a122-a4ec41e15ede\") " pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.053912 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c1723b89-94d5-42b4-a122-a4ec41e15ede-systemd-units\") pod \"ovnkube-node-4mnrn\" (UID: \"c1723b89-94d5-42b4-a122-a4ec41e15ede\") " pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.053953 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6jvn\" (UniqueName: \"kubernetes.io/projected/4a11a9b6-2419-4f04-b35e-ba296d70b705-kube-api-access-h6jvn\") on node \"crc\" DevicePath \"\"" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.053966 4919 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4a11a9b6-2419-4f04-b35e-ba296d70b705-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.053979 4919 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-node-log\") on node \"crc\" DevicePath \"\"" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.053988 4919 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-log-socket\") on node \"crc\" DevicePath \"\"" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.053997 4919 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.054006 4919 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.054015 4919 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.054025 4919 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4a11a9b6-2419-4f04-b35e-ba296d70b705-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.054033 4919 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.054042 4919 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.054050 4919 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.054061 4919 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.054069 4919 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.054077 4919 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4a11a9b6-2419-4f04-b35e-ba296d70b705-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.054085 4919 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.054092 4919 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-host-slash\") on node \"crc\" DevicePath \"\"" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.054100 4919 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4a11a9b6-2419-4f04-b35e-ba296d70b705-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.060641 4919 scope.go:117] "RemoveContainer" containerID="95d6b21d85be97a07f83c996deca344aba3e2bd0b249d1ce02f15db105649bcc" Jan 09 13:42:09 crc 
kubenswrapper[4919]: I0109 13:42:09.074386 4919 scope.go:117] "RemoveContainer" containerID="15885cccea72f90cf3317dca9a65041611aa4b8de069779e15d43ad39112f1d4" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.087806 4919 scope.go:117] "RemoveContainer" containerID="4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.105430 4919 scope.go:117] "RemoveContainer" containerID="112a16191d57c6ee6be6fca0acf118455b5cb7e5e70ae064c891221cc18e537b" Jan 09 13:42:09 crc kubenswrapper[4919]: E0109 13:42:09.105983 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"112a16191d57c6ee6be6fca0acf118455b5cb7e5e70ae064c891221cc18e537b\": container with ID starting with 112a16191d57c6ee6be6fca0acf118455b5cb7e5e70ae064c891221cc18e537b not found: ID does not exist" containerID="112a16191d57c6ee6be6fca0acf118455b5cb7e5e70ae064c891221cc18e537b" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.106024 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"112a16191d57c6ee6be6fca0acf118455b5cb7e5e70ae064c891221cc18e537b"} err="failed to get container status \"112a16191d57c6ee6be6fca0acf118455b5cb7e5e70ae064c891221cc18e537b\": rpc error: code = NotFound desc = could not find container \"112a16191d57c6ee6be6fca0acf118455b5cb7e5e70ae064c891221cc18e537b\": container with ID starting with 112a16191d57c6ee6be6fca0acf118455b5cb7e5e70ae064c891221cc18e537b not found: ID does not exist" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.106072 4919 scope.go:117] "RemoveContainer" containerID="af9d1f7638ecbd19ef127f9dedcf9c618013f2e6cbd661173a0eead07c7023a9" Jan 09 13:42:09 crc kubenswrapper[4919]: E0109 13:42:09.106490 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af9d1f7638ecbd19ef127f9dedcf9c618013f2e6cbd661173a0eead07c7023a9\": container with ID starting with af9d1f7638ecbd19ef127f9dedcf9c618013f2e6cbd661173a0eead07c7023a9 not found: ID does not exist" containerID="af9d1f7638ecbd19ef127f9dedcf9c618013f2e6cbd661173a0eead07c7023a9" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.106513 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af9d1f7638ecbd19ef127f9dedcf9c618013f2e6cbd661173a0eead07c7023a9"} err="failed to get container status \"af9d1f7638ecbd19ef127f9dedcf9c618013f2e6cbd661173a0eead07c7023a9\": rpc error: code = NotFound desc = could not find container \"af9d1f7638ecbd19ef127f9dedcf9c618013f2e6cbd661173a0eead07c7023a9\": container with ID starting with af9d1f7638ecbd19ef127f9dedcf9c618013f2e6cbd661173a0eead07c7023a9 not found: ID does not exist" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.106528 4919 scope.go:117] "RemoveContainer" containerID="1618a6491a924e7cd28340bea333585a7c7a634ca063183f21574ed23df002cc" Jan 09 13:42:09 crc kubenswrapper[4919]: E0109 13:42:09.107062 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1618a6491a924e7cd28340bea333585a7c7a634ca063183f21574ed23df002cc\": container with ID starting with 1618a6491a924e7cd28340bea333585a7c7a634ca063183f21574ed23df002cc not found: ID does not exist" containerID="1618a6491a924e7cd28340bea333585a7c7a634ca063183f21574ed23df002cc" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.107086 4919 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1618a6491a924e7cd28340bea333585a7c7a634ca063183f21574ed23df002cc"} err="failed to get container status \"1618a6491a924e7cd28340bea333585a7c7a634ca063183f21574ed23df002cc\": rpc error: code = NotFound desc = could not find container \"1618a6491a924e7cd28340bea333585a7c7a634ca063183f21574ed23df002cc\": container with ID starting with 1618a6491a924e7cd28340bea333585a7c7a634ca063183f21574ed23df002cc not found: ID does not exist" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.107102 4919 scope.go:117] "RemoveContainer" containerID="ec868806819a987b7e96dbe674677a084fd3c8bbb3bdae6dd584d62df43b78f0" Jan 09 13:42:09 crc kubenswrapper[4919]: E0109 13:42:09.107498 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec868806819a987b7e96dbe674677a084fd3c8bbb3bdae6dd584d62df43b78f0\": container with ID starting with ec868806819a987b7e96dbe674677a084fd3c8bbb3bdae6dd584d62df43b78f0 not found: ID does not exist" containerID="ec868806819a987b7e96dbe674677a084fd3c8bbb3bdae6dd584d62df43b78f0" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.107526 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec868806819a987b7e96dbe674677a084fd3c8bbb3bdae6dd584d62df43b78f0"} err="failed to get container status \"ec868806819a987b7e96dbe674677a084fd3c8bbb3bdae6dd584d62df43b78f0\": rpc error: code = NotFound desc = could not find container \"ec868806819a987b7e96dbe674677a084fd3c8bbb3bdae6dd584d62df43b78f0\": container with ID starting with ec868806819a987b7e96dbe674677a084fd3c8bbb3bdae6dd584d62df43b78f0 not found: ID does not exist" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.107541 4919 scope.go:117] "RemoveContainer" containerID="ac368c1c0799b3fe1830e01a310247ed45949ab217cf83d98b487a12e482157e" Jan 09 13:42:09 crc kubenswrapper[4919]: E0109 13:42:09.107778 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac368c1c0799b3fe1830e01a310247ed45949ab217cf83d98b487a12e482157e\": container with ID starting with ac368c1c0799b3fe1830e01a310247ed45949ab217cf83d98b487a12e482157e not found: ID does not exist" containerID="ac368c1c0799b3fe1830e01a310247ed45949ab217cf83d98b487a12e482157e" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.107809 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac368c1c0799b3fe1830e01a310247ed45949ab217cf83d98b487a12e482157e"} err="failed to get container status \"ac368c1c0799b3fe1830e01a310247ed45949ab217cf83d98b487a12e482157e\": rpc error: code = NotFound desc = could not find container \"ac368c1c0799b3fe1830e01a310247ed45949ab217cf83d98b487a12e482157e\": container with ID starting with ac368c1c0799b3fe1830e01a310247ed45949ab217cf83d98b487a12e482157e not found: ID does not exist" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.107831 4919 scope.go:117] "RemoveContainer" containerID="eecea89afe25c537de66df075217db267c79f45aecda1f91183fad854aa80c17" Jan 09 13:42:09 crc kubenswrapper[4919]: E0109 13:42:09.108086 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eecea89afe25c537de66df075217db267c79f45aecda1f91183fad854aa80c17\": container with ID starting with eecea89afe25c537de66df075217db267c79f45aecda1f91183fad854aa80c17 not found: ID does not exist" 
containerID="eecea89afe25c537de66df075217db267c79f45aecda1f91183fad854aa80c17" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.108120 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eecea89afe25c537de66df075217db267c79f45aecda1f91183fad854aa80c17"} err="failed to get container status \"eecea89afe25c537de66df075217db267c79f45aecda1f91183fad854aa80c17\": rpc error: code = NotFound desc = could not find container \"eecea89afe25c537de66df075217db267c79f45aecda1f91183fad854aa80c17\": container with ID starting with eecea89afe25c537de66df075217db267c79f45aecda1f91183fad854aa80c17 not found: ID does not exist" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.108138 4919 scope.go:117] "RemoveContainer" containerID="4f97e00dbff0bc7856400366616b2dfea1022f2dc151e8f1d7cbc5d377639b05" Jan 09 13:42:09 crc kubenswrapper[4919]: E0109 13:42:09.108500 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f97e00dbff0bc7856400366616b2dfea1022f2dc151e8f1d7cbc5d377639b05\": container with ID starting with 4f97e00dbff0bc7856400366616b2dfea1022f2dc151e8f1d7cbc5d377639b05 not found: ID does not exist" containerID="4f97e00dbff0bc7856400366616b2dfea1022f2dc151e8f1d7cbc5d377639b05" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.108523 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f97e00dbff0bc7856400366616b2dfea1022f2dc151e8f1d7cbc5d377639b05"} err="failed to get container status \"4f97e00dbff0bc7856400366616b2dfea1022f2dc151e8f1d7cbc5d377639b05\": rpc error: code = NotFound desc = could not find container \"4f97e00dbff0bc7856400366616b2dfea1022f2dc151e8f1d7cbc5d377639b05\": container with ID starting with 4f97e00dbff0bc7856400366616b2dfea1022f2dc151e8f1d7cbc5d377639b05 not found: ID does not exist" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.108537 4919 scope.go:117] "RemoveContainer" containerID="95d6b21d85be97a07f83c996deca344aba3e2bd0b249d1ce02f15db105649bcc" Jan 09 13:42:09 crc kubenswrapper[4919]: E0109 13:42:09.108840 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95d6b21d85be97a07f83c996deca344aba3e2bd0b249d1ce02f15db105649bcc\": container with ID starting with 95d6b21d85be97a07f83c996deca344aba3e2bd0b249d1ce02f15db105649bcc not found: ID does not exist" containerID="95d6b21d85be97a07f83c996deca344aba3e2bd0b249d1ce02f15db105649bcc" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.108863 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95d6b21d85be97a07f83c996deca344aba3e2bd0b249d1ce02f15db105649bcc"} err="failed to get container status \"95d6b21d85be97a07f83c996deca344aba3e2bd0b249d1ce02f15db105649bcc\": rpc error: code = NotFound desc = could not find container \"95d6b21d85be97a07f83c996deca344aba3e2bd0b249d1ce02f15db105649bcc\": container with ID starting with 95d6b21d85be97a07f83c996deca344aba3e2bd0b249d1ce02f15db105649bcc not found: ID does not exist" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.108878 4919 scope.go:117] "RemoveContainer" containerID="15885cccea72f90cf3317dca9a65041611aa4b8de069779e15d43ad39112f1d4" Jan 09 13:42:09 crc kubenswrapper[4919]: E0109 13:42:09.109250 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"15885cccea72f90cf3317dca9a65041611aa4b8de069779e15d43ad39112f1d4\": container with ID starting with 15885cccea72f90cf3317dca9a65041611aa4b8de069779e15d43ad39112f1d4 not found: ID does not exist" containerID="15885cccea72f90cf3317dca9a65041611aa4b8de069779e15d43ad39112f1d4" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.109288 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15885cccea72f90cf3317dca9a65041611aa4b8de069779e15d43ad39112f1d4"} err="failed to get container status \"15885cccea72f90cf3317dca9a65041611aa4b8de069779e15d43ad39112f1d4\": rpc error: code = NotFound desc = could not find container \"15885cccea72f90cf3317dca9a65041611aa4b8de069779e15d43ad39112f1d4\": container with ID starting with 15885cccea72f90cf3317dca9a65041611aa4b8de069779e15d43ad39112f1d4 not found: ID does not exist" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.109305 4919 scope.go:117] "RemoveContainer" containerID="4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f" Jan 09 13:42:09 crc kubenswrapper[4919]: E0109 13:42:09.109573 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\": container with ID starting with 4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f not found: ID does not exist" containerID="4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.109602 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f"} err="failed to get container status \"4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\": rpc error: code = NotFound desc = could not find container \"4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\": container with ID starting with 4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f not found: ID does not exist" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.109618 4919 scope.go:117] "RemoveContainer" containerID="112a16191d57c6ee6be6fca0acf118455b5cb7e5e70ae064c891221cc18e537b" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.109977 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"112a16191d57c6ee6be6fca0acf118455b5cb7e5e70ae064c891221cc18e537b"} err="failed to get container status \"112a16191d57c6ee6be6fca0acf118455b5cb7e5e70ae064c891221cc18e537b\": rpc error: code = NotFound desc = could not find container \"112a16191d57c6ee6be6fca0acf118455b5cb7e5e70ae064c891221cc18e537b\": container with ID starting with 112a16191d57c6ee6be6fca0acf118455b5cb7e5e70ae064c891221cc18e537b not found: ID does not exist" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.110048 4919 scope.go:117] "RemoveContainer" containerID="af9d1f7638ecbd19ef127f9dedcf9c618013f2e6cbd661173a0eead07c7023a9" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.110582 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af9d1f7638ecbd19ef127f9dedcf9c618013f2e6cbd661173a0eead07c7023a9"} err="failed to get container status \"af9d1f7638ecbd19ef127f9dedcf9c618013f2e6cbd661173a0eead07c7023a9\": rpc error: code = NotFound desc = could not find container \"af9d1f7638ecbd19ef127f9dedcf9c618013f2e6cbd661173a0eead07c7023a9\": container with ID starting with 
af9d1f7638ecbd19ef127f9dedcf9c618013f2e6cbd661173a0eead07c7023a9 not found: ID does not exist" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.110659 4919 scope.go:117] "RemoveContainer" containerID="1618a6491a924e7cd28340bea333585a7c7a634ca063183f21574ed23df002cc" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.111045 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1618a6491a924e7cd28340bea333585a7c7a634ca063183f21574ed23df002cc"} err="failed to get container status \"1618a6491a924e7cd28340bea333585a7c7a634ca063183f21574ed23df002cc\": rpc error: code = NotFound desc = could not find container \"1618a6491a924e7cd28340bea333585a7c7a634ca063183f21574ed23df002cc\": container with ID starting with 1618a6491a924e7cd28340bea333585a7c7a634ca063183f21574ed23df002cc not found: ID does not exist" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.111073 4919 scope.go:117] "RemoveContainer" containerID="ec868806819a987b7e96dbe674677a084fd3c8bbb3bdae6dd584d62df43b78f0" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.111367 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec868806819a987b7e96dbe674677a084fd3c8bbb3bdae6dd584d62df43b78f0"} err="failed to get container status \"ec868806819a987b7e96dbe674677a084fd3c8bbb3bdae6dd584d62df43b78f0\": rpc error: code = NotFound desc = could not find container \"ec868806819a987b7e96dbe674677a084fd3c8bbb3bdae6dd584d62df43b78f0\": container with ID starting with ec868806819a987b7e96dbe674677a084fd3c8bbb3bdae6dd584d62df43b78f0 not found: ID does not exist" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.111398 4919 scope.go:117] "RemoveContainer" containerID="ac368c1c0799b3fe1830e01a310247ed45949ab217cf83d98b487a12e482157e" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.111606 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac368c1c0799b3fe1830e01a310247ed45949ab217cf83d98b487a12e482157e"} err="failed to get container status \"ac368c1c0799b3fe1830e01a310247ed45949ab217cf83d98b487a12e482157e\": rpc error: code = NotFound desc = could not find container \"ac368c1c0799b3fe1830e01a310247ed45949ab217cf83d98b487a12e482157e\": container with ID starting with ac368c1c0799b3fe1830e01a310247ed45949ab217cf83d98b487a12e482157e not found: ID does not exist" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.111623 4919 scope.go:117] "RemoveContainer" containerID="eecea89afe25c537de66df075217db267c79f45aecda1f91183fad854aa80c17" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.111961 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eecea89afe25c537de66df075217db267c79f45aecda1f91183fad854aa80c17"} err="failed to get container status \"eecea89afe25c537de66df075217db267c79f45aecda1f91183fad854aa80c17\": rpc error: code = NotFound desc = could not find container \"eecea89afe25c537de66df075217db267c79f45aecda1f91183fad854aa80c17\": container with ID starting with eecea89afe25c537de66df075217db267c79f45aecda1f91183fad854aa80c17 not found: ID does not exist" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.111985 4919 scope.go:117] "RemoveContainer" containerID="4f97e00dbff0bc7856400366616b2dfea1022f2dc151e8f1d7cbc5d377639b05" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.112260 4919 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4f97e00dbff0bc7856400366616b2dfea1022f2dc151e8f1d7cbc5d377639b05"} err="failed to get container status \"4f97e00dbff0bc7856400366616b2dfea1022f2dc151e8f1d7cbc5d377639b05\": rpc error: code = NotFound desc = could not find container \"4f97e00dbff0bc7856400366616b2dfea1022f2dc151e8f1d7cbc5d377639b05\": container with ID starting with 4f97e00dbff0bc7856400366616b2dfea1022f2dc151e8f1d7cbc5d377639b05 not found: ID does not exist" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.112283 4919 scope.go:117] "RemoveContainer" containerID="95d6b21d85be97a07f83c996deca344aba3e2bd0b249d1ce02f15db105649bcc" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.112487 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95d6b21d85be97a07f83c996deca344aba3e2bd0b249d1ce02f15db105649bcc"} err="failed to get container status \"95d6b21d85be97a07f83c996deca344aba3e2bd0b249d1ce02f15db105649bcc\": rpc error: code = NotFound desc = could not find container \"95d6b21d85be97a07f83c996deca344aba3e2bd0b249d1ce02f15db105649bcc\": container with ID starting with 95d6b21d85be97a07f83c996deca344aba3e2bd0b249d1ce02f15db105649bcc not found: ID does not exist" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.112524 4919 scope.go:117] "RemoveContainer" containerID="15885cccea72f90cf3317dca9a65041611aa4b8de069779e15d43ad39112f1d4" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.112768 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15885cccea72f90cf3317dca9a65041611aa4b8de069779e15d43ad39112f1d4"} err="failed to get container status \"15885cccea72f90cf3317dca9a65041611aa4b8de069779e15d43ad39112f1d4\": rpc error: code = NotFound desc = could not find container \"15885cccea72f90cf3317dca9a65041611aa4b8de069779e15d43ad39112f1d4\": container with ID starting with 15885cccea72f90cf3317dca9a65041611aa4b8de069779e15d43ad39112f1d4 not found: ID does not exist" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.112806 4919 scope.go:117] "RemoveContainer" containerID="4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.113090 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f"} err="failed to get container status \"4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\": rpc error: code = NotFound desc = could not find container \"4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\": container with ID starting with 4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f not found: ID does not exist" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.113140 4919 scope.go:117] "RemoveContainer" containerID="112a16191d57c6ee6be6fca0acf118455b5cb7e5e70ae064c891221cc18e537b" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.113376 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"112a16191d57c6ee6be6fca0acf118455b5cb7e5e70ae064c891221cc18e537b"} err="failed to get container status \"112a16191d57c6ee6be6fca0acf118455b5cb7e5e70ae064c891221cc18e537b\": rpc error: code = NotFound desc = could not find container \"112a16191d57c6ee6be6fca0acf118455b5cb7e5e70ae064c891221cc18e537b\": container with ID starting with 112a16191d57c6ee6be6fca0acf118455b5cb7e5e70ae064c891221cc18e537b not found: ID does not exist" Jan 
09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.113392 4919 scope.go:117] "RemoveContainer" containerID="af9d1f7638ecbd19ef127f9dedcf9c618013f2e6cbd661173a0eead07c7023a9" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.113608 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af9d1f7638ecbd19ef127f9dedcf9c618013f2e6cbd661173a0eead07c7023a9"} err="failed to get container status \"af9d1f7638ecbd19ef127f9dedcf9c618013f2e6cbd661173a0eead07c7023a9\": rpc error: code = NotFound desc = could not find container \"af9d1f7638ecbd19ef127f9dedcf9c618013f2e6cbd661173a0eead07c7023a9\": container with ID starting with af9d1f7638ecbd19ef127f9dedcf9c618013f2e6cbd661173a0eead07c7023a9 not found: ID does not exist" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.113624 4919 scope.go:117] "RemoveContainer" containerID="1618a6491a924e7cd28340bea333585a7c7a634ca063183f21574ed23df002cc" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.113884 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1618a6491a924e7cd28340bea333585a7c7a634ca063183f21574ed23df002cc"} err="failed to get container status \"1618a6491a924e7cd28340bea333585a7c7a634ca063183f21574ed23df002cc\": rpc error: code = NotFound desc = could not find container \"1618a6491a924e7cd28340bea333585a7c7a634ca063183f21574ed23df002cc\": container with ID starting with 1618a6491a924e7cd28340bea333585a7c7a634ca063183f21574ed23df002cc not found: ID does not exist" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.113902 4919 scope.go:117] "RemoveContainer" containerID="ec868806819a987b7e96dbe674677a084fd3c8bbb3bdae6dd584d62df43b78f0" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.114114 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec868806819a987b7e96dbe674677a084fd3c8bbb3bdae6dd584d62df43b78f0"} err="failed to get container status \"ec868806819a987b7e96dbe674677a084fd3c8bbb3bdae6dd584d62df43b78f0\": rpc error: code = NotFound desc = could not find container \"ec868806819a987b7e96dbe674677a084fd3c8bbb3bdae6dd584d62df43b78f0\": container with ID starting with ec868806819a987b7e96dbe674677a084fd3c8bbb3bdae6dd584d62df43b78f0 not found: ID does not exist" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.114139 4919 scope.go:117] "RemoveContainer" containerID="ac368c1c0799b3fe1830e01a310247ed45949ab217cf83d98b487a12e482157e" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.114404 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac368c1c0799b3fe1830e01a310247ed45949ab217cf83d98b487a12e482157e"} err="failed to get container status \"ac368c1c0799b3fe1830e01a310247ed45949ab217cf83d98b487a12e482157e\": rpc error: code = NotFound desc = could not find container \"ac368c1c0799b3fe1830e01a310247ed45949ab217cf83d98b487a12e482157e\": container with ID starting with ac368c1c0799b3fe1830e01a310247ed45949ab217cf83d98b487a12e482157e not found: ID does not exist" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.114422 4919 scope.go:117] "RemoveContainer" containerID="eecea89afe25c537de66df075217db267c79f45aecda1f91183fad854aa80c17" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.114631 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eecea89afe25c537de66df075217db267c79f45aecda1f91183fad854aa80c17"} err="failed to get container status 
\"eecea89afe25c537de66df075217db267c79f45aecda1f91183fad854aa80c17\": rpc error: code = NotFound desc = could not find container \"eecea89afe25c537de66df075217db267c79f45aecda1f91183fad854aa80c17\": container with ID starting with eecea89afe25c537de66df075217db267c79f45aecda1f91183fad854aa80c17 not found: ID does not exist" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.114690 4919 scope.go:117] "RemoveContainer" containerID="4f97e00dbff0bc7856400366616b2dfea1022f2dc151e8f1d7cbc5d377639b05" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.115085 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f97e00dbff0bc7856400366616b2dfea1022f2dc151e8f1d7cbc5d377639b05"} err="failed to get container status \"4f97e00dbff0bc7856400366616b2dfea1022f2dc151e8f1d7cbc5d377639b05\": rpc error: code = NotFound desc = could not find container \"4f97e00dbff0bc7856400366616b2dfea1022f2dc151e8f1d7cbc5d377639b05\": container with ID starting with 4f97e00dbff0bc7856400366616b2dfea1022f2dc151e8f1d7cbc5d377639b05 not found: ID does not exist" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.115121 4919 scope.go:117] "RemoveContainer" containerID="95d6b21d85be97a07f83c996deca344aba3e2bd0b249d1ce02f15db105649bcc" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.115340 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95d6b21d85be97a07f83c996deca344aba3e2bd0b249d1ce02f15db105649bcc"} err="failed to get container status \"95d6b21d85be97a07f83c996deca344aba3e2bd0b249d1ce02f15db105649bcc\": rpc error: code = NotFound desc = could not find container \"95d6b21d85be97a07f83c996deca344aba3e2bd0b249d1ce02f15db105649bcc\": container with ID starting with 95d6b21d85be97a07f83c996deca344aba3e2bd0b249d1ce02f15db105649bcc not found: ID does not exist" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.115360 4919 scope.go:117] "RemoveContainer" containerID="15885cccea72f90cf3317dca9a65041611aa4b8de069779e15d43ad39112f1d4" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.115548 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15885cccea72f90cf3317dca9a65041611aa4b8de069779e15d43ad39112f1d4"} err="failed to get container status \"15885cccea72f90cf3317dca9a65041611aa4b8de069779e15d43ad39112f1d4\": rpc error: code = NotFound desc = could not find container \"15885cccea72f90cf3317dca9a65041611aa4b8de069779e15d43ad39112f1d4\": container with ID starting with 15885cccea72f90cf3317dca9a65041611aa4b8de069779e15d43ad39112f1d4 not found: ID does not exist" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.115569 4919 scope.go:117] "RemoveContainer" containerID="4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.115927 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f"} err="failed to get container status \"4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\": rpc error: code = NotFound desc = could not find container \"4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\": container with ID starting with 4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f not found: ID does not exist" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.115973 4919 scope.go:117] "RemoveContainer" 
containerID="112a16191d57c6ee6be6fca0acf118455b5cb7e5e70ae064c891221cc18e537b" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.116258 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"112a16191d57c6ee6be6fca0acf118455b5cb7e5e70ae064c891221cc18e537b"} err="failed to get container status \"112a16191d57c6ee6be6fca0acf118455b5cb7e5e70ae064c891221cc18e537b\": rpc error: code = NotFound desc = could not find container \"112a16191d57c6ee6be6fca0acf118455b5cb7e5e70ae064c891221cc18e537b\": container with ID starting with 112a16191d57c6ee6be6fca0acf118455b5cb7e5e70ae064c891221cc18e537b not found: ID does not exist" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.116281 4919 scope.go:117] "RemoveContainer" containerID="af9d1f7638ecbd19ef127f9dedcf9c618013f2e6cbd661173a0eead07c7023a9" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.116516 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af9d1f7638ecbd19ef127f9dedcf9c618013f2e6cbd661173a0eead07c7023a9"} err="failed to get container status \"af9d1f7638ecbd19ef127f9dedcf9c618013f2e6cbd661173a0eead07c7023a9\": rpc error: code = NotFound desc = could not find container \"af9d1f7638ecbd19ef127f9dedcf9c618013f2e6cbd661173a0eead07c7023a9\": container with ID starting with af9d1f7638ecbd19ef127f9dedcf9c618013f2e6cbd661173a0eead07c7023a9 not found: ID does not exist" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.116541 4919 scope.go:117] "RemoveContainer" containerID="1618a6491a924e7cd28340bea333585a7c7a634ca063183f21574ed23df002cc" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.117012 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1618a6491a924e7cd28340bea333585a7c7a634ca063183f21574ed23df002cc"} err="failed to get container status \"1618a6491a924e7cd28340bea333585a7c7a634ca063183f21574ed23df002cc\": rpc error: code = NotFound desc = could not find container \"1618a6491a924e7cd28340bea333585a7c7a634ca063183f21574ed23df002cc\": container with ID starting with 1618a6491a924e7cd28340bea333585a7c7a634ca063183f21574ed23df002cc not found: ID does not exist" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.117043 4919 scope.go:117] "RemoveContainer" containerID="ec868806819a987b7e96dbe674677a084fd3c8bbb3bdae6dd584d62df43b78f0" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.117307 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec868806819a987b7e96dbe674677a084fd3c8bbb3bdae6dd584d62df43b78f0"} err="failed to get container status \"ec868806819a987b7e96dbe674677a084fd3c8bbb3bdae6dd584d62df43b78f0\": rpc error: code = NotFound desc = could not find container \"ec868806819a987b7e96dbe674677a084fd3c8bbb3bdae6dd584d62df43b78f0\": container with ID starting with ec868806819a987b7e96dbe674677a084fd3c8bbb3bdae6dd584d62df43b78f0 not found: ID does not exist" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.117328 4919 scope.go:117] "RemoveContainer" containerID="ac368c1c0799b3fe1830e01a310247ed45949ab217cf83d98b487a12e482157e" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.117705 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac368c1c0799b3fe1830e01a310247ed45949ab217cf83d98b487a12e482157e"} err="failed to get container status \"ac368c1c0799b3fe1830e01a310247ed45949ab217cf83d98b487a12e482157e\": rpc error: code = NotFound desc = could not find 
container \"ac368c1c0799b3fe1830e01a310247ed45949ab217cf83d98b487a12e482157e\": container with ID starting with ac368c1c0799b3fe1830e01a310247ed45949ab217cf83d98b487a12e482157e not found: ID does not exist" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.117748 4919 scope.go:117] "RemoveContainer" containerID="eecea89afe25c537de66df075217db267c79f45aecda1f91183fad854aa80c17" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.118029 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eecea89afe25c537de66df075217db267c79f45aecda1f91183fad854aa80c17"} err="failed to get container status \"eecea89afe25c537de66df075217db267c79f45aecda1f91183fad854aa80c17\": rpc error: code = NotFound desc = could not find container \"eecea89afe25c537de66df075217db267c79f45aecda1f91183fad854aa80c17\": container with ID starting with eecea89afe25c537de66df075217db267c79f45aecda1f91183fad854aa80c17 not found: ID does not exist" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.118071 4919 scope.go:117] "RemoveContainer" containerID="4f97e00dbff0bc7856400366616b2dfea1022f2dc151e8f1d7cbc5d377639b05" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.118329 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f97e00dbff0bc7856400366616b2dfea1022f2dc151e8f1d7cbc5d377639b05"} err="failed to get container status \"4f97e00dbff0bc7856400366616b2dfea1022f2dc151e8f1d7cbc5d377639b05\": rpc error: code = NotFound desc = could not find container \"4f97e00dbff0bc7856400366616b2dfea1022f2dc151e8f1d7cbc5d377639b05\": container with ID starting with 4f97e00dbff0bc7856400366616b2dfea1022f2dc151e8f1d7cbc5d377639b05 not found: ID does not exist" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.118356 4919 scope.go:117] "RemoveContainer" containerID="95d6b21d85be97a07f83c996deca344aba3e2bd0b249d1ce02f15db105649bcc" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.118583 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95d6b21d85be97a07f83c996deca344aba3e2bd0b249d1ce02f15db105649bcc"} err="failed to get container status \"95d6b21d85be97a07f83c996deca344aba3e2bd0b249d1ce02f15db105649bcc\": rpc error: code = NotFound desc = could not find container \"95d6b21d85be97a07f83c996deca344aba3e2bd0b249d1ce02f15db105649bcc\": container with ID starting with 95d6b21d85be97a07f83c996deca344aba3e2bd0b249d1ce02f15db105649bcc not found: ID does not exist" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.118627 4919 scope.go:117] "RemoveContainer" containerID="15885cccea72f90cf3317dca9a65041611aa4b8de069779e15d43ad39112f1d4" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.118891 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15885cccea72f90cf3317dca9a65041611aa4b8de069779e15d43ad39112f1d4"} err="failed to get container status \"15885cccea72f90cf3317dca9a65041611aa4b8de069779e15d43ad39112f1d4\": rpc error: code = NotFound desc = could not find container \"15885cccea72f90cf3317dca9a65041611aa4b8de069779e15d43ad39112f1d4\": container with ID starting with 15885cccea72f90cf3317dca9a65041611aa4b8de069779e15d43ad39112f1d4 not found: ID does not exist" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.118921 4919 scope.go:117] "RemoveContainer" containerID="4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.119202 4919 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f"} err="failed to get container status \"4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\": rpc error: code = NotFound desc = could not find container \"4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f\": container with ID starting with 4f58b59f7d17b6da7ef8ab61df3c8b9944b72ff295f0852bc0c4e99cb864969f not found: ID does not exist" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.155836 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c1723b89-94d5-42b4-a122-a4ec41e15ede-host-cni-netd\") pod \"ovnkube-node-4mnrn\" (UID: \"c1723b89-94d5-42b4-a122-a4ec41e15ede\") " pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.155922 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c1723b89-94d5-42b4-a122-a4ec41e15ede-host-run-ovn-kubernetes\") pod \"ovnkube-node-4mnrn\" (UID: \"c1723b89-94d5-42b4-a122-a4ec41e15ede\") " pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.155965 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c1723b89-94d5-42b4-a122-a4ec41e15ede-host-run-ovn-kubernetes\") pod \"ovnkube-node-4mnrn\" (UID: \"c1723b89-94d5-42b4-a122-a4ec41e15ede\") " pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.155975 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c1723b89-94d5-42b4-a122-a4ec41e15ede-log-socket\") pod \"ovnkube-node-4mnrn\" (UID: \"c1723b89-94d5-42b4-a122-a4ec41e15ede\") " pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.156031 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c1723b89-94d5-42b4-a122-a4ec41e15ede-log-socket\") pod \"ovnkube-node-4mnrn\" (UID: \"c1723b89-94d5-42b4-a122-a4ec41e15ede\") " pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.155928 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c1723b89-94d5-42b4-a122-a4ec41e15ede-host-cni-netd\") pod \"ovnkube-node-4mnrn\" (UID: \"c1723b89-94d5-42b4-a122-a4ec41e15ede\") " pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.156124 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c1723b89-94d5-42b4-a122-a4ec41e15ede-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4mnrn\" (UID: \"c1723b89-94d5-42b4-a122-a4ec41e15ede\") " pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.156348 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c1723b89-94d5-42b4-a122-a4ec41e15ede-run-ovn\") pod \"ovnkube-node-4mnrn\" (UID: 
\"c1723b89-94d5-42b4-a122-a4ec41e15ede\") " pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.156396 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c1723b89-94d5-42b4-a122-a4ec41e15ede-run-ovn\") pod \"ovnkube-node-4mnrn\" (UID: \"c1723b89-94d5-42b4-a122-a4ec41e15ede\") " pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.156326 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c1723b89-94d5-42b4-a122-a4ec41e15ede-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4mnrn\" (UID: \"c1723b89-94d5-42b4-a122-a4ec41e15ede\") " pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.156429 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c1723b89-94d5-42b4-a122-a4ec41e15ede-host-run-netns\") pod \"ovnkube-node-4mnrn\" (UID: \"c1723b89-94d5-42b4-a122-a4ec41e15ede\") " pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.156465 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c1723b89-94d5-42b4-a122-a4ec41e15ede-host-kubelet\") pod \"ovnkube-node-4mnrn\" (UID: \"c1723b89-94d5-42b4-a122-a4ec41e15ede\") " pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.156529 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c1723b89-94d5-42b4-a122-a4ec41e15ede-ovnkube-script-lib\") pod \"ovnkube-node-4mnrn\" (UID: \"c1723b89-94d5-42b4-a122-a4ec41e15ede\") " pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.156535 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c1723b89-94d5-42b4-a122-a4ec41e15ede-host-run-netns\") pod \"ovnkube-node-4mnrn\" (UID: \"c1723b89-94d5-42b4-a122-a4ec41e15ede\") " pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.156574 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c1723b89-94d5-42b4-a122-a4ec41e15ede-host-slash\") pod \"ovnkube-node-4mnrn\" (UID: \"c1723b89-94d5-42b4-a122-a4ec41e15ede\") " pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.156601 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c1723b89-94d5-42b4-a122-a4ec41e15ede-host-kubelet\") pod \"ovnkube-node-4mnrn\" (UID: \"c1723b89-94d5-42b4-a122-a4ec41e15ede\") " pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.156753 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c1723b89-94d5-42b4-a122-a4ec41e15ede-host-cni-bin\") pod \"ovnkube-node-4mnrn\" (UID: \"c1723b89-94d5-42b4-a122-a4ec41e15ede\") " pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 
13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.156791 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c1723b89-94d5-42b4-a122-a4ec41e15ede-run-openvswitch\") pod \"ovnkube-node-4mnrn\" (UID: \"c1723b89-94d5-42b4-a122-a4ec41e15ede\") " pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.156829 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c1723b89-94d5-42b4-a122-a4ec41e15ede-ovnkube-config\") pod \"ovnkube-node-4mnrn\" (UID: \"c1723b89-94d5-42b4-a122-a4ec41e15ede\") " pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.156867 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c1723b89-94d5-42b4-a122-a4ec41e15ede-host-cni-bin\") pod \"ovnkube-node-4mnrn\" (UID: \"c1723b89-94d5-42b4-a122-a4ec41e15ede\") " pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.156882 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c1723b89-94d5-42b4-a122-a4ec41e15ede-host-slash\") pod \"ovnkube-node-4mnrn\" (UID: \"c1723b89-94d5-42b4-a122-a4ec41e15ede\") " pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.156982 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c1723b89-94d5-42b4-a122-a4ec41e15ede-run-openvswitch\") pod \"ovnkube-node-4mnrn\" (UID: \"c1723b89-94d5-42b4-a122-a4ec41e15ede\") " pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.157041 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c1723b89-94d5-42b4-a122-a4ec41e15ede-run-systemd\") pod \"ovnkube-node-4mnrn\" (UID: \"c1723b89-94d5-42b4-a122-a4ec41e15ede\") " pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.157007 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c1723b89-94d5-42b4-a122-a4ec41e15ede-run-systemd\") pod \"ovnkube-node-4mnrn\" (UID: \"c1723b89-94d5-42b4-a122-a4ec41e15ede\") " pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.157103 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c1723b89-94d5-42b4-a122-a4ec41e15ede-var-lib-openvswitch\") pod \"ovnkube-node-4mnrn\" (UID: \"c1723b89-94d5-42b4-a122-a4ec41e15ede\") " pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.157153 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c1723b89-94d5-42b4-a122-a4ec41e15ede-env-overrides\") pod \"ovnkube-node-4mnrn\" (UID: \"c1723b89-94d5-42b4-a122-a4ec41e15ede\") " pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.157170 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c1723b89-94d5-42b4-a122-a4ec41e15ede-ovn-node-metrics-cert\") pod \"ovnkube-node-4mnrn\" (UID: \"c1723b89-94d5-42b4-a122-a4ec41e15ede\") " pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.157199 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8jvp\" (UniqueName: \"kubernetes.io/projected/c1723b89-94d5-42b4-a122-a4ec41e15ede-kube-api-access-j8jvp\") pod \"ovnkube-node-4mnrn\" (UID: \"c1723b89-94d5-42b4-a122-a4ec41e15ede\") " pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.157241 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c1723b89-94d5-42b4-a122-a4ec41e15ede-node-log\") pod \"ovnkube-node-4mnrn\" (UID: \"c1723b89-94d5-42b4-a122-a4ec41e15ede\") " pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.157243 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c1723b89-94d5-42b4-a122-a4ec41e15ede-var-lib-openvswitch\") pod \"ovnkube-node-4mnrn\" (UID: \"c1723b89-94d5-42b4-a122-a4ec41e15ede\") " pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.157271 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c1723b89-94d5-42b4-a122-a4ec41e15ede-systemd-units\") pod \"ovnkube-node-4mnrn\" (UID: \"c1723b89-94d5-42b4-a122-a4ec41e15ede\") " pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.157288 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c1723b89-94d5-42b4-a122-a4ec41e15ede-etc-openvswitch\") pod \"ovnkube-node-4mnrn\" (UID: \"c1723b89-94d5-42b4-a122-a4ec41e15ede\") " pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.157375 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c1723b89-94d5-42b4-a122-a4ec41e15ede-ovnkube-script-lib\") pod \"ovnkube-node-4mnrn\" (UID: \"c1723b89-94d5-42b4-a122-a4ec41e15ede\") " pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.157382 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c1723b89-94d5-42b4-a122-a4ec41e15ede-etc-openvswitch\") pod \"ovnkube-node-4mnrn\" (UID: \"c1723b89-94d5-42b4-a122-a4ec41e15ede\") " pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.157415 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c1723b89-94d5-42b4-a122-a4ec41e15ede-node-log\") pod \"ovnkube-node-4mnrn\" (UID: \"c1723b89-94d5-42b4-a122-a4ec41e15ede\") " pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.157436 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c1723b89-94d5-42b4-a122-a4ec41e15ede-systemd-units\") pod 
\"ovnkube-node-4mnrn\" (UID: \"c1723b89-94d5-42b4-a122-a4ec41e15ede\") " pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.157659 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-w74hl"] Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.158508 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c1723b89-94d5-42b4-a122-a4ec41e15ede-env-overrides\") pod \"ovnkube-node-4mnrn\" (UID: \"c1723b89-94d5-42b4-a122-a4ec41e15ede\") " pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.158994 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c1723b89-94d5-42b4-a122-a4ec41e15ede-ovnkube-config\") pod \"ovnkube-node-4mnrn\" (UID: \"c1723b89-94d5-42b4-a122-a4ec41e15ede\") " pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.165149 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c1723b89-94d5-42b4-a122-a4ec41e15ede-ovn-node-metrics-cert\") pod \"ovnkube-node-4mnrn\" (UID: \"c1723b89-94d5-42b4-a122-a4ec41e15ede\") " pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.167420 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-w74hl"] Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.181313 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8jvp\" (UniqueName: \"kubernetes.io/projected/c1723b89-94d5-42b4-a122-a4ec41e15ede-kube-api-access-j8jvp\") pod \"ovnkube-node-4mnrn\" (UID: \"c1723b89-94d5-42b4-a122-a4ec41e15ede\") " pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.247377 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.834529 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kgw8v_11e19b4a-0888-460f-bf97-5dd0ddda6e8c/kube-multus/2.log" Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.836710 4919 generic.go:334] "Generic (PLEG): container finished" podID="c1723b89-94d5-42b4-a122-a4ec41e15ede" containerID="96886a23ea16d51451b4f88ac18a8f292782d07c17c852bd6e4dbf988b330c56" exitCode=0 Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.836812 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" event={"ID":"c1723b89-94d5-42b4-a122-a4ec41e15ede","Type":"ContainerDied","Data":"96886a23ea16d51451b4f88ac18a8f292782d07c17c852bd6e4dbf988b330c56"} Jan 09 13:42:09 crc kubenswrapper[4919]: I0109 13:42:09.836879 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" event={"ID":"c1723b89-94d5-42b4-a122-a4ec41e15ede","Type":"ContainerStarted","Data":"1e0a59d0fb6f94b874598077bbe93f1f014fd72d182a00cdd9863ca171b68b27"} Jan 09 13:42:10 crc kubenswrapper[4919]: I0109 13:42:10.759373 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a11a9b6-2419-4f04-b35e-ba296d70b705" path="/var/lib/kubelet/pods/4a11a9b6-2419-4f04-b35e-ba296d70b705/volumes" Jan 09 13:42:10 crc kubenswrapper[4919]: I0109 13:42:10.846293 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" event={"ID":"c1723b89-94d5-42b4-a122-a4ec41e15ede","Type":"ContainerStarted","Data":"a3058145e4ac7e10f8c771aba51b5d3d0c879b79021f6ddde5e54d9f79b65ed6"} Jan 09 13:42:10 crc kubenswrapper[4919]: I0109 13:42:10.846334 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" event={"ID":"c1723b89-94d5-42b4-a122-a4ec41e15ede","Type":"ContainerStarted","Data":"2db00e5917b24641336f9d781d201558f42318cfde6e94faa3ba23dc5581354b"} Jan 09 13:42:10 crc kubenswrapper[4919]: I0109 13:42:10.846346 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" event={"ID":"c1723b89-94d5-42b4-a122-a4ec41e15ede","Type":"ContainerStarted","Data":"0aa13bfd4bfa54789c9468eae6c0871ba6ac9365695f296e834484a6e8ca8a6f"} Jan 09 13:42:10 crc kubenswrapper[4919]: I0109 13:42:10.846356 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" event={"ID":"c1723b89-94d5-42b4-a122-a4ec41e15ede","Type":"ContainerStarted","Data":"8aaf305330f4ea7969b369ed2818f0e91227ec2516350b89db0dd83777e3ab41"} Jan 09 13:42:11 crc kubenswrapper[4919]: I0109 13:42:11.854909 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" event={"ID":"c1723b89-94d5-42b4-a122-a4ec41e15ede","Type":"ContainerStarted","Data":"37812618e29700af13da244369f56b89c6a1162feec77eea9184b752a173cd1f"} Jan 09 13:42:11 crc kubenswrapper[4919]: I0109 13:42:11.855198 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" event={"ID":"c1723b89-94d5-42b4-a122-a4ec41e15ede","Type":"ContainerStarted","Data":"dde884e4abb3515b1341206d7980dbc7ed4123c045f4db95a2c257f77d7b4a64"} Jan 09 13:42:13 crc kubenswrapper[4919]: I0109 13:42:13.868848 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" 
event={"ID":"c1723b89-94d5-42b4-a122-a4ec41e15ede","Type":"ContainerStarted","Data":"d42231f39aa8cafedd5e7d1e4c31242d87429a2356e8e4ceb565f5718220ca17"} Jan 09 13:42:16 crc kubenswrapper[4919]: I0109 13:42:16.889779 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" event={"ID":"c1723b89-94d5-42b4-a122-a4ec41e15ede","Type":"ContainerStarted","Data":"1cdc2b3316806158b2d814d531d61332ad55044cc5fb134e698c7dc65d5eddba"} Jan 09 13:42:16 crc kubenswrapper[4919]: I0109 13:42:16.890752 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:16 crc kubenswrapper[4919]: I0109 13:42:16.890775 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:16 crc kubenswrapper[4919]: I0109 13:42:16.890787 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:16 crc kubenswrapper[4919]: I0109 13:42:16.926703 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" podStartSLOduration=8.926685048 podStartE2EDuration="8.926685048s" podCreationTimestamp="2026-01-09 13:42:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:42:16.924171317 +0000 UTC m=+716.472010767" watchObservedRunningTime="2026-01-09 13:42:16.926685048 +0000 UTC m=+716.474524498" Jan 09 13:42:16 crc kubenswrapper[4919]: I0109 13:42:16.929959 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:16 crc kubenswrapper[4919]: I0109 13:42:16.931904 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:21 crc kubenswrapper[4919]: I0109 13:42:21.159483 4919 scope.go:117] "RemoveContainer" containerID="cf6aefc7e41298d621d204ac30af71319ec9db84476b17bff6ed734fcfafde69" Jan 09 13:42:21 crc kubenswrapper[4919]: I0109 13:42:21.184668 4919 scope.go:117] "RemoveContainer" containerID="45cbfde240935359fe78fd0c10e926dea75c3d73d6afe650ced3f387a066a32a" Jan 09 13:42:21 crc kubenswrapper[4919]: I0109 13:42:21.246938 4919 patch_prober.go:28] interesting pod/machine-config-daemon-9m5lv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 13:42:21 crc kubenswrapper[4919]: I0109 13:42:21.247294 4919 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 13:42:22 crc kubenswrapper[4919]: I0109 13:42:22.752164 4919 scope.go:117] "RemoveContainer" containerID="d5dedf26e5ff4665f09eceaa03a030632058e239d6a30d55b68dc35f2529731a" Jan 09 13:42:22 crc kubenswrapper[4919]: E0109 13:42:22.752804 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus 
pod=multus-kgw8v_openshift-multus(11e19b4a-0888-460f-bf97-5dd0ddda6e8c)\"" pod="openshift-multus/multus-kgw8v" podUID="11e19b4a-0888-460f-bf97-5dd0ddda6e8c" Jan 09 13:42:30 crc kubenswrapper[4919]: I0109 13:42:30.148414 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5"] Jan 09 13:42:30 crc kubenswrapper[4919]: I0109 13:42:30.150004 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5" Jan 09 13:42:30 crc kubenswrapper[4919]: I0109 13:42:30.151983 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 09 13:42:30 crc kubenswrapper[4919]: I0109 13:42:30.160607 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5"] Jan 09 13:42:30 crc kubenswrapper[4919]: I0109 13:42:30.215967 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b19ed9ae-a65d-4d84-ba74-e2055655c7b8-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5\" (UID: \"b19ed9ae-a65d-4d84-ba74-e2055655c7b8\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5" Jan 09 13:42:30 crc kubenswrapper[4919]: I0109 13:42:30.216041 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96zpq\" (UniqueName: \"kubernetes.io/projected/b19ed9ae-a65d-4d84-ba74-e2055655c7b8-kube-api-access-96zpq\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5\" (UID: \"b19ed9ae-a65d-4d84-ba74-e2055655c7b8\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5" Jan 09 13:42:30 crc kubenswrapper[4919]: I0109 13:42:30.216203 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b19ed9ae-a65d-4d84-ba74-e2055655c7b8-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5\" (UID: \"b19ed9ae-a65d-4d84-ba74-e2055655c7b8\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5" Jan 09 13:42:30 crc kubenswrapper[4919]: I0109 13:42:30.317788 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b19ed9ae-a65d-4d84-ba74-e2055655c7b8-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5\" (UID: \"b19ed9ae-a65d-4d84-ba74-e2055655c7b8\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5" Jan 09 13:42:30 crc kubenswrapper[4919]: I0109 13:42:30.317873 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b19ed9ae-a65d-4d84-ba74-e2055655c7b8-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5\" (UID: \"b19ed9ae-a65d-4d84-ba74-e2055655c7b8\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5" Jan 09 13:42:30 crc kubenswrapper[4919]: I0109 13:42:30.317910 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96zpq\" (UniqueName: 
\"kubernetes.io/projected/b19ed9ae-a65d-4d84-ba74-e2055655c7b8-kube-api-access-96zpq\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5\" (UID: \"b19ed9ae-a65d-4d84-ba74-e2055655c7b8\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5" Jan 09 13:42:30 crc kubenswrapper[4919]: I0109 13:42:30.318427 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b19ed9ae-a65d-4d84-ba74-e2055655c7b8-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5\" (UID: \"b19ed9ae-a65d-4d84-ba74-e2055655c7b8\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5" Jan 09 13:42:30 crc kubenswrapper[4919]: I0109 13:42:30.318459 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b19ed9ae-a65d-4d84-ba74-e2055655c7b8-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5\" (UID: \"b19ed9ae-a65d-4d84-ba74-e2055655c7b8\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5" Jan 09 13:42:30 crc kubenswrapper[4919]: I0109 13:42:30.336876 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96zpq\" (UniqueName: \"kubernetes.io/projected/b19ed9ae-a65d-4d84-ba74-e2055655c7b8-kube-api-access-96zpq\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5\" (UID: \"b19ed9ae-a65d-4d84-ba74-e2055655c7b8\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5" Jan 09 13:42:30 crc kubenswrapper[4919]: I0109 13:42:30.463775 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5" Jan 09 13:42:30 crc kubenswrapper[4919]: E0109 13:42:30.493418 4919 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5_openshift-marketplace_b19ed9ae-a65d-4d84-ba74-e2055655c7b8_0(fcab5eefdc86e50155b042db124a79ac174acecf0dc56ef52932b50a9b9ff23f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 09 13:42:30 crc kubenswrapper[4919]: E0109 13:42:30.493506 4919 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5_openshift-marketplace_b19ed9ae-a65d-4d84-ba74-e2055655c7b8_0(fcab5eefdc86e50155b042db124a79ac174acecf0dc56ef52932b50a9b9ff23f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5" Jan 09 13:42:30 crc kubenswrapper[4919]: E0109 13:42:30.493535 4919 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5_openshift-marketplace_b19ed9ae-a65d-4d84-ba74-e2055655c7b8_0(fcab5eefdc86e50155b042db124a79ac174acecf0dc56ef52932b50a9b9ff23f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5" Jan 09 13:42:30 crc kubenswrapper[4919]: E0109 13:42:30.493593 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5_openshift-marketplace(b19ed9ae-a65d-4d84-ba74-e2055655c7b8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5_openshift-marketplace(b19ed9ae-a65d-4d84-ba74-e2055655c7b8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5_openshift-marketplace_b19ed9ae-a65d-4d84-ba74-e2055655c7b8_0(fcab5eefdc86e50155b042db124a79ac174acecf0dc56ef52932b50a9b9ff23f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5" podUID="b19ed9ae-a65d-4d84-ba74-e2055655c7b8" Jan 09 13:42:30 crc kubenswrapper[4919]: I0109 13:42:30.969075 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5" Jan 09 13:42:30 crc kubenswrapper[4919]: I0109 13:42:30.969521 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5" Jan 09 13:42:30 crc kubenswrapper[4919]: E0109 13:42:30.990852 4919 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5_openshift-marketplace_b19ed9ae-a65d-4d84-ba74-e2055655c7b8_0(85e289706d2ca6b73338e5ad337e3a9f49bdfa9e8252aea6dc08ac97ceb84d3d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 09 13:42:30 crc kubenswrapper[4919]: E0109 13:42:30.990941 4919 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5_openshift-marketplace_b19ed9ae-a65d-4d84-ba74-e2055655c7b8_0(85e289706d2ca6b73338e5ad337e3a9f49bdfa9e8252aea6dc08ac97ceb84d3d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5" Jan 09 13:42:30 crc kubenswrapper[4919]: E0109 13:42:30.990964 4919 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5_openshift-marketplace_b19ed9ae-a65d-4d84-ba74-e2055655c7b8_0(85e289706d2ca6b73338e5ad337e3a9f49bdfa9e8252aea6dc08ac97ceb84d3d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5" Jan 09 13:42:30 crc kubenswrapper[4919]: E0109 13:42:30.991021 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5_openshift-marketplace(b19ed9ae-a65d-4d84-ba74-e2055655c7b8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5_openshift-marketplace(b19ed9ae-a65d-4d84-ba74-e2055655c7b8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5_openshift-marketplace_b19ed9ae-a65d-4d84-ba74-e2055655c7b8_0(85e289706d2ca6b73338e5ad337e3a9f49bdfa9e8252aea6dc08ac97ceb84d3d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5" podUID="b19ed9ae-a65d-4d84-ba74-e2055655c7b8" Jan 09 13:42:34 crc kubenswrapper[4919]: I0109 13:42:34.752480 4919 scope.go:117] "RemoveContainer" containerID="d5dedf26e5ff4665f09eceaa03a030632058e239d6a30d55b68dc35f2529731a" Jan 09 13:42:34 crc kubenswrapper[4919]: I0109 13:42:34.989914 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kgw8v_11e19b4a-0888-460f-bf97-5dd0ddda6e8c/kube-multus/2.log" Jan 09 13:42:34 crc kubenswrapper[4919]: I0109 13:42:34.990225 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kgw8v" event={"ID":"11e19b4a-0888-460f-bf97-5dd0ddda6e8c","Type":"ContainerStarted","Data":"4f7f3567d77f69fbf76d4cd395eefd8ca0c425cac9abdcecd2362ec56b9557da"} Jan 09 13:42:39 crc kubenswrapper[4919]: I0109 13:42:39.272927 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4mnrn" Jan 09 13:42:43 crc kubenswrapper[4919]: I0109 13:42:43.750861 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5" Jan 09 13:42:43 crc kubenswrapper[4919]: I0109 13:42:43.751674 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5" Jan 09 13:42:43 crc kubenswrapper[4919]: I0109 13:42:43.930663 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5"] Jan 09 13:42:44 crc kubenswrapper[4919]: I0109 13:42:44.033815 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5" event={"ID":"b19ed9ae-a65d-4d84-ba74-e2055655c7b8","Type":"ContainerStarted","Data":"d2f5cc7302d3d0bce5946b6fbe4c7d0e7d82ac4e595f01b4a00f04f708f2f182"} Jan 09 13:42:45 crc kubenswrapper[4919]: I0109 13:42:45.042443 4919 generic.go:334] "Generic (PLEG): container finished" podID="b19ed9ae-a65d-4d84-ba74-e2055655c7b8" containerID="316cedf30c278d61817b3e7949a7597117aa3e7456bd77e00a69d0eec74392a4" exitCode=0 Jan 09 13:42:45 crc kubenswrapper[4919]: I0109 13:42:45.042492 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5" event={"ID":"b19ed9ae-a65d-4d84-ba74-e2055655c7b8","Type":"ContainerDied","Data":"316cedf30c278d61817b3e7949a7597117aa3e7456bd77e00a69d0eec74392a4"} Jan 09 13:42:50 crc kubenswrapper[4919]: I0109 13:42:50.069203 4919 generic.go:334] "Generic (PLEG): container finished" podID="b19ed9ae-a65d-4d84-ba74-e2055655c7b8" containerID="f9575fc723d502fa4e3e5fc2324d4d4ef6a4c496dd16d388a768b997566bd0f1" exitCode=0 Jan 09 13:42:50 crc kubenswrapper[4919]: I0109 13:42:50.069309 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5" event={"ID":"b19ed9ae-a65d-4d84-ba74-e2055655c7b8","Type":"ContainerDied","Data":"f9575fc723d502fa4e3e5fc2324d4d4ef6a4c496dd16d388a768b997566bd0f1"} Jan 09 13:42:51 crc kubenswrapper[4919]: I0109 13:42:51.078149 4919 generic.go:334] "Generic (PLEG): container finished" podID="b19ed9ae-a65d-4d84-ba74-e2055655c7b8" containerID="ac988ac6a5389c8ea81fc7e5fc7a917c83a360b5accb39f755d1b76683c5b70d" exitCode=0 Jan 09 13:42:51 crc kubenswrapper[4919]: I0109 13:42:51.078242 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5" event={"ID":"b19ed9ae-a65d-4d84-ba74-e2055655c7b8","Type":"ContainerDied","Data":"ac988ac6a5389c8ea81fc7e5fc7a917c83a360b5accb39f755d1b76683c5b70d"} Jan 09 13:42:51 crc kubenswrapper[4919]: I0109 13:42:51.247069 4919 patch_prober.go:28] interesting pod/machine-config-daemon-9m5lv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 13:42:51 crc kubenswrapper[4919]: I0109 13:42:51.247139 4919 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 13:42:52 crc kubenswrapper[4919]: I0109 13:42:52.297947 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5" Jan 09 13:42:52 crc kubenswrapper[4919]: I0109 13:42:52.387799 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b19ed9ae-a65d-4d84-ba74-e2055655c7b8-util\") pod \"b19ed9ae-a65d-4d84-ba74-e2055655c7b8\" (UID: \"b19ed9ae-a65d-4d84-ba74-e2055655c7b8\") " Jan 09 13:42:52 crc kubenswrapper[4919]: I0109 13:42:52.387886 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96zpq\" (UniqueName: \"kubernetes.io/projected/b19ed9ae-a65d-4d84-ba74-e2055655c7b8-kube-api-access-96zpq\") pod \"b19ed9ae-a65d-4d84-ba74-e2055655c7b8\" (UID: \"b19ed9ae-a65d-4d84-ba74-e2055655c7b8\") " Jan 09 13:42:52 crc kubenswrapper[4919]: I0109 13:42:52.387971 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b19ed9ae-a65d-4d84-ba74-e2055655c7b8-bundle\") pod \"b19ed9ae-a65d-4d84-ba74-e2055655c7b8\" (UID: \"b19ed9ae-a65d-4d84-ba74-e2055655c7b8\") " Jan 09 13:42:52 crc kubenswrapper[4919]: I0109 13:42:52.389018 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b19ed9ae-a65d-4d84-ba74-e2055655c7b8-bundle" (OuterVolumeSpecName: "bundle") pod "b19ed9ae-a65d-4d84-ba74-e2055655c7b8" (UID: "b19ed9ae-a65d-4d84-ba74-e2055655c7b8"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:42:52 crc kubenswrapper[4919]: I0109 13:42:52.393684 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b19ed9ae-a65d-4d84-ba74-e2055655c7b8-kube-api-access-96zpq" (OuterVolumeSpecName: "kube-api-access-96zpq") pod "b19ed9ae-a65d-4d84-ba74-e2055655c7b8" (UID: "b19ed9ae-a65d-4d84-ba74-e2055655c7b8"). InnerVolumeSpecName "kube-api-access-96zpq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:42:52 crc kubenswrapper[4919]: I0109 13:42:52.399602 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b19ed9ae-a65d-4d84-ba74-e2055655c7b8-util" (OuterVolumeSpecName: "util") pod "b19ed9ae-a65d-4d84-ba74-e2055655c7b8" (UID: "b19ed9ae-a65d-4d84-ba74-e2055655c7b8"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:42:52 crc kubenswrapper[4919]: I0109 13:42:52.489422 4919 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b19ed9ae-a65d-4d84-ba74-e2055655c7b8-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 13:42:52 crc kubenswrapper[4919]: I0109 13:42:52.489464 4919 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b19ed9ae-a65d-4d84-ba74-e2055655c7b8-util\") on node \"crc\" DevicePath \"\"" Jan 09 13:42:52 crc kubenswrapper[4919]: I0109 13:42:52.489477 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96zpq\" (UniqueName: \"kubernetes.io/projected/b19ed9ae-a65d-4d84-ba74-e2055655c7b8-kube-api-access-96zpq\") on node \"crc\" DevicePath \"\"" Jan 09 13:42:53 crc kubenswrapper[4919]: I0109 13:42:53.089598 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5" event={"ID":"b19ed9ae-a65d-4d84-ba74-e2055655c7b8","Type":"ContainerDied","Data":"d2f5cc7302d3d0bce5946b6fbe4c7d0e7d82ac4e595f01b4a00f04f708f2f182"} Jan 09 13:42:53 crc kubenswrapper[4919]: I0109 13:42:53.089657 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2f5cc7302d3d0bce5946b6fbe4c7d0e7d82ac4e595f01b4a00f04f708f2f182" Jan 09 13:42:53 crc kubenswrapper[4919]: I0109 13:42:53.089700 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5" Jan 09 13:42:55 crc kubenswrapper[4919]: I0109 13:42:55.167456 4919 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 09 13:42:56 crc kubenswrapper[4919]: I0109 13:42:56.809140 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-rqxgb"] Jan 09 13:42:56 crc kubenswrapper[4919]: E0109 13:42:56.809464 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b19ed9ae-a65d-4d84-ba74-e2055655c7b8" containerName="pull" Jan 09 13:42:56 crc kubenswrapper[4919]: I0109 13:42:56.809480 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="b19ed9ae-a65d-4d84-ba74-e2055655c7b8" containerName="pull" Jan 09 13:42:56 crc kubenswrapper[4919]: E0109 13:42:56.809491 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b19ed9ae-a65d-4d84-ba74-e2055655c7b8" containerName="util" Jan 09 13:42:56 crc kubenswrapper[4919]: I0109 13:42:56.809500 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="b19ed9ae-a65d-4d84-ba74-e2055655c7b8" containerName="util" Jan 09 13:42:56 crc kubenswrapper[4919]: E0109 13:42:56.809515 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b19ed9ae-a65d-4d84-ba74-e2055655c7b8" containerName="extract" Jan 09 13:42:56 crc kubenswrapper[4919]: I0109 13:42:56.809524 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="b19ed9ae-a65d-4d84-ba74-e2055655c7b8" containerName="extract" Jan 09 13:42:56 crc kubenswrapper[4919]: I0109 13:42:56.809649 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="b19ed9ae-a65d-4d84-ba74-e2055655c7b8" containerName="extract" Jan 09 13:42:56 crc kubenswrapper[4919]: I0109 13:42:56.810131 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-6769fb99d-rqxgb" Jan 09 13:42:56 crc kubenswrapper[4919]: I0109 13:42:56.812690 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 09 13:42:56 crc kubenswrapper[4919]: I0109 13:42:56.812924 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-6b5cs" Jan 09 13:42:56 crc kubenswrapper[4919]: I0109 13:42:56.814117 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 09 13:42:56 crc kubenswrapper[4919]: I0109 13:42:56.821112 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-rqxgb"] Jan 09 13:42:56 crc kubenswrapper[4919]: I0109 13:42:56.947518 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtcl9\" (UniqueName: \"kubernetes.io/projected/feaf998d-058f-4630-84eb-a1e5692b6c6b-kube-api-access-mtcl9\") pod \"nmstate-operator-6769fb99d-rqxgb\" (UID: \"feaf998d-058f-4630-84eb-a1e5692b6c6b\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-rqxgb" Jan 09 13:42:57 crc kubenswrapper[4919]: I0109 13:42:57.048867 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtcl9\" (UniqueName: \"kubernetes.io/projected/feaf998d-058f-4630-84eb-a1e5692b6c6b-kube-api-access-mtcl9\") pod \"nmstate-operator-6769fb99d-rqxgb\" (UID: \"feaf998d-058f-4630-84eb-a1e5692b6c6b\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-rqxgb" Jan 09 13:42:57 crc kubenswrapper[4919]: I0109 13:42:57.069120 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtcl9\" (UniqueName: \"kubernetes.io/projected/feaf998d-058f-4630-84eb-a1e5692b6c6b-kube-api-access-mtcl9\") pod \"nmstate-operator-6769fb99d-rqxgb\" (UID: \"feaf998d-058f-4630-84eb-a1e5692b6c6b\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-rqxgb" Jan 09 13:42:57 crc kubenswrapper[4919]: I0109 13:42:57.124587 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-6769fb99d-rqxgb" Jan 09 13:42:57 crc kubenswrapper[4919]: I0109 13:42:57.340810 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-rqxgb"] Jan 09 13:42:57 crc kubenswrapper[4919]: W0109 13:42:57.347191 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfeaf998d_058f_4630_84eb_a1e5692b6c6b.slice/crio-39de467d3f246ab46835d810a8804641de11e4045e104b070170d8299dcdbe7a WatchSource:0}: Error finding container 39de467d3f246ab46835d810a8804641de11e4045e104b070170d8299dcdbe7a: Status 404 returned error can't find the container with id 39de467d3f246ab46835d810a8804641de11e4045e104b070170d8299dcdbe7a Jan 09 13:42:58 crc kubenswrapper[4919]: I0109 13:42:58.115639 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-6769fb99d-rqxgb" event={"ID":"feaf998d-058f-4630-84eb-a1e5692b6c6b","Type":"ContainerStarted","Data":"39de467d3f246ab46835d810a8804641de11e4045e104b070170d8299dcdbe7a"} Jan 09 13:43:01 crc kubenswrapper[4919]: I0109 13:43:01.132867 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-6769fb99d-rqxgb" event={"ID":"feaf998d-058f-4630-84eb-a1e5692b6c6b","Type":"ContainerStarted","Data":"294b1e2274d78973e1f588895d824465d39ae7926c9a48a42076116ba037918d"} Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.037827 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-6769fb99d-rqxgb" podStartSLOduration=2.949017125 podStartE2EDuration="6.037805241s" podCreationTimestamp="2026-01-09 13:42:56 +0000 UTC" firstStartedPulling="2026-01-09 13:42:57.349562679 +0000 UTC m=+756.897402129" lastFinishedPulling="2026-01-09 13:43:00.438350805 +0000 UTC m=+759.986190245" observedRunningTime="2026-01-09 13:43:01.156354642 +0000 UTC m=+760.704194092" watchObservedRunningTime="2026-01-09 13:43:02.037805241 +0000 UTC m=+761.585644691" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.042593 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-hr7w5"] Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.043659 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-hr7w5" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.045851 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-t96bq" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.060968 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-v8957"] Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.063141 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-f8fb84555-v8957" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.065799 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.067129 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-hr7w5"] Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.071797 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-9wzzm"] Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.072751 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-9wzzm" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.096763 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-v8957"] Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.120073 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nwwm\" (UniqueName: \"kubernetes.io/projected/91ddb4d0-422b-47f1-9279-fd2bef6bcd19-kube-api-access-8nwwm\") pod \"nmstate-metrics-7f7f7578db-hr7w5\" (UID: \"91ddb4d0-422b-47f1-9279-fd2bef6bcd19\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-hr7w5" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.120146 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smjc5\" (UniqueName: \"kubernetes.io/projected/5be7743f-eb29-453e-a4cb-58c25d8d24bd-kube-api-access-smjc5\") pod \"nmstate-webhook-f8fb84555-v8957\" (UID: \"5be7743f-eb29-453e-a4cb-58c25d8d24bd\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-v8957" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.120205 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/5be7743f-eb29-453e-a4cb-58c25d8d24bd-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-v8957\" (UID: \"5be7743f-eb29-453e-a4cb-58c25d8d24bd\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-v8957" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.192938 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-vh2fh"] Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.193619 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-vh2fh" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.196225 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.196260 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.196343 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-5gz5g" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.203430 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-vh2fh"] Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.220874 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nwwm\" (UniqueName: \"kubernetes.io/projected/91ddb4d0-422b-47f1-9279-fd2bef6bcd19-kube-api-access-8nwwm\") pod \"nmstate-metrics-7f7f7578db-hr7w5\" (UID: \"91ddb4d0-422b-47f1-9279-fd2bef6bcd19\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-hr7w5" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.220926 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smjc5\" (UniqueName: \"kubernetes.io/projected/5be7743f-eb29-453e-a4cb-58c25d8d24bd-kube-api-access-smjc5\") pod \"nmstate-webhook-f8fb84555-v8957\" (UID: \"5be7743f-eb29-453e-a4cb-58c25d8d24bd\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-v8957" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.220952 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/9dd4fea4-6753-4012-a325-c7065f93a092-ovs-socket\") pod \"nmstate-handler-9wzzm\" (UID: \"9dd4fea4-6753-4012-a325-c7065f93a092\") " pod="openshift-nmstate/nmstate-handler-9wzzm" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.220979 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/9dd4fea4-6753-4012-a325-c7065f93a092-nmstate-lock\") pod \"nmstate-handler-9wzzm\" (UID: \"9dd4fea4-6753-4012-a325-c7065f93a092\") " pod="openshift-nmstate/nmstate-handler-9wzzm" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.220994 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/9dd4fea4-6753-4012-a325-c7065f93a092-dbus-socket\") pod \"nmstate-handler-9wzzm\" (UID: \"9dd4fea4-6753-4012-a325-c7065f93a092\") " pod="openshift-nmstate/nmstate-handler-9wzzm" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.221014 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/5be7743f-eb29-453e-a4cb-58c25d8d24bd-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-v8957\" (UID: \"5be7743f-eb29-453e-a4cb-58c25d8d24bd\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-v8957" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.221072 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8vmg\" (UniqueName: \"kubernetes.io/projected/9dd4fea4-6753-4012-a325-c7065f93a092-kube-api-access-g8vmg\") pod \"nmstate-handler-9wzzm\" (UID: 
\"9dd4fea4-6753-4012-a325-c7065f93a092\") " pod="openshift-nmstate/nmstate-handler-9wzzm" Jan 09 13:43:02 crc kubenswrapper[4919]: E0109 13:43:02.221093 4919 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 09 13:43:02 crc kubenswrapper[4919]: E0109 13:43:02.221136 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5be7743f-eb29-453e-a4cb-58c25d8d24bd-tls-key-pair podName:5be7743f-eb29-453e-a4cb-58c25d8d24bd nodeName:}" failed. No retries permitted until 2026-01-09 13:43:02.721117808 +0000 UTC m=+762.268957258 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/5be7743f-eb29-453e-a4cb-58c25d8d24bd-tls-key-pair") pod "nmstate-webhook-f8fb84555-v8957" (UID: "5be7743f-eb29-453e-a4cb-58c25d8d24bd") : secret "openshift-nmstate-webhook" not found Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.241246 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nwwm\" (UniqueName: \"kubernetes.io/projected/91ddb4d0-422b-47f1-9279-fd2bef6bcd19-kube-api-access-8nwwm\") pod \"nmstate-metrics-7f7f7578db-hr7w5\" (UID: \"91ddb4d0-422b-47f1-9279-fd2bef6bcd19\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-hr7w5" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.241771 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smjc5\" (UniqueName: \"kubernetes.io/projected/5be7743f-eb29-453e-a4cb-58c25d8d24bd-kube-api-access-smjc5\") pod \"nmstate-webhook-f8fb84555-v8957\" (UID: \"5be7743f-eb29-453e-a4cb-58c25d8d24bd\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-v8957" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.321958 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/9dd4fea4-6753-4012-a325-c7065f93a092-nmstate-lock\") pod \"nmstate-handler-9wzzm\" (UID: \"9dd4fea4-6753-4012-a325-c7065f93a092\") " pod="openshift-nmstate/nmstate-handler-9wzzm" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.322002 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/0964f707-3143-4f9c-a31c-ce8f14e1fd2f-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-vh2fh\" (UID: \"0964f707-3143-4f9c-a31c-ce8f14e1fd2f\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-vh2fh" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.322029 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/9dd4fea4-6753-4012-a325-c7065f93a092-dbus-socket\") pod \"nmstate-handler-9wzzm\" (UID: \"9dd4fea4-6753-4012-a325-c7065f93a092\") " pod="openshift-nmstate/nmstate-handler-9wzzm" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.322077 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8vmg\" (UniqueName: \"kubernetes.io/projected/9dd4fea4-6753-4012-a325-c7065f93a092-kube-api-access-g8vmg\") pod \"nmstate-handler-9wzzm\" (UID: \"9dd4fea4-6753-4012-a325-c7065f93a092\") " pod="openshift-nmstate/nmstate-handler-9wzzm" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.322102 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47qf6\" (UniqueName: 
\"kubernetes.io/projected/0964f707-3143-4f9c-a31c-ce8f14e1fd2f-kube-api-access-47qf6\") pod \"nmstate-console-plugin-6ff7998486-vh2fh\" (UID: \"0964f707-3143-4f9c-a31c-ce8f14e1fd2f\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-vh2fh" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.322137 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/0964f707-3143-4f9c-a31c-ce8f14e1fd2f-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-vh2fh\" (UID: \"0964f707-3143-4f9c-a31c-ce8f14e1fd2f\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-vh2fh" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.322199 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/9dd4fea4-6753-4012-a325-c7065f93a092-ovs-socket\") pod \"nmstate-handler-9wzzm\" (UID: \"9dd4fea4-6753-4012-a325-c7065f93a092\") " pod="openshift-nmstate/nmstate-handler-9wzzm" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.322283 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/9dd4fea4-6753-4012-a325-c7065f93a092-ovs-socket\") pod \"nmstate-handler-9wzzm\" (UID: \"9dd4fea4-6753-4012-a325-c7065f93a092\") " pod="openshift-nmstate/nmstate-handler-9wzzm" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.322318 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/9dd4fea4-6753-4012-a325-c7065f93a092-nmstate-lock\") pod \"nmstate-handler-9wzzm\" (UID: \"9dd4fea4-6753-4012-a325-c7065f93a092\") " pod="openshift-nmstate/nmstate-handler-9wzzm" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.322535 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/9dd4fea4-6753-4012-a325-c7065f93a092-dbus-socket\") pod \"nmstate-handler-9wzzm\" (UID: \"9dd4fea4-6753-4012-a325-c7065f93a092\") " pod="openshift-nmstate/nmstate-handler-9wzzm" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.342801 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8vmg\" (UniqueName: \"kubernetes.io/projected/9dd4fea4-6753-4012-a325-c7065f93a092-kube-api-access-g8vmg\") pod \"nmstate-handler-9wzzm\" (UID: \"9dd4fea4-6753-4012-a325-c7065f93a092\") " pod="openshift-nmstate/nmstate-handler-9wzzm" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.360650 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-hr7w5" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.369692 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7c7dc7654d-7pw56"] Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.370649 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7c7dc7654d-7pw56" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.384659 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7c7dc7654d-7pw56"] Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.395032 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-9wzzm" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.423664 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/0964f707-3143-4f9c-a31c-ce8f14e1fd2f-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-vh2fh\" (UID: \"0964f707-3143-4f9c-a31c-ce8f14e1fd2f\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-vh2fh" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.424378 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6d47cba8-be12-4af4-ae0a-80a18dc64af7-console-config\") pod \"console-7c7dc7654d-7pw56\" (UID: \"6d47cba8-be12-4af4-ae0a-80a18dc64af7\") " pod="openshift-console/console-7c7dc7654d-7pw56" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.424457 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6d47cba8-be12-4af4-ae0a-80a18dc64af7-console-serving-cert\") pod \"console-7c7dc7654d-7pw56\" (UID: \"6d47cba8-be12-4af4-ae0a-80a18dc64af7\") " pod="openshift-console/console-7c7dc7654d-7pw56" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.425817 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpqfw\" (UniqueName: \"kubernetes.io/projected/6d47cba8-be12-4af4-ae0a-80a18dc64af7-kube-api-access-zpqfw\") pod \"console-7c7dc7654d-7pw56\" (UID: \"6d47cba8-be12-4af4-ae0a-80a18dc64af7\") " pod="openshift-console/console-7c7dc7654d-7pw56" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.425906 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6d47cba8-be12-4af4-ae0a-80a18dc64af7-console-oauth-config\") pod \"console-7c7dc7654d-7pw56\" (UID: \"6d47cba8-be12-4af4-ae0a-80a18dc64af7\") " pod="openshift-console/console-7c7dc7654d-7pw56" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.425924 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6d47cba8-be12-4af4-ae0a-80a18dc64af7-service-ca\") pod \"console-7c7dc7654d-7pw56\" (UID: \"6d47cba8-be12-4af4-ae0a-80a18dc64af7\") " pod="openshift-console/console-7c7dc7654d-7pw56" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.425950 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6d47cba8-be12-4af4-ae0a-80a18dc64af7-oauth-serving-cert\") pod \"console-7c7dc7654d-7pw56\" (UID: \"6d47cba8-be12-4af4-ae0a-80a18dc64af7\") " pod="openshift-console/console-7c7dc7654d-7pw56" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.425992 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d47cba8-be12-4af4-ae0a-80a18dc64af7-trusted-ca-bundle\") pod \"console-7c7dc7654d-7pw56\" (UID: \"6d47cba8-be12-4af4-ae0a-80a18dc64af7\") " pod="openshift-console/console-7c7dc7654d-7pw56" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.426046 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" 
(UniqueName: \"kubernetes.io/configmap/0964f707-3143-4f9c-a31c-ce8f14e1fd2f-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-vh2fh\" (UID: \"0964f707-3143-4f9c-a31c-ce8f14e1fd2f\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-vh2fh" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.426123 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47qf6\" (UniqueName: \"kubernetes.io/projected/0964f707-3143-4f9c-a31c-ce8f14e1fd2f-kube-api-access-47qf6\") pod \"nmstate-console-plugin-6ff7998486-vh2fh\" (UID: \"0964f707-3143-4f9c-a31c-ce8f14e1fd2f\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-vh2fh" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.427882 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/0964f707-3143-4f9c-a31c-ce8f14e1fd2f-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-vh2fh\" (UID: \"0964f707-3143-4f9c-a31c-ce8f14e1fd2f\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-vh2fh" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.430059 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/0964f707-3143-4f9c-a31c-ce8f14e1fd2f-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-vh2fh\" (UID: \"0964f707-3143-4f9c-a31c-ce8f14e1fd2f\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-vh2fh" Jan 09 13:43:02 crc kubenswrapper[4919]: W0109 13:43:02.433738 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9dd4fea4_6753_4012_a325_c7065f93a092.slice/crio-8d624d41e9dc3638dba15a39f6efc918a3c91eba8b7dc98b86bc3f1a66e33e1b WatchSource:0}: Error finding container 8d624d41e9dc3638dba15a39f6efc918a3c91eba8b7dc98b86bc3f1a66e33e1b: Status 404 returned error can't find the container with id 8d624d41e9dc3638dba15a39f6efc918a3c91eba8b7dc98b86bc3f1a66e33e1b Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.443111 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47qf6\" (UniqueName: \"kubernetes.io/projected/0964f707-3143-4f9c-a31c-ce8f14e1fd2f-kube-api-access-47qf6\") pod \"nmstate-console-plugin-6ff7998486-vh2fh\" (UID: \"0964f707-3143-4f9c-a31c-ce8f14e1fd2f\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-vh2fh" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.505340 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-vh2fh" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.527818 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d47cba8-be12-4af4-ae0a-80a18dc64af7-trusted-ca-bundle\") pod \"console-7c7dc7654d-7pw56\" (UID: \"6d47cba8-be12-4af4-ae0a-80a18dc64af7\") " pod="openshift-console/console-7c7dc7654d-7pw56" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.527907 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6d47cba8-be12-4af4-ae0a-80a18dc64af7-console-config\") pod \"console-7c7dc7654d-7pw56\" (UID: \"6d47cba8-be12-4af4-ae0a-80a18dc64af7\") " pod="openshift-console/console-7c7dc7654d-7pw56" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.527931 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6d47cba8-be12-4af4-ae0a-80a18dc64af7-console-serving-cert\") pod \"console-7c7dc7654d-7pw56\" (UID: \"6d47cba8-be12-4af4-ae0a-80a18dc64af7\") " pod="openshift-console/console-7c7dc7654d-7pw56" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.527951 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpqfw\" (UniqueName: \"kubernetes.io/projected/6d47cba8-be12-4af4-ae0a-80a18dc64af7-kube-api-access-zpqfw\") pod \"console-7c7dc7654d-7pw56\" (UID: \"6d47cba8-be12-4af4-ae0a-80a18dc64af7\") " pod="openshift-console/console-7c7dc7654d-7pw56" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.527982 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6d47cba8-be12-4af4-ae0a-80a18dc64af7-console-oauth-config\") pod \"console-7c7dc7654d-7pw56\" (UID: \"6d47cba8-be12-4af4-ae0a-80a18dc64af7\") " pod="openshift-console/console-7c7dc7654d-7pw56" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.527997 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6d47cba8-be12-4af4-ae0a-80a18dc64af7-service-ca\") pod \"console-7c7dc7654d-7pw56\" (UID: \"6d47cba8-be12-4af4-ae0a-80a18dc64af7\") " pod="openshift-console/console-7c7dc7654d-7pw56" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.528012 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6d47cba8-be12-4af4-ae0a-80a18dc64af7-oauth-serving-cert\") pod \"console-7c7dc7654d-7pw56\" (UID: \"6d47cba8-be12-4af4-ae0a-80a18dc64af7\") " pod="openshift-console/console-7c7dc7654d-7pw56" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.529664 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6d47cba8-be12-4af4-ae0a-80a18dc64af7-console-config\") pod \"console-7c7dc7654d-7pw56\" (UID: \"6d47cba8-be12-4af4-ae0a-80a18dc64af7\") " pod="openshift-console/console-7c7dc7654d-7pw56" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.530159 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6d47cba8-be12-4af4-ae0a-80a18dc64af7-oauth-serving-cert\") pod \"console-7c7dc7654d-7pw56\" (UID: 
\"6d47cba8-be12-4af4-ae0a-80a18dc64af7\") " pod="openshift-console/console-7c7dc7654d-7pw56" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.530219 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6d47cba8-be12-4af4-ae0a-80a18dc64af7-service-ca\") pod \"console-7c7dc7654d-7pw56\" (UID: \"6d47cba8-be12-4af4-ae0a-80a18dc64af7\") " pod="openshift-console/console-7c7dc7654d-7pw56" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.530345 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d47cba8-be12-4af4-ae0a-80a18dc64af7-trusted-ca-bundle\") pod \"console-7c7dc7654d-7pw56\" (UID: \"6d47cba8-be12-4af4-ae0a-80a18dc64af7\") " pod="openshift-console/console-7c7dc7654d-7pw56" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.534077 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6d47cba8-be12-4af4-ae0a-80a18dc64af7-console-oauth-config\") pod \"console-7c7dc7654d-7pw56\" (UID: \"6d47cba8-be12-4af4-ae0a-80a18dc64af7\") " pod="openshift-console/console-7c7dc7654d-7pw56" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.534417 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6d47cba8-be12-4af4-ae0a-80a18dc64af7-console-serving-cert\") pod \"console-7c7dc7654d-7pw56\" (UID: \"6d47cba8-be12-4af4-ae0a-80a18dc64af7\") " pod="openshift-console/console-7c7dc7654d-7pw56" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.546667 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpqfw\" (UniqueName: \"kubernetes.io/projected/6d47cba8-be12-4af4-ae0a-80a18dc64af7-kube-api-access-zpqfw\") pod \"console-7c7dc7654d-7pw56\" (UID: \"6d47cba8-be12-4af4-ae0a-80a18dc64af7\") " pod="openshift-console/console-7c7dc7654d-7pw56" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.588598 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-hr7w5"] Jan 09 13:43:02 crc kubenswrapper[4919]: W0109 13:43:02.593632 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91ddb4d0_422b_47f1_9279_fd2bef6bcd19.slice/crio-efc3977976c4f1e8a58aeb741510f9900df649165e9f70b80a115232c8f50b8b WatchSource:0}: Error finding container efc3977976c4f1e8a58aeb741510f9900df649165e9f70b80a115232c8f50b8b: Status 404 returned error can't find the container with id efc3977976c4f1e8a58aeb741510f9900df649165e9f70b80a115232c8f50b8b Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.694392 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-vh2fh"] Jan 09 13:43:02 crc kubenswrapper[4919]: W0109 13:43:02.701264 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0964f707_3143_4f9c_a31c_ce8f14e1fd2f.slice/crio-0258d1f1ceac20cd6b76a91657f38536a23435d9f237f3e91cd64f4621ee05e8 WatchSource:0}: Error finding container 0258d1f1ceac20cd6b76a91657f38536a23435d9f237f3e91cd64f4621ee05e8: Status 404 returned error can't find the container with id 0258d1f1ceac20cd6b76a91657f38536a23435d9f237f3e91cd64f4621ee05e8 Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.730731 4919 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/5be7743f-eb29-453e-a4cb-58c25d8d24bd-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-v8957\" (UID: \"5be7743f-eb29-453e-a4cb-58c25d8d24bd\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-v8957" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.732732 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7c7dc7654d-7pw56" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.734599 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/5be7743f-eb29-453e-a4cb-58c25d8d24bd-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-v8957\" (UID: \"5be7743f-eb29-453e-a4cb-58c25d8d24bd\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-v8957" Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.920311 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7c7dc7654d-7pw56"] Jan 09 13:43:02 crc kubenswrapper[4919]: W0109 13:43:02.924557 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d47cba8_be12_4af4_ae0a_80a18dc64af7.slice/crio-25cea2ab05015c5506985af00b6b884cde06b392f4b47242ac23d94507822775 WatchSource:0}: Error finding container 25cea2ab05015c5506985af00b6b884cde06b392f4b47242ac23d94507822775: Status 404 returned error can't find the container with id 25cea2ab05015c5506985af00b6b884cde06b392f4b47242ac23d94507822775 Jan 09 13:43:02 crc kubenswrapper[4919]: I0109 13:43:02.988613 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-f8fb84555-v8957" Jan 09 13:43:03 crc kubenswrapper[4919]: I0109 13:43:03.147466 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-vh2fh" event={"ID":"0964f707-3143-4f9c-a31c-ce8f14e1fd2f","Type":"ContainerStarted","Data":"0258d1f1ceac20cd6b76a91657f38536a23435d9f237f3e91cd64f4621ee05e8"} Jan 09 13:43:03 crc kubenswrapper[4919]: I0109 13:43:03.150446 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-hr7w5" event={"ID":"91ddb4d0-422b-47f1-9279-fd2bef6bcd19","Type":"ContainerStarted","Data":"efc3977976c4f1e8a58aeb741510f9900df649165e9f70b80a115232c8f50b8b"} Jan 09 13:43:03 crc kubenswrapper[4919]: I0109 13:43:03.151476 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7c7dc7654d-7pw56" event={"ID":"6d47cba8-be12-4af4-ae0a-80a18dc64af7","Type":"ContainerStarted","Data":"25cea2ab05015c5506985af00b6b884cde06b392f4b47242ac23d94507822775"} Jan 09 13:43:03 crc kubenswrapper[4919]: I0109 13:43:03.153608 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-9wzzm" event={"ID":"9dd4fea4-6753-4012-a325-c7065f93a092","Type":"ContainerStarted","Data":"8d624d41e9dc3638dba15a39f6efc918a3c91eba8b7dc98b86bc3f1a66e33e1b"} Jan 09 13:43:03 crc kubenswrapper[4919]: I0109 13:43:03.155703 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-v8957"] Jan 09 13:43:03 crc kubenswrapper[4919]: W0109 13:43:03.159872 4919 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5be7743f_eb29_453e_a4cb_58c25d8d24bd.slice/crio-d01439b42d7eae1c426ef3688f68205c5951a198cdf2bf66dac987e3ea6d234d WatchSource:0}: Error finding container d01439b42d7eae1c426ef3688f68205c5951a198cdf2bf66dac987e3ea6d234d: Status 404 returned error can't find the container with id d01439b42d7eae1c426ef3688f68205c5951a198cdf2bf66dac987e3ea6d234d Jan 09 13:43:04 crc kubenswrapper[4919]: I0109 13:43:04.159504 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-f8fb84555-v8957" event={"ID":"5be7743f-eb29-453e-a4cb-58c25d8d24bd","Type":"ContainerStarted","Data":"d01439b42d7eae1c426ef3688f68205c5951a198cdf2bf66dac987e3ea6d234d"} Jan 09 13:43:04 crc kubenswrapper[4919]: I0109 13:43:04.161011 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7c7dc7654d-7pw56" event={"ID":"6d47cba8-be12-4af4-ae0a-80a18dc64af7","Type":"ContainerStarted","Data":"707afaa1653a734554ea4e6f78e6b75e1a6da4c3d917efb4eafa861749dab675"} Jan 09 13:43:04 crc kubenswrapper[4919]: I0109 13:43:04.186340 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7c7dc7654d-7pw56" podStartSLOduration=2.1863226080000002 podStartE2EDuration="2.186322608s" podCreationTimestamp="2026-01-09 13:43:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:43:04.183509089 +0000 UTC m=+763.731348549" watchObservedRunningTime="2026-01-09 13:43:04.186322608 +0000 UTC m=+763.734162058" Jan 09 13:43:07 crc kubenswrapper[4919]: I0109 13:43:07.255822 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-f8fb84555-v8957" event={"ID":"5be7743f-eb29-453e-a4cb-58c25d8d24bd","Type":"ContainerStarted","Data":"7f13cc5c9b4caeb6daa2f5e87c6e312eb63c0c38197c799a16fae864012c7ce8"} Jan 09 13:43:07 crc kubenswrapper[4919]: I0109 13:43:07.256232 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-f8fb84555-v8957" Jan 09 13:43:07 crc kubenswrapper[4919]: I0109 13:43:07.258965 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-vh2fh" event={"ID":"0964f707-3143-4f9c-a31c-ce8f14e1fd2f","Type":"ContainerStarted","Data":"73c6926fce76ef62732765c01caa1e4c888dfc0a0a57cfa82ebdf553fbe05b0e"} Jan 09 13:43:07 crc kubenswrapper[4919]: I0109 13:43:07.260659 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-hr7w5" event={"ID":"91ddb4d0-422b-47f1-9279-fd2bef6bcd19","Type":"ContainerStarted","Data":"f5f98568ef589e43986652d65186469d3aa81d3a26b88d2b0a242c05a2b13cc6"} Jan 09 13:43:07 crc kubenswrapper[4919]: I0109 13:43:07.273486 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-f8fb84555-v8957" podStartSLOduration=1.410569097 podStartE2EDuration="5.273474115s" podCreationTimestamp="2026-01-09 13:43:02 +0000 UTC" firstStartedPulling="2026-01-09 13:43:03.161798997 +0000 UTC m=+762.709638447" lastFinishedPulling="2026-01-09 13:43:07.024704015 +0000 UTC m=+766.572543465" observedRunningTime="2026-01-09 13:43:07.268827311 +0000 UTC m=+766.816666751" watchObservedRunningTime="2026-01-09 13:43:07.273474115 +0000 UTC m=+766.821313565" Jan 09 13:43:07 crc kubenswrapper[4919]: I0109 13:43:07.286480 4919 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-vh2fh" podStartSLOduration=0.983499423 podStartE2EDuration="5.286457663s" podCreationTimestamp="2026-01-09 13:43:02 +0000 UTC" firstStartedPulling="2026-01-09 13:43:02.703066917 +0000 UTC m=+762.250906367" lastFinishedPulling="2026-01-09 13:43:07.006025167 +0000 UTC m=+766.553864607" observedRunningTime="2026-01-09 13:43:07.283999792 +0000 UTC m=+766.831839242" watchObservedRunningTime="2026-01-09 13:43:07.286457663 +0000 UTC m=+766.834297123" Jan 09 13:43:08 crc kubenswrapper[4919]: I0109 13:43:08.268434 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-9wzzm" event={"ID":"9dd4fea4-6753-4012-a325-c7065f93a092","Type":"ContainerStarted","Data":"6f150320c413d05fdde1665356962a050ef425e61603a44f9870becd99fd4b06"} Jan 09 13:43:08 crc kubenswrapper[4919]: I0109 13:43:08.290811 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-9wzzm" podStartSLOduration=1.705185899 podStartE2EDuration="6.290796229s" podCreationTimestamp="2026-01-09 13:43:02 +0000 UTC" firstStartedPulling="2026-01-09 13:43:02.437238769 +0000 UTC m=+761.985078219" lastFinishedPulling="2026-01-09 13:43:07.022849099 +0000 UTC m=+766.570688549" observedRunningTime="2026-01-09 13:43:08.287532099 +0000 UTC m=+767.835371549" watchObservedRunningTime="2026-01-09 13:43:08.290796229 +0000 UTC m=+767.838635679" Jan 09 13:43:09 crc kubenswrapper[4919]: I0109 13:43:09.273912 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-9wzzm" Jan 09 13:43:10 crc kubenswrapper[4919]: I0109 13:43:10.280279 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-hr7w5" event={"ID":"91ddb4d0-422b-47f1-9279-fd2bef6bcd19","Type":"ContainerStarted","Data":"15137ecdc8fe7bbab9d7eb24d9dd70bc313b7392e6f85332c206a46e86da9b24"} Jan 09 13:43:10 crc kubenswrapper[4919]: I0109 13:43:10.295103 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-hr7w5" podStartSLOduration=1.297672624 podStartE2EDuration="8.295082916s" podCreationTimestamp="2026-01-09 13:43:02 +0000 UTC" firstStartedPulling="2026-01-09 13:43:02.596768845 +0000 UTC m=+762.144608295" lastFinishedPulling="2026-01-09 13:43:09.594179137 +0000 UTC m=+769.142018587" observedRunningTime="2026-01-09 13:43:10.293307432 +0000 UTC m=+769.841146892" watchObservedRunningTime="2026-01-09 13:43:10.295082916 +0000 UTC m=+769.842922546" Jan 09 13:43:12 crc kubenswrapper[4919]: I0109 13:43:12.424711 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-9wzzm" Jan 09 13:43:12 crc kubenswrapper[4919]: I0109 13:43:12.733454 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7c7dc7654d-7pw56" Jan 09 13:43:12 crc kubenswrapper[4919]: I0109 13:43:12.733783 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7c7dc7654d-7pw56" Jan 09 13:43:12 crc kubenswrapper[4919]: I0109 13:43:12.741152 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7c7dc7654d-7pw56" Jan 09 13:43:13 crc kubenswrapper[4919]: I0109 13:43:13.304064 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7c7dc7654d-7pw56" Jan 09 13:43:13 
crc kubenswrapper[4919]: I0109 13:43:13.351030 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-bffts"] Jan 09 13:43:21 crc kubenswrapper[4919]: I0109 13:43:21.247565 4919 patch_prober.go:28] interesting pod/machine-config-daemon-9m5lv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 13:43:21 crc kubenswrapper[4919]: I0109 13:43:21.249041 4919 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 13:43:21 crc kubenswrapper[4919]: I0109 13:43:21.249138 4919 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" Jan 09 13:43:21 crc kubenswrapper[4919]: I0109 13:43:21.250050 4919 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e3fae3f1f51df5d9026154c14d04831020e0e9d6f7bf4af54d35cedb600d3044"} pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 09 13:43:21 crc kubenswrapper[4919]: I0109 13:43:21.250134 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" containerID="cri-o://e3fae3f1f51df5d9026154c14d04831020e0e9d6f7bf4af54d35cedb600d3044" gracePeriod=600 Jan 09 13:43:22 crc kubenswrapper[4919]: I0109 13:43:22.349342 4919 generic.go:334] "Generic (PLEG): container finished" podID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerID="e3fae3f1f51df5d9026154c14d04831020e0e9d6f7bf4af54d35cedb600d3044" exitCode=0 Jan 09 13:43:22 crc kubenswrapper[4919]: I0109 13:43:22.349412 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" event={"ID":"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c","Type":"ContainerDied","Data":"e3fae3f1f51df5d9026154c14d04831020e0e9d6f7bf4af54d35cedb600d3044"} Jan 09 13:43:22 crc kubenswrapper[4919]: I0109 13:43:22.349803 4919 scope.go:117] "RemoveContainer" containerID="51f2b467bad1d9860ef540627b99d2e5678ea709090f17043cdb577fdb4e1708" Jan 09 13:43:22 crc kubenswrapper[4919]: I0109 13:43:22.995001 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-f8fb84555-v8957" Jan 09 13:43:24 crc kubenswrapper[4919]: I0109 13:43:24.370370 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" event={"ID":"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c","Type":"ContainerStarted","Data":"13e0d2bed4a1518fec6fb07c1bdfa49ee9c21e3a9f0774ed8f0f599b03f0f58f"} Jan 09 13:43:37 crc kubenswrapper[4919]: I0109 13:43:37.093174 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4h92z5"] Jan 09 13:43:37 crc kubenswrapper[4919]: I0109 13:43:37.094874 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4h92z5" Jan 09 13:43:37 crc kubenswrapper[4919]: I0109 13:43:37.098460 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 09 13:43:37 crc kubenswrapper[4919]: I0109 13:43:37.130638 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4h92z5"] Jan 09 13:43:37 crc kubenswrapper[4919]: I0109 13:43:37.210939 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4h92z5\" (UID: \"ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4h92z5" Jan 09 13:43:37 crc kubenswrapper[4919]: I0109 13:43:37.211017 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rld9f\" (UniqueName: \"kubernetes.io/projected/ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492-kube-api-access-rld9f\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4h92z5\" (UID: \"ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4h92z5" Jan 09 13:43:37 crc kubenswrapper[4919]: I0109 13:43:37.211139 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4h92z5\" (UID: \"ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4h92z5" Jan 09 13:43:37 crc kubenswrapper[4919]: I0109 13:43:37.312728 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4h92z5\" (UID: \"ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4h92z5" Jan 09 13:43:37 crc kubenswrapper[4919]: I0109 13:43:37.312785 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4h92z5\" (UID: \"ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4h92z5" Jan 09 13:43:37 crc kubenswrapper[4919]: I0109 13:43:37.312816 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rld9f\" (UniqueName: \"kubernetes.io/projected/ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492-kube-api-access-rld9f\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4h92z5\" (UID: \"ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4h92z5" Jan 09 13:43:37 crc kubenswrapper[4919]: I0109 13:43:37.313364 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4h92z5\" (UID: \"ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4h92z5" Jan 09 13:43:37 crc kubenswrapper[4919]: I0109 13:43:37.313368 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4h92z5\" (UID: \"ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4h92z5" Jan 09 13:43:37 crc kubenswrapper[4919]: I0109 13:43:37.330303 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rld9f\" (UniqueName: \"kubernetes.io/projected/ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492-kube-api-access-rld9f\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4h92z5\" (UID: \"ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4h92z5" Jan 09 13:43:37 crc kubenswrapper[4919]: I0109 13:43:37.422057 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4h92z5" Jan 09 13:43:37 crc kubenswrapper[4919]: I0109 13:43:37.680334 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4h92z5"] Jan 09 13:43:38 crc kubenswrapper[4919]: I0109 13:43:38.394321 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-bffts" podUID="58013dad-1347-4da5-8314-495388d1b5c2" containerName="console" containerID="cri-o://7bb01366729a4aa01c36225ea7d6284c32529ecc101133a631ad815401aba2bb" gracePeriod=15 Jan 09 13:43:38 crc kubenswrapper[4919]: I0109 13:43:38.548162 4919 generic.go:334] "Generic (PLEG): container finished" podID="ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492" containerID="6d54d905de452a72c504ce398292de03a100d464b039ca16a3c44a11f759099b" exitCode=0 Jan 09 13:43:38 crc kubenswrapper[4919]: I0109 13:43:38.548262 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4h92z5" event={"ID":"ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492","Type":"ContainerDied","Data":"6d54d905de452a72c504ce398292de03a100d464b039ca16a3c44a11f759099b"} Jan 09 13:43:38 crc kubenswrapper[4919]: I0109 13:43:38.548739 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4h92z5" event={"ID":"ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492","Type":"ContainerStarted","Data":"ffa9af24f9a11b9245556d3bf04913bdb19f3c8c14fb5a461ced59c627d07e46"} Jan 09 13:43:38 crc kubenswrapper[4919]: I0109 13:43:38.552479 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-bffts_58013dad-1347-4da5-8314-495388d1b5c2/console/0.log" Jan 09 13:43:38 crc kubenswrapper[4919]: I0109 13:43:38.552527 4919 generic.go:334] "Generic (PLEG): container finished" podID="58013dad-1347-4da5-8314-495388d1b5c2" containerID="7bb01366729a4aa01c36225ea7d6284c32529ecc101133a631ad815401aba2bb" exitCode=2 Jan 09 13:43:38 crc kubenswrapper[4919]: I0109 13:43:38.552555 4919 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-bffts" event={"ID":"58013dad-1347-4da5-8314-495388d1b5c2","Type":"ContainerDied","Data":"7bb01366729a4aa01c36225ea7d6284c32529ecc101133a631ad815401aba2bb"} Jan 09 13:43:38 crc kubenswrapper[4919]: I0109 13:43:38.840888 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-bffts_58013dad-1347-4da5-8314-495388d1b5c2/console/0.log" Jan 09 13:43:38 crc kubenswrapper[4919]: I0109 13:43:38.840989 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-bffts" Jan 09 13:43:38 crc kubenswrapper[4919]: I0109 13:43:38.952657 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/58013dad-1347-4da5-8314-495388d1b5c2-console-oauth-config\") pod \"58013dad-1347-4da5-8314-495388d1b5c2\" (UID: \"58013dad-1347-4da5-8314-495388d1b5c2\") " Jan 09 13:43:38 crc kubenswrapper[4919]: I0109 13:43:38.952744 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/58013dad-1347-4da5-8314-495388d1b5c2-oauth-serving-cert\") pod \"58013dad-1347-4da5-8314-495388d1b5c2\" (UID: \"58013dad-1347-4da5-8314-495388d1b5c2\") " Jan 09 13:43:38 crc kubenswrapper[4919]: I0109 13:43:38.952797 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/58013dad-1347-4da5-8314-495388d1b5c2-service-ca\") pod \"58013dad-1347-4da5-8314-495388d1b5c2\" (UID: \"58013dad-1347-4da5-8314-495388d1b5c2\") " Jan 09 13:43:38 crc kubenswrapper[4919]: I0109 13:43:38.952829 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/58013dad-1347-4da5-8314-495388d1b5c2-console-serving-cert\") pod \"58013dad-1347-4da5-8314-495388d1b5c2\" (UID: \"58013dad-1347-4da5-8314-495388d1b5c2\") " Jan 09 13:43:38 crc kubenswrapper[4919]: I0109 13:43:38.952903 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7thdl\" (UniqueName: \"kubernetes.io/projected/58013dad-1347-4da5-8314-495388d1b5c2-kube-api-access-7thdl\") pod \"58013dad-1347-4da5-8314-495388d1b5c2\" (UID: \"58013dad-1347-4da5-8314-495388d1b5c2\") " Jan 09 13:43:38 crc kubenswrapper[4919]: I0109 13:43:38.952977 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/58013dad-1347-4da5-8314-495388d1b5c2-console-config\") pod \"58013dad-1347-4da5-8314-495388d1b5c2\" (UID: \"58013dad-1347-4da5-8314-495388d1b5c2\") " Jan 09 13:43:38 crc kubenswrapper[4919]: I0109 13:43:38.953017 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58013dad-1347-4da5-8314-495388d1b5c2-trusted-ca-bundle\") pod \"58013dad-1347-4da5-8314-495388d1b5c2\" (UID: \"58013dad-1347-4da5-8314-495388d1b5c2\") " Jan 09 13:43:38 crc kubenswrapper[4919]: I0109 13:43:38.954544 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58013dad-1347-4da5-8314-495388d1b5c2-service-ca" (OuterVolumeSpecName: "service-ca") pod "58013dad-1347-4da5-8314-495388d1b5c2" (UID: "58013dad-1347-4da5-8314-495388d1b5c2"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:43:38 crc kubenswrapper[4919]: I0109 13:43:38.954572 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58013dad-1347-4da5-8314-495388d1b5c2-console-config" (OuterVolumeSpecName: "console-config") pod "58013dad-1347-4da5-8314-495388d1b5c2" (UID: "58013dad-1347-4da5-8314-495388d1b5c2"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:43:38 crc kubenswrapper[4919]: I0109 13:43:38.955120 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58013dad-1347-4da5-8314-495388d1b5c2-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "58013dad-1347-4da5-8314-495388d1b5c2" (UID: "58013dad-1347-4da5-8314-495388d1b5c2"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:43:38 crc kubenswrapper[4919]: I0109 13:43:38.956105 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58013dad-1347-4da5-8314-495388d1b5c2-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "58013dad-1347-4da5-8314-495388d1b5c2" (UID: "58013dad-1347-4da5-8314-495388d1b5c2"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:43:38 crc kubenswrapper[4919]: I0109 13:43:38.961139 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58013dad-1347-4da5-8314-495388d1b5c2-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "58013dad-1347-4da5-8314-495388d1b5c2" (UID: "58013dad-1347-4da5-8314-495388d1b5c2"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:43:38 crc kubenswrapper[4919]: I0109 13:43:38.962037 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58013dad-1347-4da5-8314-495388d1b5c2-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "58013dad-1347-4da5-8314-495388d1b5c2" (UID: "58013dad-1347-4da5-8314-495388d1b5c2"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:43:38 crc kubenswrapper[4919]: I0109 13:43:38.971363 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58013dad-1347-4da5-8314-495388d1b5c2-kube-api-access-7thdl" (OuterVolumeSpecName: "kube-api-access-7thdl") pod "58013dad-1347-4da5-8314-495388d1b5c2" (UID: "58013dad-1347-4da5-8314-495388d1b5c2"). InnerVolumeSpecName "kube-api-access-7thdl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:43:39 crc kubenswrapper[4919]: I0109 13:43:39.054612 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7thdl\" (UniqueName: \"kubernetes.io/projected/58013dad-1347-4da5-8314-495388d1b5c2-kube-api-access-7thdl\") on node \"crc\" DevicePath \"\"" Jan 09 13:43:39 crc kubenswrapper[4919]: I0109 13:43:39.054664 4919 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/58013dad-1347-4da5-8314-495388d1b5c2-console-config\") on node \"crc\" DevicePath \"\"" Jan 09 13:43:39 crc kubenswrapper[4919]: I0109 13:43:39.054710 4919 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58013dad-1347-4da5-8314-495388d1b5c2-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 13:43:39 crc kubenswrapper[4919]: I0109 13:43:39.054724 4919 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/58013dad-1347-4da5-8314-495388d1b5c2-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 09 13:43:39 crc kubenswrapper[4919]: I0109 13:43:39.054737 4919 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/58013dad-1347-4da5-8314-495388d1b5c2-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 13:43:39 crc kubenswrapper[4919]: I0109 13:43:39.054751 4919 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/58013dad-1347-4da5-8314-495388d1b5c2-service-ca\") on node \"crc\" DevicePath \"\"" Jan 09 13:43:39 crc kubenswrapper[4919]: I0109 13:43:39.054765 4919 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/58013dad-1347-4da5-8314-495388d1b5c2-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 13:43:39 crc kubenswrapper[4919]: I0109 13:43:39.560305 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-bffts_58013dad-1347-4da5-8314-495388d1b5c2/console/0.log" Jan 09 13:43:39 crc kubenswrapper[4919]: I0109 13:43:39.560369 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-bffts" event={"ID":"58013dad-1347-4da5-8314-495388d1b5c2","Type":"ContainerDied","Data":"aa73b81ba41d1482ba8b767364c142acf269a7e2994acd3b43233557e937a53a"} Jan 09 13:43:39 crc kubenswrapper[4919]: I0109 13:43:39.560420 4919 scope.go:117] "RemoveContainer" containerID="7bb01366729a4aa01c36225ea7d6284c32529ecc101133a631ad815401aba2bb" Jan 09 13:43:39 crc kubenswrapper[4919]: I0109 13:43:39.560455 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-bffts" Jan 09 13:43:39 crc kubenswrapper[4919]: I0109 13:43:39.588385 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-bffts"] Jan 09 13:43:39 crc kubenswrapper[4919]: I0109 13:43:39.592445 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-bffts"] Jan 09 13:43:40 crc kubenswrapper[4919]: I0109 13:43:40.443319 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nrtxz"] Jan 09 13:43:40 crc kubenswrapper[4919]: E0109 13:43:40.443545 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58013dad-1347-4da5-8314-495388d1b5c2" containerName="console" Jan 09 13:43:40 crc kubenswrapper[4919]: I0109 13:43:40.443556 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="58013dad-1347-4da5-8314-495388d1b5c2" containerName="console" Jan 09 13:43:40 crc kubenswrapper[4919]: I0109 13:43:40.443687 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="58013dad-1347-4da5-8314-495388d1b5c2" containerName="console" Jan 09 13:43:40 crc kubenswrapper[4919]: I0109 13:43:40.444469 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nrtxz" Jan 09 13:43:40 crc kubenswrapper[4919]: I0109 13:43:40.455015 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nrtxz"] Jan 09 13:43:40 crc kubenswrapper[4919]: I0109 13:43:40.568776 4919 generic.go:334] "Generic (PLEG): container finished" podID="ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492" containerID="22ecdc06fe65d6a7f68b0c565d05be3f70f2b50c7d8f815cb11b38409f8e5166" exitCode=0 Jan 09 13:43:40 crc kubenswrapper[4919]: I0109 13:43:40.568846 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4h92z5" event={"ID":"ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492","Type":"ContainerDied","Data":"22ecdc06fe65d6a7f68b0c565d05be3f70f2b50c7d8f815cb11b38409f8e5166"} Jan 09 13:43:40 crc kubenswrapper[4919]: I0109 13:43:40.576738 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60c23c00-5a89-4941-b801-db831f268244-utilities\") pod \"redhat-operators-nrtxz\" (UID: \"60c23c00-5a89-4941-b801-db831f268244\") " pod="openshift-marketplace/redhat-operators-nrtxz" Jan 09 13:43:40 crc kubenswrapper[4919]: I0109 13:43:40.577123 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60c23c00-5a89-4941-b801-db831f268244-catalog-content\") pod \"redhat-operators-nrtxz\" (UID: \"60c23c00-5a89-4941-b801-db831f268244\") " pod="openshift-marketplace/redhat-operators-nrtxz" Jan 09 13:43:40 crc kubenswrapper[4919]: I0109 13:43:40.577172 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j29mt\" (UniqueName: \"kubernetes.io/projected/60c23c00-5a89-4941-b801-db831f268244-kube-api-access-j29mt\") pod \"redhat-operators-nrtxz\" (UID: \"60c23c00-5a89-4941-b801-db831f268244\") " pod="openshift-marketplace/redhat-operators-nrtxz" Jan 09 13:43:40 crc kubenswrapper[4919]: I0109 13:43:40.678359 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/60c23c00-5a89-4941-b801-db831f268244-utilities\") pod \"redhat-operators-nrtxz\" (UID: \"60c23c00-5a89-4941-b801-db831f268244\") " pod="openshift-marketplace/redhat-operators-nrtxz" Jan 09 13:43:40 crc kubenswrapper[4919]: I0109 13:43:40.678432 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60c23c00-5a89-4941-b801-db831f268244-catalog-content\") pod \"redhat-operators-nrtxz\" (UID: \"60c23c00-5a89-4941-b801-db831f268244\") " pod="openshift-marketplace/redhat-operators-nrtxz" Jan 09 13:43:40 crc kubenswrapper[4919]: I0109 13:43:40.678472 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j29mt\" (UniqueName: \"kubernetes.io/projected/60c23c00-5a89-4941-b801-db831f268244-kube-api-access-j29mt\") pod \"redhat-operators-nrtxz\" (UID: \"60c23c00-5a89-4941-b801-db831f268244\") " pod="openshift-marketplace/redhat-operators-nrtxz" Jan 09 13:43:40 crc kubenswrapper[4919]: I0109 13:43:40.679097 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60c23c00-5a89-4941-b801-db831f268244-utilities\") pod \"redhat-operators-nrtxz\" (UID: \"60c23c00-5a89-4941-b801-db831f268244\") " pod="openshift-marketplace/redhat-operators-nrtxz" Jan 09 13:43:40 crc kubenswrapper[4919]: I0109 13:43:40.679164 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60c23c00-5a89-4941-b801-db831f268244-catalog-content\") pod \"redhat-operators-nrtxz\" (UID: \"60c23c00-5a89-4941-b801-db831f268244\") " pod="openshift-marketplace/redhat-operators-nrtxz" Jan 09 13:43:40 crc kubenswrapper[4919]: I0109 13:43:40.702154 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j29mt\" (UniqueName: \"kubernetes.io/projected/60c23c00-5a89-4941-b801-db831f268244-kube-api-access-j29mt\") pod \"redhat-operators-nrtxz\" (UID: \"60c23c00-5a89-4941-b801-db831f268244\") " pod="openshift-marketplace/redhat-operators-nrtxz" Jan 09 13:43:40 crc kubenswrapper[4919]: I0109 13:43:40.758817 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nrtxz" Jan 09 13:43:40 crc kubenswrapper[4919]: I0109 13:43:40.759366 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58013dad-1347-4da5-8314-495388d1b5c2" path="/var/lib/kubelet/pods/58013dad-1347-4da5-8314-495388d1b5c2/volumes" Jan 09 13:43:40 crc kubenswrapper[4919]: I0109 13:43:40.953587 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nrtxz"] Jan 09 13:43:41 crc kubenswrapper[4919]: I0109 13:43:41.578432 4919 generic.go:334] "Generic (PLEG): container finished" podID="60c23c00-5a89-4941-b801-db831f268244" containerID="d21a94eb0aa73ae6bbc350b673540981758fa47d54317ff7430e089fd77970b8" exitCode=0 Jan 09 13:43:41 crc kubenswrapper[4919]: I0109 13:43:41.579098 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nrtxz" event={"ID":"60c23c00-5a89-4941-b801-db831f268244","Type":"ContainerDied","Data":"d21a94eb0aa73ae6bbc350b673540981758fa47d54317ff7430e089fd77970b8"} Jan 09 13:43:41 crc kubenswrapper[4919]: I0109 13:43:41.579161 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nrtxz" event={"ID":"60c23c00-5a89-4941-b801-db831f268244","Type":"ContainerStarted","Data":"2ebd4a55b51f21ab205f783194c8b6ad419d3a7935831471ed642f9b72d10131"} Jan 09 13:43:41 crc kubenswrapper[4919]: I0109 13:43:41.581628 4919 generic.go:334] "Generic (PLEG): container finished" podID="ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492" containerID="b41bbb14eee9899453e5cfbbc90770e6103ce018cfe7d6e5f19711c24e084025" exitCode=0 Jan 09 13:43:41 crc kubenswrapper[4919]: I0109 13:43:41.581665 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4h92z5" event={"ID":"ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492","Type":"ContainerDied","Data":"b41bbb14eee9899453e5cfbbc90770e6103ce018cfe7d6e5f19711c24e084025"} Jan 09 13:43:42 crc kubenswrapper[4919]: I0109 13:43:42.839282 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4h92z5" Jan 09 13:43:42 crc kubenswrapper[4919]: I0109 13:43:42.907031 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492-bundle\") pod \"ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492\" (UID: \"ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492\") " Jan 09 13:43:42 crc kubenswrapper[4919]: I0109 13:43:42.907087 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492-util\") pod \"ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492\" (UID: \"ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492\") " Jan 09 13:43:42 crc kubenswrapper[4919]: I0109 13:43:42.907157 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rld9f\" (UniqueName: \"kubernetes.io/projected/ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492-kube-api-access-rld9f\") pod \"ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492\" (UID: \"ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492\") " Jan 09 13:43:42 crc kubenswrapper[4919]: I0109 13:43:42.908581 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492-bundle" (OuterVolumeSpecName: "bundle") pod "ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492" (UID: "ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:43:42 crc kubenswrapper[4919]: I0109 13:43:42.913359 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492-kube-api-access-rld9f" (OuterVolumeSpecName: "kube-api-access-rld9f") pod "ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492" (UID: "ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492"). InnerVolumeSpecName "kube-api-access-rld9f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:43:42 crc kubenswrapper[4919]: I0109 13:43:42.922043 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492-util" (OuterVolumeSpecName: "util") pod "ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492" (UID: "ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:43:43 crc kubenswrapper[4919]: I0109 13:43:43.008827 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rld9f\" (UniqueName: \"kubernetes.io/projected/ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492-kube-api-access-rld9f\") on node \"crc\" DevicePath \"\"" Jan 09 13:43:43 crc kubenswrapper[4919]: I0109 13:43:43.008859 4919 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 13:43:43 crc kubenswrapper[4919]: I0109 13:43:43.008868 4919 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492-util\") on node \"crc\" DevicePath \"\"" Jan 09 13:43:43 crc kubenswrapper[4919]: I0109 13:43:43.607889 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4h92z5" Jan 09 13:43:43 crc kubenswrapper[4919]: I0109 13:43:43.607892 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4h92z5" event={"ID":"ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492","Type":"ContainerDied","Data":"ffa9af24f9a11b9245556d3bf04913bdb19f3c8c14fb5a461ced59c627d07e46"} Jan 09 13:43:43 crc kubenswrapper[4919]: I0109 13:43:43.608023 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ffa9af24f9a11b9245556d3bf04913bdb19f3c8c14fb5a461ced59c627d07e46" Jan 09 13:43:43 crc kubenswrapper[4919]: I0109 13:43:43.610358 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nrtxz" event={"ID":"60c23c00-5a89-4941-b801-db831f268244","Type":"ContainerStarted","Data":"571d9736fcf56fffbcce7e80ea3902631cefff1d5d94b9792bdacc95dff184a9"} Jan 09 13:43:45 crc kubenswrapper[4919]: I0109 13:43:45.628201 4919 generic.go:334] "Generic (PLEG): container finished" podID="60c23c00-5a89-4941-b801-db831f268244" containerID="571d9736fcf56fffbcce7e80ea3902631cefff1d5d94b9792bdacc95dff184a9" exitCode=0 Jan 09 13:43:45 crc kubenswrapper[4919]: I0109 13:43:45.628249 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nrtxz" event={"ID":"60c23c00-5a89-4941-b801-db831f268244","Type":"ContainerDied","Data":"571d9736fcf56fffbcce7e80ea3902631cefff1d5d94b9792bdacc95dff184a9"} Jan 09 13:43:46 crc kubenswrapper[4919]: I0109 13:43:46.637840 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nrtxz" event={"ID":"60c23c00-5a89-4941-b801-db831f268244","Type":"ContainerStarted","Data":"8ca7b7afd571a7e32b731792589f02fdcccf14494560c67b7a1bdb91d751975e"} Jan 09 13:43:50 crc kubenswrapper[4919]: I0109 13:43:50.759335 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nrtxz" Jan 09 13:43:50 crc kubenswrapper[4919]: I0109 13:43:50.759564 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nrtxz" Jan 09 13:43:51 crc kubenswrapper[4919]: I0109 13:43:51.796732 4919 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nrtxz" podUID="60c23c00-5a89-4941-b801-db831f268244" containerName="registry-server" probeResult="failure" output=< Jan 09 13:43:51 crc kubenswrapper[4919]: timeout: failed to connect service ":50051" within 1s Jan 09 13:43:51 crc kubenswrapper[4919]: > Jan 09 13:43:57 crc kubenswrapper[4919]: I0109 13:43:57.611551 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nrtxz" podStartSLOduration=12.888480838 podStartE2EDuration="17.611534363s" podCreationTimestamp="2026-01-09 13:43:40 +0000 UTC" firstStartedPulling="2026-01-09 13:43:41.580016689 +0000 UTC m=+801.127856139" lastFinishedPulling="2026-01-09 13:43:46.303070204 +0000 UTC m=+805.850909664" observedRunningTime="2026-01-09 13:43:46.664112453 +0000 UTC m=+806.211951913" watchObservedRunningTime="2026-01-09 13:43:57.611534363 +0000 UTC m=+817.159373813" Jan 09 13:43:57 crc kubenswrapper[4919]: I0109 13:43:57.614621 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-5bdcf498b5-twbl9"] Jan 09 13:43:57 crc 
kubenswrapper[4919]: E0109 13:43:57.614843 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492" containerName="extract" Jan 09 13:43:57 crc kubenswrapper[4919]: I0109 13:43:57.614857 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492" containerName="extract" Jan 09 13:43:57 crc kubenswrapper[4919]: E0109 13:43:57.614870 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492" containerName="util" Jan 09 13:43:57 crc kubenswrapper[4919]: I0109 13:43:57.614877 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492" containerName="util" Jan 09 13:43:57 crc kubenswrapper[4919]: E0109 13:43:57.614888 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492" containerName="pull" Jan 09 13:43:57 crc kubenswrapper[4919]: I0109 13:43:57.614893 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492" containerName="pull" Jan 09 13:43:57 crc kubenswrapper[4919]: I0109 13:43:57.615010 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492" containerName="extract" Jan 09 13:43:57 crc kubenswrapper[4919]: I0109 13:43:57.615429 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5bdcf498b5-twbl9" Jan 09 13:43:57 crc kubenswrapper[4919]: I0109 13:43:57.617470 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 09 13:43:57 crc kubenswrapper[4919]: I0109 13:43:57.617684 4919 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 09 13:43:57 crc kubenswrapper[4919]: I0109 13:43:57.617750 4919 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 09 13:43:57 crc kubenswrapper[4919]: I0109 13:43:57.618809 4919 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-nd287" Jan 09 13:43:57 crc kubenswrapper[4919]: I0109 13:43:57.618923 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 09 13:43:57 crc kubenswrapper[4919]: I0109 13:43:57.633249 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5bdcf498b5-twbl9"] Jan 09 13:43:57 crc kubenswrapper[4919]: I0109 13:43:57.706508 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/df525dd0-f23f-4348-a4e0-4330e0d9ad91-apiservice-cert\") pod \"metallb-operator-controller-manager-5bdcf498b5-twbl9\" (UID: \"df525dd0-f23f-4348-a4e0-4330e0d9ad91\") " pod="metallb-system/metallb-operator-controller-manager-5bdcf498b5-twbl9" Jan 09 13:43:57 crc kubenswrapper[4919]: I0109 13:43:57.706724 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/df525dd0-f23f-4348-a4e0-4330e0d9ad91-webhook-cert\") pod \"metallb-operator-controller-manager-5bdcf498b5-twbl9\" (UID: \"df525dd0-f23f-4348-a4e0-4330e0d9ad91\") " pod="metallb-system/metallb-operator-controller-manager-5bdcf498b5-twbl9" Jan 09 13:43:57 
crc kubenswrapper[4919]: I0109 13:43:57.706817 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4p88\" (UniqueName: \"kubernetes.io/projected/df525dd0-f23f-4348-a4e0-4330e0d9ad91-kube-api-access-t4p88\") pod \"metallb-operator-controller-manager-5bdcf498b5-twbl9\" (UID: \"df525dd0-f23f-4348-a4e0-4330e0d9ad91\") " pod="metallb-system/metallb-operator-controller-manager-5bdcf498b5-twbl9" Jan 09 13:43:57 crc kubenswrapper[4919]: I0109 13:43:57.808495 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4p88\" (UniqueName: \"kubernetes.io/projected/df525dd0-f23f-4348-a4e0-4330e0d9ad91-kube-api-access-t4p88\") pod \"metallb-operator-controller-manager-5bdcf498b5-twbl9\" (UID: \"df525dd0-f23f-4348-a4e0-4330e0d9ad91\") " pod="metallb-system/metallb-operator-controller-manager-5bdcf498b5-twbl9" Jan 09 13:43:57 crc kubenswrapper[4919]: I0109 13:43:57.808716 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/df525dd0-f23f-4348-a4e0-4330e0d9ad91-apiservice-cert\") pod \"metallb-operator-controller-manager-5bdcf498b5-twbl9\" (UID: \"df525dd0-f23f-4348-a4e0-4330e0d9ad91\") " pod="metallb-system/metallb-operator-controller-manager-5bdcf498b5-twbl9" Jan 09 13:43:57 crc kubenswrapper[4919]: I0109 13:43:57.808841 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/df525dd0-f23f-4348-a4e0-4330e0d9ad91-webhook-cert\") pod \"metallb-operator-controller-manager-5bdcf498b5-twbl9\" (UID: \"df525dd0-f23f-4348-a4e0-4330e0d9ad91\") " pod="metallb-system/metallb-operator-controller-manager-5bdcf498b5-twbl9" Jan 09 13:43:57 crc kubenswrapper[4919]: I0109 13:43:57.815734 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/df525dd0-f23f-4348-a4e0-4330e0d9ad91-webhook-cert\") pod \"metallb-operator-controller-manager-5bdcf498b5-twbl9\" (UID: \"df525dd0-f23f-4348-a4e0-4330e0d9ad91\") " pod="metallb-system/metallb-operator-controller-manager-5bdcf498b5-twbl9" Jan 09 13:43:57 crc kubenswrapper[4919]: I0109 13:43:57.817580 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/df525dd0-f23f-4348-a4e0-4330e0d9ad91-apiservice-cert\") pod \"metallb-operator-controller-manager-5bdcf498b5-twbl9\" (UID: \"df525dd0-f23f-4348-a4e0-4330e0d9ad91\") " pod="metallb-system/metallb-operator-controller-manager-5bdcf498b5-twbl9" Jan 09 13:43:57 crc kubenswrapper[4919]: I0109 13:43:57.826987 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4p88\" (UniqueName: \"kubernetes.io/projected/df525dd0-f23f-4348-a4e0-4330e0d9ad91-kube-api-access-t4p88\") pod \"metallb-operator-controller-manager-5bdcf498b5-twbl9\" (UID: \"df525dd0-f23f-4348-a4e0-4330e0d9ad91\") " pod="metallb-system/metallb-operator-controller-manager-5bdcf498b5-twbl9" Jan 09 13:43:57 crc kubenswrapper[4919]: I0109 13:43:57.928080 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5bdcf498b5-twbl9" Jan 09 13:43:57 crc kubenswrapper[4919]: I0109 13:43:57.949339 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-56d5fdcf86-2jwkb"] Jan 09 13:43:57 crc kubenswrapper[4919]: I0109 13:43:57.950334 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-56d5fdcf86-2jwkb" Jan 09 13:43:57 crc kubenswrapper[4919]: I0109 13:43:57.955475 4919 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 09 13:43:57 crc kubenswrapper[4919]: I0109 13:43:57.955629 4919 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-57xft" Jan 09 13:43:57 crc kubenswrapper[4919]: I0109 13:43:57.955711 4919 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 09 13:43:57 crc kubenswrapper[4919]: I0109 13:43:57.972730 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-56d5fdcf86-2jwkb"] Jan 09 13:43:58 crc kubenswrapper[4919]: I0109 13:43:58.011422 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0cb9da00-2fea-4925-b3ef-c9438a2b5c18-apiservice-cert\") pod \"metallb-operator-webhook-server-56d5fdcf86-2jwkb\" (UID: \"0cb9da00-2fea-4925-b3ef-c9438a2b5c18\") " pod="metallb-system/metallb-operator-webhook-server-56d5fdcf86-2jwkb" Jan 09 13:43:58 crc kubenswrapper[4919]: I0109 13:43:58.011731 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0cb9da00-2fea-4925-b3ef-c9438a2b5c18-webhook-cert\") pod \"metallb-operator-webhook-server-56d5fdcf86-2jwkb\" (UID: \"0cb9da00-2fea-4925-b3ef-c9438a2b5c18\") " pod="metallb-system/metallb-operator-webhook-server-56d5fdcf86-2jwkb" Jan 09 13:43:58 crc kubenswrapper[4919]: I0109 13:43:58.011861 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww6rd\" (UniqueName: \"kubernetes.io/projected/0cb9da00-2fea-4925-b3ef-c9438a2b5c18-kube-api-access-ww6rd\") pod \"metallb-operator-webhook-server-56d5fdcf86-2jwkb\" (UID: \"0cb9da00-2fea-4925-b3ef-c9438a2b5c18\") " pod="metallb-system/metallb-operator-webhook-server-56d5fdcf86-2jwkb" Jan 09 13:43:58 crc kubenswrapper[4919]: I0109 13:43:58.113930 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0cb9da00-2fea-4925-b3ef-c9438a2b5c18-apiservice-cert\") pod \"metallb-operator-webhook-server-56d5fdcf86-2jwkb\" (UID: \"0cb9da00-2fea-4925-b3ef-c9438a2b5c18\") " pod="metallb-system/metallb-operator-webhook-server-56d5fdcf86-2jwkb" Jan 09 13:43:58 crc kubenswrapper[4919]: I0109 13:43:58.114020 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0cb9da00-2fea-4925-b3ef-c9438a2b5c18-webhook-cert\") pod \"metallb-operator-webhook-server-56d5fdcf86-2jwkb\" (UID: \"0cb9da00-2fea-4925-b3ef-c9438a2b5c18\") " pod="metallb-system/metallb-operator-webhook-server-56d5fdcf86-2jwkb" Jan 09 13:43:58 crc kubenswrapper[4919]: I0109 13:43:58.114048 4919 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-ww6rd\" (UniqueName: \"kubernetes.io/projected/0cb9da00-2fea-4925-b3ef-c9438a2b5c18-kube-api-access-ww6rd\") pod \"metallb-operator-webhook-server-56d5fdcf86-2jwkb\" (UID: \"0cb9da00-2fea-4925-b3ef-c9438a2b5c18\") " pod="metallb-system/metallb-operator-webhook-server-56d5fdcf86-2jwkb" Jan 09 13:43:58 crc kubenswrapper[4919]: I0109 13:43:58.118835 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0cb9da00-2fea-4925-b3ef-c9438a2b5c18-webhook-cert\") pod \"metallb-operator-webhook-server-56d5fdcf86-2jwkb\" (UID: \"0cb9da00-2fea-4925-b3ef-c9438a2b5c18\") " pod="metallb-system/metallb-operator-webhook-server-56d5fdcf86-2jwkb" Jan 09 13:43:58 crc kubenswrapper[4919]: I0109 13:43:58.119460 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0cb9da00-2fea-4925-b3ef-c9438a2b5c18-apiservice-cert\") pod \"metallb-operator-webhook-server-56d5fdcf86-2jwkb\" (UID: \"0cb9da00-2fea-4925-b3ef-c9438a2b5c18\") " pod="metallb-system/metallb-operator-webhook-server-56d5fdcf86-2jwkb" Jan 09 13:43:58 crc kubenswrapper[4919]: I0109 13:43:58.136880 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ww6rd\" (UniqueName: \"kubernetes.io/projected/0cb9da00-2fea-4925-b3ef-c9438a2b5c18-kube-api-access-ww6rd\") pod \"metallb-operator-webhook-server-56d5fdcf86-2jwkb\" (UID: \"0cb9da00-2fea-4925-b3ef-c9438a2b5c18\") " pod="metallb-system/metallb-operator-webhook-server-56d5fdcf86-2jwkb" Jan 09 13:43:58 crc kubenswrapper[4919]: I0109 13:43:58.175806 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5bdcf498b5-twbl9"] Jan 09 13:43:58 crc kubenswrapper[4919]: I0109 13:43:58.306732 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-56d5fdcf86-2jwkb" Jan 09 13:43:58 crc kubenswrapper[4919]: I0109 13:43:58.700548 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5bdcf498b5-twbl9" event={"ID":"df525dd0-f23f-4348-a4e0-4330e0d9ad91","Type":"ContainerStarted","Data":"239a106ca637025cc5d64789149958562ab2ba1322b833477ec3fe3ff5f584fa"} Jan 09 13:43:58 crc kubenswrapper[4919]: I0109 13:43:58.759007 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-56d5fdcf86-2jwkb"] Jan 09 13:43:58 crc kubenswrapper[4919]: W0109 13:43:58.759517 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0cb9da00_2fea_4925_b3ef_c9438a2b5c18.slice/crio-17ad6a296c5ac8db294ec30433b8183429ff41b57e138518cd9a3041fbf16c8e WatchSource:0}: Error finding container 17ad6a296c5ac8db294ec30433b8183429ff41b57e138518cd9a3041fbf16c8e: Status 404 returned error can't find the container with id 17ad6a296c5ac8db294ec30433b8183429ff41b57e138518cd9a3041fbf16c8e Jan 09 13:43:59 crc kubenswrapper[4919]: I0109 13:43:59.710731 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-56d5fdcf86-2jwkb" event={"ID":"0cb9da00-2fea-4925-b3ef-c9438a2b5c18","Type":"ContainerStarted","Data":"17ad6a296c5ac8db294ec30433b8183429ff41b57e138518cd9a3041fbf16c8e"} Jan 09 13:44:00 crc kubenswrapper[4919]: I0109 13:44:00.869708 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nrtxz" Jan 09 13:44:00 crc kubenswrapper[4919]: I0109 13:44:00.913513 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nrtxz" Jan 09 13:44:03 crc kubenswrapper[4919]: I0109 13:44:03.035297 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nrtxz"] Jan 09 13:44:03 crc kubenswrapper[4919]: I0109 13:44:03.035773 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nrtxz" podUID="60c23c00-5a89-4941-b801-db831f268244" containerName="registry-server" containerID="cri-o://8ca7b7afd571a7e32b731792589f02fdcccf14494560c67b7a1bdb91d751975e" gracePeriod=2 Jan 09 13:44:03 crc kubenswrapper[4919]: I0109 13:44:03.733913 4919 generic.go:334] "Generic (PLEG): container finished" podID="60c23c00-5a89-4941-b801-db831f268244" containerID="8ca7b7afd571a7e32b731792589f02fdcccf14494560c67b7a1bdb91d751975e" exitCode=0 Jan 09 13:44:03 crc kubenswrapper[4919]: I0109 13:44:03.733955 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nrtxz" event={"ID":"60c23c00-5a89-4941-b801-db831f268244","Type":"ContainerDied","Data":"8ca7b7afd571a7e32b731792589f02fdcccf14494560c67b7a1bdb91d751975e"} Jan 09 13:44:05 crc kubenswrapper[4919]: I0109 13:44:05.824823 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nrtxz" Jan 09 13:44:05 crc kubenswrapper[4919]: I0109 13:44:05.967264 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60c23c00-5a89-4941-b801-db831f268244-catalog-content\") pod \"60c23c00-5a89-4941-b801-db831f268244\" (UID: \"60c23c00-5a89-4941-b801-db831f268244\") " Jan 09 13:44:05 crc kubenswrapper[4919]: I0109 13:44:05.967377 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j29mt\" (UniqueName: \"kubernetes.io/projected/60c23c00-5a89-4941-b801-db831f268244-kube-api-access-j29mt\") pod \"60c23c00-5a89-4941-b801-db831f268244\" (UID: \"60c23c00-5a89-4941-b801-db831f268244\") " Jan 09 13:44:05 crc kubenswrapper[4919]: I0109 13:44:05.967436 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60c23c00-5a89-4941-b801-db831f268244-utilities\") pod \"60c23c00-5a89-4941-b801-db831f268244\" (UID: \"60c23c00-5a89-4941-b801-db831f268244\") " Jan 09 13:44:05 crc kubenswrapper[4919]: I0109 13:44:05.968429 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60c23c00-5a89-4941-b801-db831f268244-utilities" (OuterVolumeSpecName: "utilities") pod "60c23c00-5a89-4941-b801-db831f268244" (UID: "60c23c00-5a89-4941-b801-db831f268244"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:44:05 crc kubenswrapper[4919]: I0109 13:44:05.972763 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60c23c00-5a89-4941-b801-db831f268244-kube-api-access-j29mt" (OuterVolumeSpecName: "kube-api-access-j29mt") pod "60c23c00-5a89-4941-b801-db831f268244" (UID: "60c23c00-5a89-4941-b801-db831f268244"). InnerVolumeSpecName "kube-api-access-j29mt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:44:06 crc kubenswrapper[4919]: I0109 13:44:06.068128 4919 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60c23c00-5a89-4941-b801-db831f268244-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 13:44:06 crc kubenswrapper[4919]: I0109 13:44:06.068163 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j29mt\" (UniqueName: \"kubernetes.io/projected/60c23c00-5a89-4941-b801-db831f268244-kube-api-access-j29mt\") on node \"crc\" DevicePath \"\"" Jan 09 13:44:06 crc kubenswrapper[4919]: I0109 13:44:06.085154 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60c23c00-5a89-4941-b801-db831f268244-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "60c23c00-5a89-4941-b801-db831f268244" (UID: "60c23c00-5a89-4941-b801-db831f268244"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:44:06 crc kubenswrapper[4919]: I0109 13:44:06.169325 4919 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60c23c00-5a89-4941-b801-db831f268244-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 13:44:06 crc kubenswrapper[4919]: I0109 13:44:06.815333 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nrtxz" Jan 09 13:44:06 crc kubenswrapper[4919]: I0109 13:44:06.815330 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nrtxz" event={"ID":"60c23c00-5a89-4941-b801-db831f268244","Type":"ContainerDied","Data":"2ebd4a55b51f21ab205f783194c8b6ad419d3a7935831471ed642f9b72d10131"} Jan 09 13:44:06 crc kubenswrapper[4919]: I0109 13:44:06.815408 4919 scope.go:117] "RemoveContainer" containerID="8ca7b7afd571a7e32b731792589f02fdcccf14494560c67b7a1bdb91d751975e" Jan 09 13:44:06 crc kubenswrapper[4919]: I0109 13:44:06.816797 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-56d5fdcf86-2jwkb" event={"ID":"0cb9da00-2fea-4925-b3ef-c9438a2b5c18","Type":"ContainerStarted","Data":"dd4ee1259474932d169b11f30c598d143982d3fd28278ebf9e8fa9fdab7be392"} Jan 09 13:44:06 crc kubenswrapper[4919]: I0109 13:44:06.816997 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-56d5fdcf86-2jwkb" Jan 09 13:44:06 crc kubenswrapper[4919]: I0109 13:44:06.818798 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5bdcf498b5-twbl9" event={"ID":"df525dd0-f23f-4348-a4e0-4330e0d9ad91","Type":"ContainerStarted","Data":"d63e6de1f2d4e4367874cf23940d59aab0ac00aeb0670f6f29b73e7b3c683fa8"} Jan 09 13:44:06 crc kubenswrapper[4919]: I0109 13:44:06.818918 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-5bdcf498b5-twbl9" Jan 09 13:44:06 crc kubenswrapper[4919]: I0109 13:44:06.831394 4919 scope.go:117] "RemoveContainer" containerID="571d9736fcf56fffbcce7e80ea3902631cefff1d5d94b9792bdacc95dff184a9" Jan 09 13:44:06 crc kubenswrapper[4919]: I0109 13:44:06.844291 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-56d5fdcf86-2jwkb" podStartSLOduration=2.6745607270000002 podStartE2EDuration="9.844269867s" podCreationTimestamp="2026-01-09 13:43:57 +0000 UTC" firstStartedPulling="2026-01-09 13:43:58.762634192 +0000 UTC m=+818.310473642" lastFinishedPulling="2026-01-09 13:44:05.932343332 +0000 UTC m=+825.480182782" observedRunningTime="2026-01-09 13:44:06.84357313 +0000 UTC m=+826.391412580" watchObservedRunningTime="2026-01-09 13:44:06.844269867 +0000 UTC m=+826.392109327" Jan 09 13:44:06 crc kubenswrapper[4919]: I0109 13:44:06.853427 4919 scope.go:117] "RemoveContainer" containerID="d21a94eb0aa73ae6bbc350b673540981758fa47d54317ff7430e089fd77970b8" Jan 09 13:44:06 crc kubenswrapper[4919]: I0109 13:44:06.866649 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-5bdcf498b5-twbl9" podStartSLOduration=2.148302054 podStartE2EDuration="9.866628465s" podCreationTimestamp="2026-01-09 13:43:57 +0000 UTC" firstStartedPulling="2026-01-09 13:43:58.192025643 +0000 UTC m=+817.739865093" lastFinishedPulling="2026-01-09 13:44:05.910352054 +0000 UTC m=+825.458191504" observedRunningTime="2026-01-09 13:44:06.866125562 +0000 UTC m=+826.413965022" watchObservedRunningTime="2026-01-09 13:44:06.866628465 +0000 UTC m=+826.414467925" Jan 09 13:44:06 crc kubenswrapper[4919]: I0109 13:44:06.887824 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nrtxz"] Jan 09 13:44:06 crc kubenswrapper[4919]: I0109 
13:44:06.892885 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nrtxz"] Jan 09 13:44:08 crc kubenswrapper[4919]: I0109 13:44:08.760097 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60c23c00-5a89-4941-b801-db831f268244" path="/var/lib/kubelet/pods/60c23c00-5a89-4941-b801-db831f268244/volumes" Jan 09 13:44:18 crc kubenswrapper[4919]: I0109 13:44:18.310834 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-56d5fdcf86-2jwkb" Jan 09 13:44:37 crc kubenswrapper[4919]: I0109 13:44:37.930650 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5bdcf498b5-twbl9" Jan 09 13:44:38 crc kubenswrapper[4919]: I0109 13:44:38.635233 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-fwdhc"] Jan 09 13:44:38 crc kubenswrapper[4919]: E0109 13:44:38.635468 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60c23c00-5a89-4941-b801-db831f268244" containerName="registry-server" Jan 09 13:44:38 crc kubenswrapper[4919]: I0109 13:44:38.635480 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="60c23c00-5a89-4941-b801-db831f268244" containerName="registry-server" Jan 09 13:44:38 crc kubenswrapper[4919]: E0109 13:44:38.635496 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60c23c00-5a89-4941-b801-db831f268244" containerName="extract-content" Jan 09 13:44:38 crc kubenswrapper[4919]: I0109 13:44:38.635503 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="60c23c00-5a89-4941-b801-db831f268244" containerName="extract-content" Jan 09 13:44:38 crc kubenswrapper[4919]: E0109 13:44:38.635513 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60c23c00-5a89-4941-b801-db831f268244" containerName="extract-utilities" Jan 09 13:44:38 crc kubenswrapper[4919]: I0109 13:44:38.635522 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="60c23c00-5a89-4941-b801-db831f268244" containerName="extract-utilities" Jan 09 13:44:38 crc kubenswrapper[4919]: I0109 13:44:38.635657 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="60c23c00-5a89-4941-b801-db831f268244" containerName="registry-server" Jan 09 13:44:38 crc kubenswrapper[4919]: I0109 13:44:38.637534 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-fwdhc" Jan 09 13:44:38 crc kubenswrapper[4919]: I0109 13:44:38.644401 4919 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 09 13:44:38 crc kubenswrapper[4919]: I0109 13:44:38.645281 4919 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-tkz5h" Jan 09 13:44:38 crc kubenswrapper[4919]: I0109 13:44:38.645361 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 09 13:44:38 crc kubenswrapper[4919]: I0109 13:44:38.677393 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-wt2zf"] Jan 09 13:44:38 crc kubenswrapper[4919]: I0109 13:44:38.678042 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-wt2zf" Jan 09 13:44:38 crc kubenswrapper[4919]: I0109 13:44:38.680482 4919 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 09 13:44:38 crc kubenswrapper[4919]: I0109 13:44:38.788463 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9b452f91-af7c-48e8-b137-3c39a355305a-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-wt2zf\" (UID: \"9b452f91-af7c-48e8-b137-3c39a355305a\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-wt2zf" Jan 09 13:44:38 crc kubenswrapper[4919]: I0109 13:44:38.788525 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg4sm\" (UniqueName: \"kubernetes.io/projected/9b452f91-af7c-48e8-b137-3c39a355305a-kube-api-access-bg4sm\") pod \"frr-k8s-webhook-server-7784b6fcf-wt2zf\" (UID: \"9b452f91-af7c-48e8-b137-3c39a355305a\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-wt2zf" Jan 09 13:44:38 crc kubenswrapper[4919]: I0109 13:44:38.788549 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/27cc21f5-c63b-4678-a1f6-6be9c13f32fc-frr-conf\") pod \"frr-k8s-fwdhc\" (UID: \"27cc21f5-c63b-4678-a1f6-6be9c13f32fc\") " pod="metallb-system/frr-k8s-fwdhc" Jan 09 13:44:38 crc kubenswrapper[4919]: I0109 13:44:38.788598 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/27cc21f5-c63b-4678-a1f6-6be9c13f32fc-reloader\") pod \"frr-k8s-fwdhc\" (UID: \"27cc21f5-c63b-4678-a1f6-6be9c13f32fc\") " pod="metallb-system/frr-k8s-fwdhc" Jan 09 13:44:38 crc kubenswrapper[4919]: I0109 13:44:38.788721 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/27cc21f5-c63b-4678-a1f6-6be9c13f32fc-metrics\") pod \"frr-k8s-fwdhc\" (UID: \"27cc21f5-c63b-4678-a1f6-6be9c13f32fc\") " pod="metallb-system/frr-k8s-fwdhc" Jan 09 13:44:38 crc kubenswrapper[4919]: I0109 13:44:38.788756 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/27cc21f5-c63b-4678-a1f6-6be9c13f32fc-frr-startup\") pod \"frr-k8s-fwdhc\" (UID: \"27cc21f5-c63b-4678-a1f6-6be9c13f32fc\") " pod="metallb-system/frr-k8s-fwdhc" Jan 09 13:44:38 crc kubenswrapper[4919]: I0109 13:44:38.788778 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/27cc21f5-c63b-4678-a1f6-6be9c13f32fc-metrics-certs\") pod \"frr-k8s-fwdhc\" (UID: \"27cc21f5-c63b-4678-a1f6-6be9c13f32fc\") " pod="metallb-system/frr-k8s-fwdhc" Jan 09 13:44:38 crc kubenswrapper[4919]: I0109 13:44:38.788834 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/27cc21f5-c63b-4678-a1f6-6be9c13f32fc-frr-sockets\") pod \"frr-k8s-fwdhc\" (UID: \"27cc21f5-c63b-4678-a1f6-6be9c13f32fc\") " pod="metallb-system/frr-k8s-fwdhc" Jan 09 13:44:38 crc kubenswrapper[4919]: I0109 13:44:38.788890 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkpkf\" 
(UniqueName: \"kubernetes.io/projected/27cc21f5-c63b-4678-a1f6-6be9c13f32fc-kube-api-access-pkpkf\") pod \"frr-k8s-fwdhc\" (UID: \"27cc21f5-c63b-4678-a1f6-6be9c13f32fc\") " pod="metallb-system/frr-k8s-fwdhc" Jan 09 13:44:38 crc kubenswrapper[4919]: I0109 13:44:38.797815 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-wt2zf"] Jan 09 13:44:38 crc kubenswrapper[4919]: I0109 13:44:38.821035 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-6kcvb"] Jan 09 13:44:38 crc kubenswrapper[4919]: I0109 13:44:38.822036 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-6kcvb" Jan 09 13:44:38 crc kubenswrapper[4919]: I0109 13:44:38.829066 4919 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 09 13:44:38 crc kubenswrapper[4919]: I0109 13:44:38.829358 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 09 13:44:38 crc kubenswrapper[4919]: I0109 13:44:38.829468 4919 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 09 13:44:38 crc kubenswrapper[4919]: I0109 13:44:38.829469 4919 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-d5bgm" Jan 09 13:44:38 crc kubenswrapper[4919]: I0109 13:44:38.835412 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-5bddd4b946-grs8k"] Jan 09 13:44:38 crc kubenswrapper[4919]: I0109 13:44:38.836838 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-5bddd4b946-grs8k" Jan 09 13:44:38 crc kubenswrapper[4919]: I0109 13:44:38.840166 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5bddd4b946-grs8k"] Jan 09 13:44:38 crc kubenswrapper[4919]: I0109 13:44:38.843433 4919 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 09 13:44:38 crc kubenswrapper[4919]: I0109 13:44:38.889976 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkpkf\" (UniqueName: \"kubernetes.io/projected/27cc21f5-c63b-4678-a1f6-6be9c13f32fc-kube-api-access-pkpkf\") pod \"frr-k8s-fwdhc\" (UID: \"27cc21f5-c63b-4678-a1f6-6be9c13f32fc\") " pod="metallb-system/frr-k8s-fwdhc" Jan 09 13:44:38 crc kubenswrapper[4919]: I0109 13:44:38.890030 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9b452f91-af7c-48e8-b137-3c39a355305a-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-wt2zf\" (UID: \"9b452f91-af7c-48e8-b137-3c39a355305a\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-wt2zf" Jan 09 13:44:38 crc kubenswrapper[4919]: I0109 13:44:38.890058 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bg4sm\" (UniqueName: \"kubernetes.io/projected/9b452f91-af7c-48e8-b137-3c39a355305a-kube-api-access-bg4sm\") pod \"frr-k8s-webhook-server-7784b6fcf-wt2zf\" (UID: \"9b452f91-af7c-48e8-b137-3c39a355305a\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-wt2zf" Jan 09 13:44:38 crc kubenswrapper[4919]: I0109 13:44:38.890076 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/27cc21f5-c63b-4678-a1f6-6be9c13f32fc-frr-conf\") pod \"frr-k8s-fwdhc\" 
(UID: \"27cc21f5-c63b-4678-a1f6-6be9c13f32fc\") " pod="metallb-system/frr-k8s-fwdhc" Jan 09 13:44:38 crc kubenswrapper[4919]: I0109 13:44:38.890098 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/27cc21f5-c63b-4678-a1f6-6be9c13f32fc-reloader\") pod \"frr-k8s-fwdhc\" (UID: \"27cc21f5-c63b-4678-a1f6-6be9c13f32fc\") " pod="metallb-system/frr-k8s-fwdhc" Jan 09 13:44:38 crc kubenswrapper[4919]: I0109 13:44:38.890112 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/27cc21f5-c63b-4678-a1f6-6be9c13f32fc-metrics\") pod \"frr-k8s-fwdhc\" (UID: \"27cc21f5-c63b-4678-a1f6-6be9c13f32fc\") " pod="metallb-system/frr-k8s-fwdhc" Jan 09 13:44:38 crc kubenswrapper[4919]: I0109 13:44:38.890134 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/27cc21f5-c63b-4678-a1f6-6be9c13f32fc-frr-startup\") pod \"frr-k8s-fwdhc\" (UID: \"27cc21f5-c63b-4678-a1f6-6be9c13f32fc\") " pod="metallb-system/frr-k8s-fwdhc" Jan 09 13:44:38 crc kubenswrapper[4919]: I0109 13:44:38.890150 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/27cc21f5-c63b-4678-a1f6-6be9c13f32fc-metrics-certs\") pod \"frr-k8s-fwdhc\" (UID: \"27cc21f5-c63b-4678-a1f6-6be9c13f32fc\") " pod="metallb-system/frr-k8s-fwdhc" Jan 09 13:44:38 crc kubenswrapper[4919]: I0109 13:44:38.890197 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/27cc21f5-c63b-4678-a1f6-6be9c13f32fc-frr-sockets\") pod \"frr-k8s-fwdhc\" (UID: \"27cc21f5-c63b-4678-a1f6-6be9c13f32fc\") " pod="metallb-system/frr-k8s-fwdhc" Jan 09 13:44:38 crc kubenswrapper[4919]: I0109 13:44:38.890608 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/27cc21f5-c63b-4678-a1f6-6be9c13f32fc-frr-sockets\") pod \"frr-k8s-fwdhc\" (UID: \"27cc21f5-c63b-4678-a1f6-6be9c13f32fc\") " pod="metallb-system/frr-k8s-fwdhc" Jan 09 13:44:38 crc kubenswrapper[4919]: I0109 13:44:38.891907 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/27cc21f5-c63b-4678-a1f6-6be9c13f32fc-metrics\") pod \"frr-k8s-fwdhc\" (UID: \"27cc21f5-c63b-4678-a1f6-6be9c13f32fc\") " pod="metallb-system/frr-k8s-fwdhc" Jan 09 13:44:38 crc kubenswrapper[4919]: I0109 13:44:38.892058 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/27cc21f5-c63b-4678-a1f6-6be9c13f32fc-frr-conf\") pod \"frr-k8s-fwdhc\" (UID: \"27cc21f5-c63b-4678-a1f6-6be9c13f32fc\") " pod="metallb-system/frr-k8s-fwdhc" Jan 09 13:44:38 crc kubenswrapper[4919]: I0109 13:44:38.892092 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/27cc21f5-c63b-4678-a1f6-6be9c13f32fc-reloader\") pod \"frr-k8s-fwdhc\" (UID: \"27cc21f5-c63b-4678-a1f6-6be9c13f32fc\") " pod="metallb-system/frr-k8s-fwdhc" Jan 09 13:44:38 crc kubenswrapper[4919]: E0109 13:44:38.892156 4919 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Jan 09 13:44:38 crc kubenswrapper[4919]: E0109 13:44:38.892194 4919 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/27cc21f5-c63b-4678-a1f6-6be9c13f32fc-metrics-certs podName:27cc21f5-c63b-4678-a1f6-6be9c13f32fc nodeName:}" failed. No retries permitted until 2026-01-09 13:44:39.392179087 +0000 UTC m=+858.940018537 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/27cc21f5-c63b-4678-a1f6-6be9c13f32fc-metrics-certs") pod "frr-k8s-fwdhc" (UID: "27cc21f5-c63b-4678-a1f6-6be9c13f32fc") : secret "frr-k8s-certs-secret" not found Jan 09 13:44:38 crc kubenswrapper[4919]: I0109 13:44:38.892868 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/27cc21f5-c63b-4678-a1f6-6be9c13f32fc-frr-startup\") pod \"frr-k8s-fwdhc\" (UID: \"27cc21f5-c63b-4678-a1f6-6be9c13f32fc\") " pod="metallb-system/frr-k8s-fwdhc" Jan 09 13:44:38 crc kubenswrapper[4919]: I0109 13:44:38.898315 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9b452f91-af7c-48e8-b137-3c39a355305a-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-wt2zf\" (UID: \"9b452f91-af7c-48e8-b137-3c39a355305a\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-wt2zf" Jan 09 13:44:38 crc kubenswrapper[4919]: I0109 13:44:38.915295 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkpkf\" (UniqueName: \"kubernetes.io/projected/27cc21f5-c63b-4678-a1f6-6be9c13f32fc-kube-api-access-pkpkf\") pod \"frr-k8s-fwdhc\" (UID: \"27cc21f5-c63b-4678-a1f6-6be9c13f32fc\") " pod="metallb-system/frr-k8s-fwdhc" Jan 09 13:44:38 crc kubenswrapper[4919]: I0109 13:44:38.915385 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bg4sm\" (UniqueName: \"kubernetes.io/projected/9b452f91-af7c-48e8-b137-3c39a355305a-kube-api-access-bg4sm\") pod \"frr-k8s-webhook-server-7784b6fcf-wt2zf\" (UID: \"9b452f91-af7c-48e8-b137-3c39a355305a\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-wt2zf" Jan 09 13:44:38 crc kubenswrapper[4919]: I0109 13:44:38.991765 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/256aa53e-2a76-437e-ac55-a8766f9e5c00-metrics-certs\") pod \"controller-5bddd4b946-grs8k\" (UID: \"256aa53e-2a76-437e-ac55-a8766f9e5c00\") " pod="metallb-system/controller-5bddd4b946-grs8k" Jan 09 13:44:38 crc kubenswrapper[4919]: I0109 13:44:38.991809 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvfv2\" (UniqueName: \"kubernetes.io/projected/256aa53e-2a76-437e-ac55-a8766f9e5c00-kube-api-access-mvfv2\") pod \"controller-5bddd4b946-grs8k\" (UID: \"256aa53e-2a76-437e-ac55-a8766f9e5c00\") " pod="metallb-system/controller-5bddd4b946-grs8k" Jan 09 13:44:38 crc kubenswrapper[4919]: I0109 13:44:38.991834 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/33ed1894-533c-4314-b01c-758a5c2eebf8-memberlist\") pod \"speaker-6kcvb\" (UID: \"33ed1894-533c-4314-b01c-758a5c2eebf8\") " pod="metallb-system/speaker-6kcvb" Jan 09 13:44:38 crc kubenswrapper[4919]: I0109 13:44:38.991867 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/256aa53e-2a76-437e-ac55-a8766f9e5c00-cert\") pod \"controller-5bddd4b946-grs8k\" (UID: 
\"256aa53e-2a76-437e-ac55-a8766f9e5c00\") " pod="metallb-system/controller-5bddd4b946-grs8k" Jan 09 13:44:38 crc kubenswrapper[4919]: I0109 13:44:38.991888 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/33ed1894-533c-4314-b01c-758a5c2eebf8-metrics-certs\") pod \"speaker-6kcvb\" (UID: \"33ed1894-533c-4314-b01c-758a5c2eebf8\") " pod="metallb-system/speaker-6kcvb" Jan 09 13:44:38 crc kubenswrapper[4919]: I0109 13:44:38.991940 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/33ed1894-533c-4314-b01c-758a5c2eebf8-metallb-excludel2\") pod \"speaker-6kcvb\" (UID: \"33ed1894-533c-4314-b01c-758a5c2eebf8\") " pod="metallb-system/speaker-6kcvb" Jan 09 13:44:38 crc kubenswrapper[4919]: I0109 13:44:38.991963 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vw95x\" (UniqueName: \"kubernetes.io/projected/33ed1894-533c-4314-b01c-758a5c2eebf8-kube-api-access-vw95x\") pod \"speaker-6kcvb\" (UID: \"33ed1894-533c-4314-b01c-758a5c2eebf8\") " pod="metallb-system/speaker-6kcvb" Jan 09 13:44:39 crc kubenswrapper[4919]: I0109 13:44:39.049079 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-wt2zf" Jan 09 13:44:39 crc kubenswrapper[4919]: I0109 13:44:39.092786 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vw95x\" (UniqueName: \"kubernetes.io/projected/33ed1894-533c-4314-b01c-758a5c2eebf8-kube-api-access-vw95x\") pod \"speaker-6kcvb\" (UID: \"33ed1894-533c-4314-b01c-758a5c2eebf8\") " pod="metallb-system/speaker-6kcvb" Jan 09 13:44:39 crc kubenswrapper[4919]: I0109 13:44:39.093131 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/256aa53e-2a76-437e-ac55-a8766f9e5c00-metrics-certs\") pod \"controller-5bddd4b946-grs8k\" (UID: \"256aa53e-2a76-437e-ac55-a8766f9e5c00\") " pod="metallb-system/controller-5bddd4b946-grs8k" Jan 09 13:44:39 crc kubenswrapper[4919]: I0109 13:44:39.093278 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvfv2\" (UniqueName: \"kubernetes.io/projected/256aa53e-2a76-437e-ac55-a8766f9e5c00-kube-api-access-mvfv2\") pod \"controller-5bddd4b946-grs8k\" (UID: \"256aa53e-2a76-437e-ac55-a8766f9e5c00\") " pod="metallb-system/controller-5bddd4b946-grs8k" Jan 09 13:44:39 crc kubenswrapper[4919]: I0109 13:44:39.093387 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/33ed1894-533c-4314-b01c-758a5c2eebf8-memberlist\") pod \"speaker-6kcvb\" (UID: \"33ed1894-533c-4314-b01c-758a5c2eebf8\") " pod="metallb-system/speaker-6kcvb" Jan 09 13:44:39 crc kubenswrapper[4919]: I0109 13:44:39.093515 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/256aa53e-2a76-437e-ac55-a8766f9e5c00-cert\") pod \"controller-5bddd4b946-grs8k\" (UID: \"256aa53e-2a76-437e-ac55-a8766f9e5c00\") " pod="metallb-system/controller-5bddd4b946-grs8k" Jan 09 13:44:39 crc kubenswrapper[4919]: I0109 13:44:39.093622 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/33ed1894-533c-4314-b01c-758a5c2eebf8-metrics-certs\") pod \"speaker-6kcvb\" (UID: \"33ed1894-533c-4314-b01c-758a5c2eebf8\") " pod="metallb-system/speaker-6kcvb" Jan 09 13:44:39 crc kubenswrapper[4919]: I0109 13:44:39.093756 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/33ed1894-533c-4314-b01c-758a5c2eebf8-metallb-excludel2\") pod \"speaker-6kcvb\" (UID: \"33ed1894-533c-4314-b01c-758a5c2eebf8\") " pod="metallb-system/speaker-6kcvb" Jan 09 13:44:39 crc kubenswrapper[4919]: E0109 13:44:39.093508 4919 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 09 13:44:39 crc kubenswrapper[4919]: E0109 13:44:39.093991 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33ed1894-533c-4314-b01c-758a5c2eebf8-memberlist podName:33ed1894-533c-4314-b01c-758a5c2eebf8 nodeName:}" failed. No retries permitted until 2026-01-09 13:44:39.593968709 +0000 UTC m=+859.141808169 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/33ed1894-533c-4314-b01c-758a5c2eebf8-memberlist") pod "speaker-6kcvb" (UID: "33ed1894-533c-4314-b01c-758a5c2eebf8") : secret "metallb-memberlist" not found Jan 09 13:44:39 crc kubenswrapper[4919]: I0109 13:44:39.094859 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/33ed1894-533c-4314-b01c-758a5c2eebf8-metallb-excludel2\") pod \"speaker-6kcvb\" (UID: \"33ed1894-533c-4314-b01c-758a5c2eebf8\") " pod="metallb-system/speaker-6kcvb" Jan 09 13:44:39 crc kubenswrapper[4919]: I0109 13:44:39.096065 4919 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 09 13:44:39 crc kubenswrapper[4919]: I0109 13:44:39.097674 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/33ed1894-533c-4314-b01c-758a5c2eebf8-metrics-certs\") pod \"speaker-6kcvb\" (UID: \"33ed1894-533c-4314-b01c-758a5c2eebf8\") " pod="metallb-system/speaker-6kcvb" Jan 09 13:44:39 crc kubenswrapper[4919]: I0109 13:44:39.106504 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/256aa53e-2a76-437e-ac55-a8766f9e5c00-cert\") pod \"controller-5bddd4b946-grs8k\" (UID: \"256aa53e-2a76-437e-ac55-a8766f9e5c00\") " pod="metallb-system/controller-5bddd4b946-grs8k" Jan 09 13:44:39 crc kubenswrapper[4919]: I0109 13:44:39.106641 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/256aa53e-2a76-437e-ac55-a8766f9e5c00-metrics-certs\") pod \"controller-5bddd4b946-grs8k\" (UID: \"256aa53e-2a76-437e-ac55-a8766f9e5c00\") " pod="metallb-system/controller-5bddd4b946-grs8k" Jan 09 13:44:39 crc kubenswrapper[4919]: I0109 13:44:39.111952 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vw95x\" (UniqueName: \"kubernetes.io/projected/33ed1894-533c-4314-b01c-758a5c2eebf8-kube-api-access-vw95x\") pod \"speaker-6kcvb\" (UID: \"33ed1894-533c-4314-b01c-758a5c2eebf8\") " pod="metallb-system/speaker-6kcvb" Jan 09 13:44:39 crc kubenswrapper[4919]: I0109 13:44:39.112639 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvfv2\" (UniqueName: 
\"kubernetes.io/projected/256aa53e-2a76-437e-ac55-a8766f9e5c00-kube-api-access-mvfv2\") pod \"controller-5bddd4b946-grs8k\" (UID: \"256aa53e-2a76-437e-ac55-a8766f9e5c00\") " pod="metallb-system/controller-5bddd4b946-grs8k" Jan 09 13:44:39 crc kubenswrapper[4919]: I0109 13:44:39.159053 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-5bddd4b946-grs8k" Jan 09 13:44:39 crc kubenswrapper[4919]: I0109 13:44:39.268402 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-wt2zf"] Jan 09 13:44:39 crc kubenswrapper[4919]: I0109 13:44:39.373630 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5bddd4b946-grs8k"] Jan 09 13:44:39 crc kubenswrapper[4919]: W0109 13:44:39.380131 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod256aa53e_2a76_437e_ac55_a8766f9e5c00.slice/crio-007b2ede7f14a6b893162a754d299e1c320cba0b6c104d64889ef2d39bb20259 WatchSource:0}: Error finding container 007b2ede7f14a6b893162a754d299e1c320cba0b6c104d64889ef2d39bb20259: Status 404 returned error can't find the container with id 007b2ede7f14a6b893162a754d299e1c320cba0b6c104d64889ef2d39bb20259 Jan 09 13:44:39 crc kubenswrapper[4919]: I0109 13:44:39.398747 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/27cc21f5-c63b-4678-a1f6-6be9c13f32fc-metrics-certs\") pod \"frr-k8s-fwdhc\" (UID: \"27cc21f5-c63b-4678-a1f6-6be9c13f32fc\") " pod="metallb-system/frr-k8s-fwdhc" Jan 09 13:44:39 crc kubenswrapper[4919]: I0109 13:44:39.405139 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/27cc21f5-c63b-4678-a1f6-6be9c13f32fc-metrics-certs\") pod \"frr-k8s-fwdhc\" (UID: \"27cc21f5-c63b-4678-a1f6-6be9c13f32fc\") " pod="metallb-system/frr-k8s-fwdhc" Jan 09 13:44:39 crc kubenswrapper[4919]: I0109 13:44:39.601967 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/33ed1894-533c-4314-b01c-758a5c2eebf8-memberlist\") pod \"speaker-6kcvb\" (UID: \"33ed1894-533c-4314-b01c-758a5c2eebf8\") " pod="metallb-system/speaker-6kcvb" Jan 09 13:44:39 crc kubenswrapper[4919]: E0109 13:44:39.602378 4919 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 09 13:44:39 crc kubenswrapper[4919]: E0109 13:44:39.602507 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33ed1894-533c-4314-b01c-758a5c2eebf8-memberlist podName:33ed1894-533c-4314-b01c-758a5c2eebf8 nodeName:}" failed. No retries permitted until 2026-01-09 13:44:40.602489634 +0000 UTC m=+860.150329084 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/33ed1894-533c-4314-b01c-758a5c2eebf8-memberlist") pod "speaker-6kcvb" (UID: "33ed1894-533c-4314-b01c-758a5c2eebf8") : secret "metallb-memberlist" not found Jan 09 13:44:39 crc kubenswrapper[4919]: I0109 13:44:39.620421 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-fwdhc" Jan 09 13:44:40 crc kubenswrapper[4919]: I0109 13:44:40.013481 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-wt2zf" event={"ID":"9b452f91-af7c-48e8-b137-3c39a355305a","Type":"ContainerStarted","Data":"fe16f2de527755c2d74bac42272886541e1a5bbe306af51ff69aa9a943e710eb"} Jan 09 13:44:40 crc kubenswrapper[4919]: I0109 13:44:40.014536 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-grs8k" event={"ID":"256aa53e-2a76-437e-ac55-a8766f9e5c00","Type":"ContainerStarted","Data":"007b2ede7f14a6b893162a754d299e1c320cba0b6c104d64889ef2d39bb20259"} Jan 09 13:44:40 crc kubenswrapper[4919]: I0109 13:44:40.615102 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/33ed1894-533c-4314-b01c-758a5c2eebf8-memberlist\") pod \"speaker-6kcvb\" (UID: \"33ed1894-533c-4314-b01c-758a5c2eebf8\") " pod="metallb-system/speaker-6kcvb" Jan 09 13:44:40 crc kubenswrapper[4919]: E0109 13:44:40.615298 4919 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 09 13:44:40 crc kubenswrapper[4919]: E0109 13:44:40.615387 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33ed1894-533c-4314-b01c-758a5c2eebf8-memberlist podName:33ed1894-533c-4314-b01c-758a5c2eebf8 nodeName:}" failed. No retries permitted until 2026-01-09 13:44:42.615365301 +0000 UTC m=+862.163204751 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/33ed1894-533c-4314-b01c-758a5c2eebf8-memberlist") pod "speaker-6kcvb" (UID: "33ed1894-533c-4314-b01c-758a5c2eebf8") : secret "metallb-memberlist" not found Jan 09 13:44:41 crc kubenswrapper[4919]: I0109 13:44:41.020012 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fwdhc" event={"ID":"27cc21f5-c63b-4678-a1f6-6be9c13f32fc","Type":"ContainerStarted","Data":"c4011e599341effc58d290efcbfea62a39122ba6a9c3676f9752c53d33948662"} Jan 09 13:44:41 crc kubenswrapper[4919]: I0109 13:44:41.025862 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-grs8k" event={"ID":"256aa53e-2a76-437e-ac55-a8766f9e5c00","Type":"ContainerStarted","Data":"96d7576e85bd322f37981dca518dce64eb2144dc4ec6da26b96f91c2cce4b453"} Jan 09 13:44:42 crc kubenswrapper[4919]: I0109 13:44:42.039722 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-grs8k" event={"ID":"256aa53e-2a76-437e-ac55-a8766f9e5c00","Type":"ContainerStarted","Data":"e799e891f5de4436383c62a4a299656eb93be61b67f67584d98137aa5b878e45"} Jan 09 13:44:42 crc kubenswrapper[4919]: I0109 13:44:42.039847 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-5bddd4b946-grs8k" Jan 09 13:44:42 crc kubenswrapper[4919]: I0109 13:44:42.064753 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-5bddd4b946-grs8k" podStartSLOduration=4.064736907 podStartE2EDuration="4.064736907s" podCreationTimestamp="2026-01-09 13:44:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:44:42.063341672 +0000 UTC m=+861.611181122" watchObservedRunningTime="2026-01-09 13:44:42.064736907 +0000 UTC m=+861.612576357" 
Jan 09 13:44:42 crc kubenswrapper[4919]: I0109 13:44:42.650956 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/33ed1894-533c-4314-b01c-758a5c2eebf8-memberlist\") pod \"speaker-6kcvb\" (UID: \"33ed1894-533c-4314-b01c-758a5c2eebf8\") " pod="metallb-system/speaker-6kcvb" Jan 09 13:44:42 crc kubenswrapper[4919]: I0109 13:44:42.674858 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/33ed1894-533c-4314-b01c-758a5c2eebf8-memberlist\") pod \"speaker-6kcvb\" (UID: \"33ed1894-533c-4314-b01c-758a5c2eebf8\") " pod="metallb-system/speaker-6kcvb" Jan 09 13:44:42 crc kubenswrapper[4919]: I0109 13:44:42.747739 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-6kcvb" Jan 09 13:44:42 crc kubenswrapper[4919]: W0109 13:44:42.773497 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33ed1894_533c_4314_b01c_758a5c2eebf8.slice/crio-8e30cf6b233cd515eac3997aff992dbad84aa7ce98473b7173c84b1a7eabaf10 WatchSource:0}: Error finding container 8e30cf6b233cd515eac3997aff992dbad84aa7ce98473b7173c84b1a7eabaf10: Status 404 returned error can't find the container with id 8e30cf6b233cd515eac3997aff992dbad84aa7ce98473b7173c84b1a7eabaf10 Jan 09 13:44:43 crc kubenswrapper[4919]: I0109 13:44:43.054890 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-6kcvb" event={"ID":"33ed1894-533c-4314-b01c-758a5c2eebf8","Type":"ContainerStarted","Data":"8e30cf6b233cd515eac3997aff992dbad84aa7ce98473b7173c84b1a7eabaf10"} Jan 09 13:44:44 crc kubenswrapper[4919]: I0109 13:44:44.077260 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-6kcvb" event={"ID":"33ed1894-533c-4314-b01c-758a5c2eebf8","Type":"ContainerStarted","Data":"5f083d17c15deed2022203793ec6a5fef76ead53fe34478a8864269c694168af"} Jan 09 13:44:44 crc kubenswrapper[4919]: I0109 13:44:44.077586 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-6kcvb" Jan 09 13:44:44 crc kubenswrapper[4919]: I0109 13:44:44.077598 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-6kcvb" event={"ID":"33ed1894-533c-4314-b01c-758a5c2eebf8","Type":"ContainerStarted","Data":"4364b5ae2c8e0d20a998294e7d13639a9238492874169505041996ab1f13c1e1"} Jan 09 13:44:44 crc kubenswrapper[4919]: I0109 13:44:44.096021 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-6kcvb" podStartSLOduration=6.096001435 podStartE2EDuration="6.096001435s" podCreationTimestamp="2026-01-09 13:44:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:44:44.095751019 +0000 UTC m=+863.643590479" watchObservedRunningTime="2026-01-09 13:44:44.096001435 +0000 UTC m=+863.643840885" Jan 09 13:44:49 crc kubenswrapper[4919]: I0109 13:44:49.191974 4919 generic.go:334] "Generic (PLEG): container finished" podID="27cc21f5-c63b-4678-a1f6-6be9c13f32fc" containerID="52a748f64b8e39f504e934b9c9d4cdf4babcdffef373a55f6f69562746096929" exitCode=0 Jan 09 13:44:49 crc kubenswrapper[4919]: I0109 13:44:49.192023 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fwdhc" 
event={"ID":"27cc21f5-c63b-4678-a1f6-6be9c13f32fc","Type":"ContainerDied","Data":"52a748f64b8e39f504e934b9c9d4cdf4babcdffef373a55f6f69562746096929"} Jan 09 13:44:49 crc kubenswrapper[4919]: I0109 13:44:49.195589 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-wt2zf" event={"ID":"9b452f91-af7c-48e8-b137-3c39a355305a","Type":"ContainerStarted","Data":"faa343065075505c1ea0ac3d5b43e61e7336963f3a19eac3b30a2cd59698e2ca"} Jan 09 13:44:49 crc kubenswrapper[4919]: I0109 13:44:49.195959 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-wt2zf" Jan 09 13:44:49 crc kubenswrapper[4919]: I0109 13:44:49.229256 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-wt2zf" podStartSLOduration=1.883874535 podStartE2EDuration="11.229234921s" podCreationTimestamp="2026-01-09 13:44:38 +0000 UTC" firstStartedPulling="2026-01-09 13:44:39.283500471 +0000 UTC m=+858.831339921" lastFinishedPulling="2026-01-09 13:44:48.628860857 +0000 UTC m=+868.176700307" observedRunningTime="2026-01-09 13:44:49.227675743 +0000 UTC m=+868.775515193" watchObservedRunningTime="2026-01-09 13:44:49.229234921 +0000 UTC m=+868.777074371" Jan 09 13:44:50 crc kubenswrapper[4919]: I0109 13:44:50.201806 4919 generic.go:334] "Generic (PLEG): container finished" podID="27cc21f5-c63b-4678-a1f6-6be9c13f32fc" containerID="eb2d31534957365f090d735bf5cb481979d0124c3d702f003e21e3a444365797" exitCode=0 Jan 09 13:44:50 crc kubenswrapper[4919]: I0109 13:44:50.201854 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fwdhc" event={"ID":"27cc21f5-c63b-4678-a1f6-6be9c13f32fc","Type":"ContainerDied","Data":"eb2d31534957365f090d735bf5cb481979d0124c3d702f003e21e3a444365797"} Jan 09 13:44:51 crc kubenswrapper[4919]: I0109 13:44:51.209629 4919 generic.go:334] "Generic (PLEG): container finished" podID="27cc21f5-c63b-4678-a1f6-6be9c13f32fc" containerID="7b07794cf5828e65f4d960dbd31f20e47033ef8da20867f4d6b58f852ba3de5f" exitCode=0 Jan 09 13:44:51 crc kubenswrapper[4919]: I0109 13:44:51.209915 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fwdhc" event={"ID":"27cc21f5-c63b-4678-a1f6-6be9c13f32fc","Type":"ContainerDied","Data":"7b07794cf5828e65f4d960dbd31f20e47033ef8da20867f4d6b58f852ba3de5f"} Jan 09 13:44:52 crc kubenswrapper[4919]: I0109 13:44:52.306995 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fwdhc" event={"ID":"27cc21f5-c63b-4678-a1f6-6be9c13f32fc","Type":"ContainerStarted","Data":"4e4f176775d15d9ca0ac8ed3aae027fbaeb08ceb64ef09198950aa8c462276db"} Jan 09 13:44:52 crc kubenswrapper[4919]: I0109 13:44:52.307312 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fwdhc" event={"ID":"27cc21f5-c63b-4678-a1f6-6be9c13f32fc","Type":"ContainerStarted","Data":"1f83d610a7c6c99435d3317cd3207a2c4b32ab70f0d06b0738e732d71484f050"} Jan 09 13:44:52 crc kubenswrapper[4919]: I0109 13:44:52.307322 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fwdhc" event={"ID":"27cc21f5-c63b-4678-a1f6-6be9c13f32fc","Type":"ContainerStarted","Data":"0439281ce96930bd493fbc8bcf869a380fd4b34461e3a84f358e85fe2fd9e47b"} Jan 09 13:44:52 crc kubenswrapper[4919]: I0109 13:44:52.307332 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fwdhc" 
event={"ID":"27cc21f5-c63b-4678-a1f6-6be9c13f32fc","Type":"ContainerStarted","Data":"37bef48611de54c7380d68ff80b7fc4e7d3df79f04e569c39fb12de31faf7833"} Jan 09 13:44:52 crc kubenswrapper[4919]: I0109 13:44:52.307340 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fwdhc" event={"ID":"27cc21f5-c63b-4678-a1f6-6be9c13f32fc","Type":"ContainerStarted","Data":"9baeed539c4cddc2f10369ad26b3e3ddd52e048521122e05cefaa34e32dcd21d"} Jan 09 13:44:53 crc kubenswrapper[4919]: I0109 13:44:53.315665 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fwdhc" event={"ID":"27cc21f5-c63b-4678-a1f6-6be9c13f32fc","Type":"ContainerStarted","Data":"11ec06fd1d8bf0c6801bddfb1e6859e14aed2bfb1ccafb275f03854da04276f0"} Jan 09 13:44:53 crc kubenswrapper[4919]: I0109 13:44:53.318681 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-fwdhc" Jan 09 13:44:53 crc kubenswrapper[4919]: I0109 13:44:53.339855 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-fwdhc" podStartSLOduration=7.555683393 podStartE2EDuration="15.339839084s" podCreationTimestamp="2026-01-09 13:44:38 +0000 UTC" firstStartedPulling="2026-01-09 13:44:40.825489916 +0000 UTC m=+860.373329366" lastFinishedPulling="2026-01-09 13:44:48.609645607 +0000 UTC m=+868.157485057" observedRunningTime="2026-01-09 13:44:53.339106286 +0000 UTC m=+872.886945736" watchObservedRunningTime="2026-01-09 13:44:53.339839084 +0000 UTC m=+872.887678534" Jan 09 13:44:54 crc kubenswrapper[4919]: I0109 13:44:54.621405 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-fwdhc" Jan 09 13:44:54 crc kubenswrapper[4919]: I0109 13:44:54.659763 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-fwdhc" Jan 09 13:44:59 crc kubenswrapper[4919]: I0109 13:44:59.054615 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-wt2zf" Jan 09 13:44:59 crc kubenswrapper[4919]: I0109 13:44:59.163264 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-5bddd4b946-grs8k" Jan 09 13:45:00 crc kubenswrapper[4919]: I0109 13:45:00.164001 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29466105-fr8d5"] Jan 09 13:45:00 crc kubenswrapper[4919]: I0109 13:45:00.164788 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29466105-fr8d5" Jan 09 13:45:00 crc kubenswrapper[4919]: I0109 13:45:00.167479 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 09 13:45:00 crc kubenswrapper[4919]: I0109 13:45:00.167831 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/305e4023-ac44-4a22-ba43-2a2f67441647-config-volume\") pod \"collect-profiles-29466105-fr8d5\" (UID: \"305e4023-ac44-4a22-ba43-2a2f67441647\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466105-fr8d5" Jan 09 13:45:00 crc kubenswrapper[4919]: I0109 13:45:00.167868 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 09 13:45:00 crc kubenswrapper[4919]: I0109 13:45:00.167895 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f27wh\" (UniqueName: \"kubernetes.io/projected/305e4023-ac44-4a22-ba43-2a2f67441647-kube-api-access-f27wh\") pod \"collect-profiles-29466105-fr8d5\" (UID: \"305e4023-ac44-4a22-ba43-2a2f67441647\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466105-fr8d5" Jan 09 13:45:00 crc kubenswrapper[4919]: I0109 13:45:00.168067 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/305e4023-ac44-4a22-ba43-2a2f67441647-secret-volume\") pod \"collect-profiles-29466105-fr8d5\" (UID: \"305e4023-ac44-4a22-ba43-2a2f67441647\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466105-fr8d5" Jan 09 13:45:00 crc kubenswrapper[4919]: I0109 13:45:00.184361 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29466105-fr8d5"] Jan 09 13:45:00 crc kubenswrapper[4919]: I0109 13:45:00.269830 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/305e4023-ac44-4a22-ba43-2a2f67441647-secret-volume\") pod \"collect-profiles-29466105-fr8d5\" (UID: \"305e4023-ac44-4a22-ba43-2a2f67441647\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466105-fr8d5" Jan 09 13:45:00 crc kubenswrapper[4919]: I0109 13:45:00.270193 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/305e4023-ac44-4a22-ba43-2a2f67441647-config-volume\") pod \"collect-profiles-29466105-fr8d5\" (UID: \"305e4023-ac44-4a22-ba43-2a2f67441647\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466105-fr8d5" Jan 09 13:45:00 crc kubenswrapper[4919]: I0109 13:45:00.270326 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f27wh\" (UniqueName: \"kubernetes.io/projected/305e4023-ac44-4a22-ba43-2a2f67441647-kube-api-access-f27wh\") pod \"collect-profiles-29466105-fr8d5\" (UID: \"305e4023-ac44-4a22-ba43-2a2f67441647\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466105-fr8d5" Jan 09 13:45:00 crc kubenswrapper[4919]: I0109 13:45:00.271253 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/305e4023-ac44-4a22-ba43-2a2f67441647-config-volume\") pod 
\"collect-profiles-29466105-fr8d5\" (UID: \"305e4023-ac44-4a22-ba43-2a2f67441647\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466105-fr8d5" Jan 09 13:45:00 crc kubenswrapper[4919]: I0109 13:45:00.279267 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/305e4023-ac44-4a22-ba43-2a2f67441647-secret-volume\") pod \"collect-profiles-29466105-fr8d5\" (UID: \"305e4023-ac44-4a22-ba43-2a2f67441647\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466105-fr8d5" Jan 09 13:45:00 crc kubenswrapper[4919]: I0109 13:45:00.293986 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f27wh\" (UniqueName: \"kubernetes.io/projected/305e4023-ac44-4a22-ba43-2a2f67441647-kube-api-access-f27wh\") pod \"collect-profiles-29466105-fr8d5\" (UID: \"305e4023-ac44-4a22-ba43-2a2f67441647\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466105-fr8d5" Jan 09 13:45:00 crc kubenswrapper[4919]: I0109 13:45:00.486696 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29466105-fr8d5" Jan 09 13:45:00 crc kubenswrapper[4919]: I0109 13:45:00.904347 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29466105-fr8d5"] Jan 09 13:45:00 crc kubenswrapper[4919]: W0109 13:45:00.905713 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod305e4023_ac44_4a22_ba43_2a2f67441647.slice/crio-9f6b2d70d8ac5fde4074057dadfa7a9a1c4cc432f424dd4f1c0681e03fa724ad WatchSource:0}: Error finding container 9f6b2d70d8ac5fde4074057dadfa7a9a1c4cc432f424dd4f1c0681e03fa724ad: Status 404 returned error can't find the container with id 9f6b2d70d8ac5fde4074057dadfa7a9a1c4cc432f424dd4f1c0681e03fa724ad Jan 09 13:45:01 crc kubenswrapper[4919]: I0109 13:45:01.361665 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29466105-fr8d5" event={"ID":"305e4023-ac44-4a22-ba43-2a2f67441647","Type":"ContainerStarted","Data":"9f6b2d70d8ac5fde4074057dadfa7a9a1c4cc432f424dd4f1c0681e03fa724ad"} Jan 09 13:45:02 crc kubenswrapper[4919]: I0109 13:45:02.368465 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29466105-fr8d5" event={"ID":"305e4023-ac44-4a22-ba43-2a2f67441647","Type":"ContainerStarted","Data":"6f83f4828d0f79a2651d085732b4b5f0608bf0228c77173f6cac6cf323e4a36e"} Jan 09 13:45:02 crc kubenswrapper[4919]: I0109 13:45:02.387379 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29466105-fr8d5" podStartSLOduration=2.387363047 podStartE2EDuration="2.387363047s" podCreationTimestamp="2026-01-09 13:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:45:02.385579373 +0000 UTC m=+881.933418833" watchObservedRunningTime="2026-01-09 13:45:02.387363047 +0000 UTC m=+881.935202497" Jan 09 13:45:02 crc kubenswrapper[4919]: I0109 13:45:02.761762 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-6kcvb" Jan 09 13:45:03 crc kubenswrapper[4919]: I0109 13:45:03.375860 4919 generic.go:334] "Generic (PLEG): container finished" 
podID="305e4023-ac44-4a22-ba43-2a2f67441647" containerID="6f83f4828d0f79a2651d085732b4b5f0608bf0228c77173f6cac6cf323e4a36e" exitCode=0 Jan 09 13:45:03 crc kubenswrapper[4919]: I0109 13:45:03.375935 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29466105-fr8d5" event={"ID":"305e4023-ac44-4a22-ba43-2a2f67441647","Type":"ContainerDied","Data":"6f83f4828d0f79a2651d085732b4b5f0608bf0228c77173f6cac6cf323e4a36e"} Jan 09 13:45:04 crc kubenswrapper[4919]: I0109 13:45:04.653491 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29466105-fr8d5" Jan 09 13:45:04 crc kubenswrapper[4919]: I0109 13:45:04.832419 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/305e4023-ac44-4a22-ba43-2a2f67441647-config-volume\") pod \"305e4023-ac44-4a22-ba43-2a2f67441647\" (UID: \"305e4023-ac44-4a22-ba43-2a2f67441647\") " Jan 09 13:45:04 crc kubenswrapper[4919]: I0109 13:45:04.832781 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/305e4023-ac44-4a22-ba43-2a2f67441647-secret-volume\") pod \"305e4023-ac44-4a22-ba43-2a2f67441647\" (UID: \"305e4023-ac44-4a22-ba43-2a2f67441647\") " Jan 09 13:45:04 crc kubenswrapper[4919]: I0109 13:45:04.832809 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f27wh\" (UniqueName: \"kubernetes.io/projected/305e4023-ac44-4a22-ba43-2a2f67441647-kube-api-access-f27wh\") pod \"305e4023-ac44-4a22-ba43-2a2f67441647\" (UID: \"305e4023-ac44-4a22-ba43-2a2f67441647\") " Jan 09 13:45:04 crc kubenswrapper[4919]: I0109 13:45:04.833373 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/305e4023-ac44-4a22-ba43-2a2f67441647-config-volume" (OuterVolumeSpecName: "config-volume") pod "305e4023-ac44-4a22-ba43-2a2f67441647" (UID: "305e4023-ac44-4a22-ba43-2a2f67441647"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:45:04 crc kubenswrapper[4919]: I0109 13:45:04.837496 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/305e4023-ac44-4a22-ba43-2a2f67441647-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "305e4023-ac44-4a22-ba43-2a2f67441647" (UID: "305e4023-ac44-4a22-ba43-2a2f67441647"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:45:04 crc kubenswrapper[4919]: I0109 13:45:04.837637 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/305e4023-ac44-4a22-ba43-2a2f67441647-kube-api-access-f27wh" (OuterVolumeSpecName: "kube-api-access-f27wh") pod "305e4023-ac44-4a22-ba43-2a2f67441647" (UID: "305e4023-ac44-4a22-ba43-2a2f67441647"). InnerVolumeSpecName "kube-api-access-f27wh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:45:04 crc kubenswrapper[4919]: I0109 13:45:04.933893 4919 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/305e4023-ac44-4a22-ba43-2a2f67441647-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 09 13:45:04 crc kubenswrapper[4919]: I0109 13:45:04.933928 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f27wh\" (UniqueName: \"kubernetes.io/projected/305e4023-ac44-4a22-ba43-2a2f67441647-kube-api-access-f27wh\") on node \"crc\" DevicePath \"\"" Jan 09 13:45:04 crc kubenswrapper[4919]: I0109 13:45:04.933938 4919 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/305e4023-ac44-4a22-ba43-2a2f67441647-config-volume\") on node \"crc\" DevicePath \"\"" Jan 09 13:45:05 crc kubenswrapper[4919]: I0109 13:45:05.389937 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29466105-fr8d5" event={"ID":"305e4023-ac44-4a22-ba43-2a2f67441647","Type":"ContainerDied","Data":"9f6b2d70d8ac5fde4074057dadfa7a9a1c4cc432f424dd4f1c0681e03fa724ad"} Jan 09 13:45:05 crc kubenswrapper[4919]: I0109 13:45:05.389980 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f6b2d70d8ac5fde4074057dadfa7a9a1c4cc432f424dd4f1c0681e03fa724ad" Jan 09 13:45:05 crc kubenswrapper[4919]: I0109 13:45:05.390114 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29466105-fr8d5" Jan 09 13:45:05 crc kubenswrapper[4919]: I0109 13:45:05.964555 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-m2m9b"] Jan 09 13:45:05 crc kubenswrapper[4919]: E0109 13:45:05.964893 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="305e4023-ac44-4a22-ba43-2a2f67441647" containerName="collect-profiles" Jan 09 13:45:05 crc kubenswrapper[4919]: I0109 13:45:05.964913 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="305e4023-ac44-4a22-ba43-2a2f67441647" containerName="collect-profiles" Jan 09 13:45:05 crc kubenswrapper[4919]: I0109 13:45:05.965041 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="305e4023-ac44-4a22-ba43-2a2f67441647" containerName="collect-profiles" Jan 09 13:45:05 crc kubenswrapper[4919]: I0109 13:45:05.965572 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-m2m9b" Jan 09 13:45:05 crc kubenswrapper[4919]: I0109 13:45:05.974063 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 09 13:45:05 crc kubenswrapper[4919]: I0109 13:45:05.974536 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-cjwf9" Jan 09 13:45:05 crc kubenswrapper[4919]: I0109 13:45:05.977836 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 09 13:45:05 crc kubenswrapper[4919]: I0109 13:45:05.984394 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-m2m9b"] Jan 09 13:45:06 crc kubenswrapper[4919]: I0109 13:45:06.048098 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wmr8\" (UniqueName: \"kubernetes.io/projected/d1d17794-0bd7-4bf5-bcfa-b7909346182f-kube-api-access-9wmr8\") pod \"openstack-operator-index-m2m9b\" (UID: \"d1d17794-0bd7-4bf5-bcfa-b7909346182f\") " pod="openstack-operators/openstack-operator-index-m2m9b" Jan 09 13:45:06 crc kubenswrapper[4919]: I0109 13:45:06.148889 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wmr8\" (UniqueName: \"kubernetes.io/projected/d1d17794-0bd7-4bf5-bcfa-b7909346182f-kube-api-access-9wmr8\") pod \"openstack-operator-index-m2m9b\" (UID: \"d1d17794-0bd7-4bf5-bcfa-b7909346182f\") " pod="openstack-operators/openstack-operator-index-m2m9b" Jan 09 13:45:06 crc kubenswrapper[4919]: I0109 13:45:06.165484 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wmr8\" (UniqueName: \"kubernetes.io/projected/d1d17794-0bd7-4bf5-bcfa-b7909346182f-kube-api-access-9wmr8\") pod \"openstack-operator-index-m2m9b\" (UID: \"d1d17794-0bd7-4bf5-bcfa-b7909346182f\") " pod="openstack-operators/openstack-operator-index-m2m9b" Jan 09 13:45:06 crc kubenswrapper[4919]: I0109 13:45:06.286693 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-m2m9b" Jan 09 13:45:06 crc kubenswrapper[4919]: I0109 13:45:06.728331 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-m2m9b"] Jan 09 13:45:06 crc kubenswrapper[4919]: W0109 13:45:06.731130 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1d17794_0bd7_4bf5_bcfa_b7909346182f.slice/crio-3bff59acd9a44787a44c4a818fa3ecdc362f3997c8c0df5e574d4dddf39281eb WatchSource:0}: Error finding container 3bff59acd9a44787a44c4a818fa3ecdc362f3997c8c0df5e574d4dddf39281eb: Status 404 returned error can't find the container with id 3bff59acd9a44787a44c4a818fa3ecdc362f3997c8c0df5e574d4dddf39281eb Jan 09 13:45:07 crc kubenswrapper[4919]: I0109 13:45:07.408304 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-m2m9b" event={"ID":"d1d17794-0bd7-4bf5-bcfa-b7909346182f","Type":"ContainerStarted","Data":"3bff59acd9a44787a44c4a818fa3ecdc362f3997c8c0df5e574d4dddf39281eb"} Jan 09 13:45:09 crc kubenswrapper[4919]: I0109 13:45:09.419241 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-m2m9b" event={"ID":"d1d17794-0bd7-4bf5-bcfa-b7909346182f","Type":"ContainerStarted","Data":"a475bbf68039b743ea02d8993ca7ad12384e840c7aca226a96860b13ba1b128d"} Jan 09 13:45:09 crc kubenswrapper[4919]: I0109 13:45:09.437330 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-m2m9b" podStartSLOduration=2.737866446 podStartE2EDuration="4.437309087s" podCreationTimestamp="2026-01-09 13:45:05 +0000 UTC" firstStartedPulling="2026-01-09 13:45:06.734542194 +0000 UTC m=+886.282381644" lastFinishedPulling="2026-01-09 13:45:08.433984835 +0000 UTC m=+887.981824285" observedRunningTime="2026-01-09 13:45:09.433694238 +0000 UTC m=+888.981533688" watchObservedRunningTime="2026-01-09 13:45:09.437309087 +0000 UTC m=+888.985148547" Jan 09 13:45:09 crc kubenswrapper[4919]: I0109 13:45:09.624529 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-fwdhc" Jan 09 13:45:10 crc kubenswrapper[4919]: I0109 13:45:10.335905 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-m2m9b"] Jan 09 13:45:10 crc kubenswrapper[4919]: I0109 13:45:10.937935 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-6rw4t"] Jan 09 13:45:10 crc kubenswrapper[4919]: I0109 13:45:10.938930 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-6rw4t" Jan 09 13:45:10 crc kubenswrapper[4919]: I0109 13:45:10.949349 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-6rw4t"] Jan 09 13:45:11 crc kubenswrapper[4919]: I0109 13:45:11.115846 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msz75\" (UniqueName: \"kubernetes.io/projected/937fd694-383a-4377-a061-2c3711482e98-kube-api-access-msz75\") pod \"openstack-operator-index-6rw4t\" (UID: \"937fd694-383a-4377-a061-2c3711482e98\") " pod="openstack-operators/openstack-operator-index-6rw4t" Jan 09 13:45:11 crc kubenswrapper[4919]: I0109 13:45:11.216987 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msz75\" (UniqueName: \"kubernetes.io/projected/937fd694-383a-4377-a061-2c3711482e98-kube-api-access-msz75\") pod \"openstack-operator-index-6rw4t\" (UID: \"937fd694-383a-4377-a061-2c3711482e98\") " pod="openstack-operators/openstack-operator-index-6rw4t" Jan 09 13:45:11 crc kubenswrapper[4919]: I0109 13:45:11.238415 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msz75\" (UniqueName: \"kubernetes.io/projected/937fd694-383a-4377-a061-2c3711482e98-kube-api-access-msz75\") pod \"openstack-operator-index-6rw4t\" (UID: \"937fd694-383a-4377-a061-2c3711482e98\") " pod="openstack-operators/openstack-operator-index-6rw4t" Jan 09 13:45:11 crc kubenswrapper[4919]: I0109 13:45:11.267043 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-6rw4t" Jan 09 13:45:11 crc kubenswrapper[4919]: I0109 13:45:11.429411 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-m2m9b" podUID="d1d17794-0bd7-4bf5-bcfa-b7909346182f" containerName="registry-server" containerID="cri-o://a475bbf68039b743ea02d8993ca7ad12384e840c7aca226a96860b13ba1b128d" gracePeriod=2 Jan 09 13:45:11 crc kubenswrapper[4919]: E0109 13:45:11.549749 4919 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1d17794_0bd7_4bf5_bcfa_b7909346182f.slice/crio-a475bbf68039b743ea02d8993ca7ad12384e840c7aca226a96860b13ba1b128d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1d17794_0bd7_4bf5_bcfa_b7909346182f.slice/crio-conmon-a475bbf68039b743ea02d8993ca7ad12384e840c7aca226a96860b13ba1b128d.scope\": RecentStats: unable to find data in memory cache]" Jan 09 13:45:11 crc kubenswrapper[4919]: I0109 13:45:11.666610 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-6rw4t"] Jan 09 13:45:11 crc kubenswrapper[4919]: W0109 13:45:11.671385 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod937fd694_383a_4377_a061_2c3711482e98.slice/crio-f6360d28c145eb8a37a675b030ec15d34ca51a59996ff6ad952e301ce8c0954c WatchSource:0}: Error finding container f6360d28c145eb8a37a675b030ec15d34ca51a59996ff6ad952e301ce8c0954c: Status 404 returned error can't find the container with id f6360d28c145eb8a37a675b030ec15d34ca51a59996ff6ad952e301ce8c0954c Jan 09 13:45:11 crc kubenswrapper[4919]: I0109 13:45:11.760126 4919 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-m2m9b" Jan 09 13:45:11 crc kubenswrapper[4919]: I0109 13:45:11.824162 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wmr8\" (UniqueName: \"kubernetes.io/projected/d1d17794-0bd7-4bf5-bcfa-b7909346182f-kube-api-access-9wmr8\") pod \"d1d17794-0bd7-4bf5-bcfa-b7909346182f\" (UID: \"d1d17794-0bd7-4bf5-bcfa-b7909346182f\") " Jan 09 13:45:11 crc kubenswrapper[4919]: I0109 13:45:11.828558 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1d17794-0bd7-4bf5-bcfa-b7909346182f-kube-api-access-9wmr8" (OuterVolumeSpecName: "kube-api-access-9wmr8") pod "d1d17794-0bd7-4bf5-bcfa-b7909346182f" (UID: "d1d17794-0bd7-4bf5-bcfa-b7909346182f"). InnerVolumeSpecName "kube-api-access-9wmr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:45:11 crc kubenswrapper[4919]: I0109 13:45:11.925517 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wmr8\" (UniqueName: \"kubernetes.io/projected/d1d17794-0bd7-4bf5-bcfa-b7909346182f-kube-api-access-9wmr8\") on node \"crc\" DevicePath \"\"" Jan 09 13:45:12 crc kubenswrapper[4919]: I0109 13:45:12.436020 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6rw4t" event={"ID":"937fd694-383a-4377-a061-2c3711482e98","Type":"ContainerStarted","Data":"4dcbe85b9fb5e51366d3a2cebb3a5534954634943dd597e485ac1585dde409f2"} Jan 09 13:45:12 crc kubenswrapper[4919]: I0109 13:45:12.436063 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6rw4t" event={"ID":"937fd694-383a-4377-a061-2c3711482e98","Type":"ContainerStarted","Data":"f6360d28c145eb8a37a675b030ec15d34ca51a59996ff6ad952e301ce8c0954c"} Jan 09 13:45:12 crc kubenswrapper[4919]: I0109 13:45:12.439484 4919 generic.go:334] "Generic (PLEG): container finished" podID="d1d17794-0bd7-4bf5-bcfa-b7909346182f" containerID="a475bbf68039b743ea02d8993ca7ad12384e840c7aca226a96860b13ba1b128d" exitCode=0 Jan 09 13:45:12 crc kubenswrapper[4919]: I0109 13:45:12.439524 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-m2m9b" Jan 09 13:45:12 crc kubenswrapper[4919]: I0109 13:45:12.439532 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-m2m9b" event={"ID":"d1d17794-0bd7-4bf5-bcfa-b7909346182f","Type":"ContainerDied","Data":"a475bbf68039b743ea02d8993ca7ad12384e840c7aca226a96860b13ba1b128d"} Jan 09 13:45:12 crc kubenswrapper[4919]: I0109 13:45:12.439702 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-m2m9b" event={"ID":"d1d17794-0bd7-4bf5-bcfa-b7909346182f","Type":"ContainerDied","Data":"3bff59acd9a44787a44c4a818fa3ecdc362f3997c8c0df5e574d4dddf39281eb"} Jan 09 13:45:12 crc kubenswrapper[4919]: I0109 13:45:12.439727 4919 scope.go:117] "RemoveContainer" containerID="a475bbf68039b743ea02d8993ca7ad12384e840c7aca226a96860b13ba1b128d" Jan 09 13:45:12 crc kubenswrapper[4919]: I0109 13:45:12.450749 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-6rw4t" podStartSLOduration=1.989566344 podStartE2EDuration="2.450730768s" podCreationTimestamp="2026-01-09 13:45:10 +0000 UTC" firstStartedPulling="2026-01-09 13:45:11.688689975 +0000 UTC m=+891.236529425" lastFinishedPulling="2026-01-09 13:45:12.149854389 +0000 UTC m=+891.697693849" observedRunningTime="2026-01-09 13:45:12.449858297 +0000 UTC m=+891.997697747" watchObservedRunningTime="2026-01-09 13:45:12.450730768 +0000 UTC m=+891.998570218" Jan 09 13:45:12 crc kubenswrapper[4919]: I0109 13:45:12.461621 4919 scope.go:117] "RemoveContainer" containerID="a475bbf68039b743ea02d8993ca7ad12384e840c7aca226a96860b13ba1b128d" Jan 09 13:45:12 crc kubenswrapper[4919]: E0109 13:45:12.461991 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a475bbf68039b743ea02d8993ca7ad12384e840c7aca226a96860b13ba1b128d\": container with ID starting with a475bbf68039b743ea02d8993ca7ad12384e840c7aca226a96860b13ba1b128d not found: ID does not exist" containerID="a475bbf68039b743ea02d8993ca7ad12384e840c7aca226a96860b13ba1b128d" Jan 09 13:45:12 crc kubenswrapper[4919]: I0109 13:45:12.462018 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a475bbf68039b743ea02d8993ca7ad12384e840c7aca226a96860b13ba1b128d"} err="failed to get container status \"a475bbf68039b743ea02d8993ca7ad12384e840c7aca226a96860b13ba1b128d\": rpc error: code = NotFound desc = could not find container \"a475bbf68039b743ea02d8993ca7ad12384e840c7aca226a96860b13ba1b128d\": container with ID starting with a475bbf68039b743ea02d8993ca7ad12384e840c7aca226a96860b13ba1b128d not found: ID does not exist" Jan 09 13:45:12 crc kubenswrapper[4919]: I0109 13:45:12.476572 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-m2m9b"] Jan 09 13:45:12 crc kubenswrapper[4919]: I0109 13:45:12.480976 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-m2m9b"] Jan 09 13:45:12 crc kubenswrapper[4919]: I0109 13:45:12.763508 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1d17794-0bd7-4bf5-bcfa-b7909346182f" path="/var/lib/kubelet/pods/d1d17794-0bd7-4bf5-bcfa-b7909346182f/volumes" Jan 09 13:45:21 crc kubenswrapper[4919]: I0109 13:45:21.267480 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack-operators/openstack-operator-index-6rw4t" Jan 09 13:45:21 crc kubenswrapper[4919]: I0109 13:45:21.268054 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-6rw4t" Jan 09 13:45:21 crc kubenswrapper[4919]: I0109 13:45:21.296080 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-6rw4t" Jan 09 13:45:21 crc kubenswrapper[4919]: I0109 13:45:21.523096 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-6rw4t" Jan 09 13:45:26 crc kubenswrapper[4919]: I0109 13:45:26.299249 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/6b8e21ee89ba424461ec6bbff87d7a975c12e9376e36d6291f2db7c043jx86n"] Jan 09 13:45:26 crc kubenswrapper[4919]: E0109 13:45:26.299753 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1d17794-0bd7-4bf5-bcfa-b7909346182f" containerName="registry-server" Jan 09 13:45:26 crc kubenswrapper[4919]: I0109 13:45:26.299765 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1d17794-0bd7-4bf5-bcfa-b7909346182f" containerName="registry-server" Jan 09 13:45:26 crc kubenswrapper[4919]: I0109 13:45:26.299877 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1d17794-0bd7-4bf5-bcfa-b7909346182f" containerName="registry-server" Jan 09 13:45:26 crc kubenswrapper[4919]: I0109 13:45:26.300703 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/6b8e21ee89ba424461ec6bbff87d7a975c12e9376e36d6291f2db7c043jx86n" Jan 09 13:45:26 crc kubenswrapper[4919]: I0109 13:45:26.302585 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-7tdc8" Jan 09 13:45:26 crc kubenswrapper[4919]: I0109 13:45:26.312101 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/6b8e21ee89ba424461ec6bbff87d7a975c12e9376e36d6291f2db7c043jx86n"] Jan 09 13:45:26 crc kubenswrapper[4919]: I0109 13:45:26.332069 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b32f9373-7a38-42ed-8071-92865685e246-bundle\") pod \"6b8e21ee89ba424461ec6bbff87d7a975c12e9376e36d6291f2db7c043jx86n\" (UID: \"b32f9373-7a38-42ed-8071-92865685e246\") " pod="openstack-operators/6b8e21ee89ba424461ec6bbff87d7a975c12e9376e36d6291f2db7c043jx86n" Jan 09 13:45:26 crc kubenswrapper[4919]: I0109 13:45:26.332115 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b32f9373-7a38-42ed-8071-92865685e246-util\") pod \"6b8e21ee89ba424461ec6bbff87d7a975c12e9376e36d6291f2db7c043jx86n\" (UID: \"b32f9373-7a38-42ed-8071-92865685e246\") " pod="openstack-operators/6b8e21ee89ba424461ec6bbff87d7a975c12e9376e36d6291f2db7c043jx86n" Jan 09 13:45:26 crc kubenswrapper[4919]: I0109 13:45:26.332255 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kktjl\" (UniqueName: \"kubernetes.io/projected/b32f9373-7a38-42ed-8071-92865685e246-kube-api-access-kktjl\") pod \"6b8e21ee89ba424461ec6bbff87d7a975c12e9376e36d6291f2db7c043jx86n\" (UID: \"b32f9373-7a38-42ed-8071-92865685e246\") " pod="openstack-operators/6b8e21ee89ba424461ec6bbff87d7a975c12e9376e36d6291f2db7c043jx86n" Jan 09 13:45:26 crc kubenswrapper[4919]: I0109 
13:45:26.433758 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b32f9373-7a38-42ed-8071-92865685e246-bundle\") pod \"6b8e21ee89ba424461ec6bbff87d7a975c12e9376e36d6291f2db7c043jx86n\" (UID: \"b32f9373-7a38-42ed-8071-92865685e246\") " pod="openstack-operators/6b8e21ee89ba424461ec6bbff87d7a975c12e9376e36d6291f2db7c043jx86n" Jan 09 13:45:26 crc kubenswrapper[4919]: I0109 13:45:26.433801 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b32f9373-7a38-42ed-8071-92865685e246-util\") pod \"6b8e21ee89ba424461ec6bbff87d7a975c12e9376e36d6291f2db7c043jx86n\" (UID: \"b32f9373-7a38-42ed-8071-92865685e246\") " pod="openstack-operators/6b8e21ee89ba424461ec6bbff87d7a975c12e9376e36d6291f2db7c043jx86n" Jan 09 13:45:26 crc kubenswrapper[4919]: I0109 13:45:26.433857 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kktjl\" (UniqueName: \"kubernetes.io/projected/b32f9373-7a38-42ed-8071-92865685e246-kube-api-access-kktjl\") pod \"6b8e21ee89ba424461ec6bbff87d7a975c12e9376e36d6291f2db7c043jx86n\" (UID: \"b32f9373-7a38-42ed-8071-92865685e246\") " pod="openstack-operators/6b8e21ee89ba424461ec6bbff87d7a975c12e9376e36d6291f2db7c043jx86n" Jan 09 13:45:26 crc kubenswrapper[4919]: I0109 13:45:26.434251 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b32f9373-7a38-42ed-8071-92865685e246-bundle\") pod \"6b8e21ee89ba424461ec6bbff87d7a975c12e9376e36d6291f2db7c043jx86n\" (UID: \"b32f9373-7a38-42ed-8071-92865685e246\") " pod="openstack-operators/6b8e21ee89ba424461ec6bbff87d7a975c12e9376e36d6291f2db7c043jx86n" Jan 09 13:45:26 crc kubenswrapper[4919]: I0109 13:45:26.434299 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b32f9373-7a38-42ed-8071-92865685e246-util\") pod \"6b8e21ee89ba424461ec6bbff87d7a975c12e9376e36d6291f2db7c043jx86n\" (UID: \"b32f9373-7a38-42ed-8071-92865685e246\") " pod="openstack-operators/6b8e21ee89ba424461ec6bbff87d7a975c12e9376e36d6291f2db7c043jx86n" Jan 09 13:45:26 crc kubenswrapper[4919]: I0109 13:45:26.453092 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kktjl\" (UniqueName: \"kubernetes.io/projected/b32f9373-7a38-42ed-8071-92865685e246-kube-api-access-kktjl\") pod \"6b8e21ee89ba424461ec6bbff87d7a975c12e9376e36d6291f2db7c043jx86n\" (UID: \"b32f9373-7a38-42ed-8071-92865685e246\") " pod="openstack-operators/6b8e21ee89ba424461ec6bbff87d7a975c12e9376e36d6291f2db7c043jx86n" Jan 09 13:45:26 crc kubenswrapper[4919]: I0109 13:45:26.618383 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/6b8e21ee89ba424461ec6bbff87d7a975c12e9376e36d6291f2db7c043jx86n" Jan 09 13:45:27 crc kubenswrapper[4919]: I0109 13:45:27.099842 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/6b8e21ee89ba424461ec6bbff87d7a975c12e9376e36d6291f2db7c043jx86n"] Jan 09 13:45:27 crc kubenswrapper[4919]: W0109 13:45:27.102324 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb32f9373_7a38_42ed_8071_92865685e246.slice/crio-75328b0ef1e22dcc6409ae15d5a5d2b9c365853f79d9d0c8f8a3aec7adc39ac6 WatchSource:0}: Error finding container 75328b0ef1e22dcc6409ae15d5a5d2b9c365853f79d9d0c8f8a3aec7adc39ac6: Status 404 returned error can't find the container with id 75328b0ef1e22dcc6409ae15d5a5d2b9c365853f79d9d0c8f8a3aec7adc39ac6 Jan 09 13:45:27 crc kubenswrapper[4919]: I0109 13:45:27.529580 4919 generic.go:334] "Generic (PLEG): container finished" podID="b32f9373-7a38-42ed-8071-92865685e246" containerID="bee5adada4e6a6a3edc48836dcf5a35a4706051ff7ce04ece675180d099b811d" exitCode=0 Jan 09 13:45:27 crc kubenswrapper[4919]: I0109 13:45:27.529627 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6b8e21ee89ba424461ec6bbff87d7a975c12e9376e36d6291f2db7c043jx86n" event={"ID":"b32f9373-7a38-42ed-8071-92865685e246","Type":"ContainerDied","Data":"bee5adada4e6a6a3edc48836dcf5a35a4706051ff7ce04ece675180d099b811d"} Jan 09 13:45:27 crc kubenswrapper[4919]: I0109 13:45:27.529651 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6b8e21ee89ba424461ec6bbff87d7a975c12e9376e36d6291f2db7c043jx86n" event={"ID":"b32f9373-7a38-42ed-8071-92865685e246","Type":"ContainerStarted","Data":"75328b0ef1e22dcc6409ae15d5a5d2b9c365853f79d9d0c8f8a3aec7adc39ac6"} Jan 09 13:45:29 crc kubenswrapper[4919]: I0109 13:45:29.548321 4919 generic.go:334] "Generic (PLEG): container finished" podID="b32f9373-7a38-42ed-8071-92865685e246" containerID="4287ca8916f2f50051b26e49266021717f32d76f96d39669a7e2c321014928a4" exitCode=0 Jan 09 13:45:29 crc kubenswrapper[4919]: I0109 13:45:29.548507 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6b8e21ee89ba424461ec6bbff87d7a975c12e9376e36d6291f2db7c043jx86n" event={"ID":"b32f9373-7a38-42ed-8071-92865685e246","Type":"ContainerDied","Data":"4287ca8916f2f50051b26e49266021717f32d76f96d39669a7e2c321014928a4"} Jan 09 13:45:30 crc kubenswrapper[4919]: I0109 13:45:30.556823 4919 generic.go:334] "Generic (PLEG): container finished" podID="b32f9373-7a38-42ed-8071-92865685e246" containerID="f9a39cc631868cd679f534937085e7c2bd740f52bdb8f81adaa58b8487008592" exitCode=0 Jan 09 13:45:30 crc kubenswrapper[4919]: I0109 13:45:30.556951 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6b8e21ee89ba424461ec6bbff87d7a975c12e9376e36d6291f2db7c043jx86n" event={"ID":"b32f9373-7a38-42ed-8071-92865685e246","Type":"ContainerDied","Data":"f9a39cc631868cd679f534937085e7c2bd740f52bdb8f81adaa58b8487008592"} Jan 09 13:45:31 crc kubenswrapper[4919]: I0109 13:45:31.854424 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/6b8e21ee89ba424461ec6bbff87d7a975c12e9376e36d6291f2db7c043jx86n" Jan 09 13:45:32 crc kubenswrapper[4919]: I0109 13:45:32.039858 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b32f9373-7a38-42ed-8071-92865685e246-util\") pod \"b32f9373-7a38-42ed-8071-92865685e246\" (UID: \"b32f9373-7a38-42ed-8071-92865685e246\") " Jan 09 13:45:32 crc kubenswrapper[4919]: I0109 13:45:32.040021 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b32f9373-7a38-42ed-8071-92865685e246-bundle\") pod \"b32f9373-7a38-42ed-8071-92865685e246\" (UID: \"b32f9373-7a38-42ed-8071-92865685e246\") " Jan 09 13:45:32 crc kubenswrapper[4919]: I0109 13:45:32.040074 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kktjl\" (UniqueName: \"kubernetes.io/projected/b32f9373-7a38-42ed-8071-92865685e246-kube-api-access-kktjl\") pod \"b32f9373-7a38-42ed-8071-92865685e246\" (UID: \"b32f9373-7a38-42ed-8071-92865685e246\") " Jan 09 13:45:32 crc kubenswrapper[4919]: I0109 13:45:32.040590 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b32f9373-7a38-42ed-8071-92865685e246-bundle" (OuterVolumeSpecName: "bundle") pod "b32f9373-7a38-42ed-8071-92865685e246" (UID: "b32f9373-7a38-42ed-8071-92865685e246"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:45:32 crc kubenswrapper[4919]: I0109 13:45:32.053815 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b32f9373-7a38-42ed-8071-92865685e246-kube-api-access-kktjl" (OuterVolumeSpecName: "kube-api-access-kktjl") pod "b32f9373-7a38-42ed-8071-92865685e246" (UID: "b32f9373-7a38-42ed-8071-92865685e246"). InnerVolumeSpecName "kube-api-access-kktjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:45:32 crc kubenswrapper[4919]: I0109 13:45:32.058952 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b32f9373-7a38-42ed-8071-92865685e246-util" (OuterVolumeSpecName: "util") pod "b32f9373-7a38-42ed-8071-92865685e246" (UID: "b32f9373-7a38-42ed-8071-92865685e246"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:45:32 crc kubenswrapper[4919]: I0109 13:45:32.141782 4919 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b32f9373-7a38-42ed-8071-92865685e246-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 13:45:32 crc kubenswrapper[4919]: I0109 13:45:32.141813 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kktjl\" (UniqueName: \"kubernetes.io/projected/b32f9373-7a38-42ed-8071-92865685e246-kube-api-access-kktjl\") on node \"crc\" DevicePath \"\"" Jan 09 13:45:32 crc kubenswrapper[4919]: I0109 13:45:32.141823 4919 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b32f9373-7a38-42ed-8071-92865685e246-util\") on node \"crc\" DevicePath \"\"" Jan 09 13:45:32 crc kubenswrapper[4919]: I0109 13:45:32.571479 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6b8e21ee89ba424461ec6bbff87d7a975c12e9376e36d6291f2db7c043jx86n" event={"ID":"b32f9373-7a38-42ed-8071-92865685e246","Type":"ContainerDied","Data":"75328b0ef1e22dcc6409ae15d5a5d2b9c365853f79d9d0c8f8a3aec7adc39ac6"} Jan 09 13:45:32 crc kubenswrapper[4919]: I0109 13:45:32.571743 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75328b0ef1e22dcc6409ae15d5a5d2b9c365853f79d9d0c8f8a3aec7adc39ac6" Jan 09 13:45:32 crc kubenswrapper[4919]: I0109 13:45:32.571658 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/6b8e21ee89ba424461ec6bbff87d7a975c12e9376e36d6291f2db7c043jx86n" Jan 09 13:45:38 crc kubenswrapper[4919]: I0109 13:45:38.465360 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-6954755664-nmm8h"] Jan 09 13:45:38 crc kubenswrapper[4919]: E0109 13:45:38.466070 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b32f9373-7a38-42ed-8071-92865685e246" containerName="util" Jan 09 13:45:38 crc kubenswrapper[4919]: I0109 13:45:38.466091 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="b32f9373-7a38-42ed-8071-92865685e246" containerName="util" Jan 09 13:45:38 crc kubenswrapper[4919]: E0109 13:45:38.466120 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b32f9373-7a38-42ed-8071-92865685e246" containerName="extract" Jan 09 13:45:38 crc kubenswrapper[4919]: I0109 13:45:38.466132 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="b32f9373-7a38-42ed-8071-92865685e246" containerName="extract" Jan 09 13:45:38 crc kubenswrapper[4919]: E0109 13:45:38.466148 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b32f9373-7a38-42ed-8071-92865685e246" containerName="pull" Jan 09 13:45:38 crc kubenswrapper[4919]: I0109 13:45:38.466161 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="b32f9373-7a38-42ed-8071-92865685e246" containerName="pull" Jan 09 13:45:38 crc kubenswrapper[4919]: I0109 13:45:38.466415 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="b32f9373-7a38-42ed-8071-92865685e246" containerName="extract" Jan 09 13:45:38 crc kubenswrapper[4919]: I0109 13:45:38.467262 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-6954755664-nmm8h" Jan 09 13:45:38 crc kubenswrapper[4919]: I0109 13:45:38.469843 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-5mzw6" Jan 09 13:45:38 crc kubenswrapper[4919]: I0109 13:45:38.489163 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-6954755664-nmm8h"] Jan 09 13:45:38 crc kubenswrapper[4919]: I0109 13:45:38.572754 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf7l8\" (UniqueName: \"kubernetes.io/projected/2ebbd42e-c3b8-4e1c-b4ee-bf9316669667-kube-api-access-mf7l8\") pod \"openstack-operator-controller-operator-6954755664-nmm8h\" (UID: \"2ebbd42e-c3b8-4e1c-b4ee-bf9316669667\") " pod="openstack-operators/openstack-operator-controller-operator-6954755664-nmm8h" Jan 09 13:45:38 crc kubenswrapper[4919]: I0109 13:45:38.673877 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mf7l8\" (UniqueName: \"kubernetes.io/projected/2ebbd42e-c3b8-4e1c-b4ee-bf9316669667-kube-api-access-mf7l8\") pod \"openstack-operator-controller-operator-6954755664-nmm8h\" (UID: \"2ebbd42e-c3b8-4e1c-b4ee-bf9316669667\") " pod="openstack-operators/openstack-operator-controller-operator-6954755664-nmm8h" Jan 09 13:45:38 crc kubenswrapper[4919]: I0109 13:45:38.691673 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mf7l8\" (UniqueName: \"kubernetes.io/projected/2ebbd42e-c3b8-4e1c-b4ee-bf9316669667-kube-api-access-mf7l8\") pod \"openstack-operator-controller-operator-6954755664-nmm8h\" (UID: \"2ebbd42e-c3b8-4e1c-b4ee-bf9316669667\") " pod="openstack-operators/openstack-operator-controller-operator-6954755664-nmm8h" Jan 09 13:45:38 crc kubenswrapper[4919]: I0109 13:45:38.792634 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-6954755664-nmm8h" Jan 09 13:45:39 crc kubenswrapper[4919]: I0109 13:45:39.055596 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-6954755664-nmm8h"] Jan 09 13:45:39 crc kubenswrapper[4919]: W0109 13:45:39.061496 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ebbd42e_c3b8_4e1c_b4ee_bf9316669667.slice/crio-ea88c38e88a5bba76629901dc3361a2c6c4714be61453b16e61f4be37f17dcda WatchSource:0}: Error finding container ea88c38e88a5bba76629901dc3361a2c6c4714be61453b16e61f4be37f17dcda: Status 404 returned error can't find the container with id ea88c38e88a5bba76629901dc3361a2c6c4714be61453b16e61f4be37f17dcda Jan 09 13:45:39 crc kubenswrapper[4919]: I0109 13:45:39.609924 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-6954755664-nmm8h" event={"ID":"2ebbd42e-c3b8-4e1c-b4ee-bf9316669667","Type":"ContainerStarted","Data":"ea88c38e88a5bba76629901dc3361a2c6c4714be61453b16e61f4be37f17dcda"} Jan 09 13:45:43 crc kubenswrapper[4919]: I0109 13:45:43.562665 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7mntl"] Jan 09 13:45:43 crc kubenswrapper[4919]: I0109 13:45:43.565136 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7mntl" Jan 09 13:45:43 crc kubenswrapper[4919]: I0109 13:45:43.583675 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7mntl"] Jan 09 13:45:43 crc kubenswrapper[4919]: I0109 13:45:43.653786 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnt5h\" (UniqueName: \"kubernetes.io/projected/970261c2-d536-4bd0-b290-b6e18124c036-kube-api-access-nnt5h\") pod \"certified-operators-7mntl\" (UID: \"970261c2-d536-4bd0-b290-b6e18124c036\") " pod="openshift-marketplace/certified-operators-7mntl" Jan 09 13:45:43 crc kubenswrapper[4919]: I0109 13:45:43.654092 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/970261c2-d536-4bd0-b290-b6e18124c036-catalog-content\") pod \"certified-operators-7mntl\" (UID: \"970261c2-d536-4bd0-b290-b6e18124c036\") " pod="openshift-marketplace/certified-operators-7mntl" Jan 09 13:45:43 crc kubenswrapper[4919]: I0109 13:45:43.654133 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/970261c2-d536-4bd0-b290-b6e18124c036-utilities\") pod \"certified-operators-7mntl\" (UID: \"970261c2-d536-4bd0-b290-b6e18124c036\") " pod="openshift-marketplace/certified-operators-7mntl" Jan 09 13:45:43 crc kubenswrapper[4919]: I0109 13:45:43.756478 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnt5h\" (UniqueName: \"kubernetes.io/projected/970261c2-d536-4bd0-b290-b6e18124c036-kube-api-access-nnt5h\") pod \"certified-operators-7mntl\" (UID: \"970261c2-d536-4bd0-b290-b6e18124c036\") " pod="openshift-marketplace/certified-operators-7mntl" Jan 09 13:45:43 crc kubenswrapper[4919]: I0109 13:45:43.756528 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/970261c2-d536-4bd0-b290-b6e18124c036-catalog-content\") pod \"certified-operators-7mntl\" (UID: \"970261c2-d536-4bd0-b290-b6e18124c036\") " pod="openshift-marketplace/certified-operators-7mntl" Jan 09 13:45:43 crc kubenswrapper[4919]: I0109 13:45:43.756553 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/970261c2-d536-4bd0-b290-b6e18124c036-utilities\") pod \"certified-operators-7mntl\" (UID: \"970261c2-d536-4bd0-b290-b6e18124c036\") " pod="openshift-marketplace/certified-operators-7mntl" Jan 09 13:45:43 crc kubenswrapper[4919]: I0109 13:45:43.757023 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/970261c2-d536-4bd0-b290-b6e18124c036-catalog-content\") pod \"certified-operators-7mntl\" (UID: \"970261c2-d536-4bd0-b290-b6e18124c036\") " pod="openshift-marketplace/certified-operators-7mntl" Jan 09 13:45:43 crc kubenswrapper[4919]: I0109 13:45:43.757109 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/970261c2-d536-4bd0-b290-b6e18124c036-utilities\") pod \"certified-operators-7mntl\" (UID: \"970261c2-d536-4bd0-b290-b6e18124c036\") " pod="openshift-marketplace/certified-operators-7mntl" Jan 09 13:45:43 crc kubenswrapper[4919]: I0109 13:45:43.784570 4919 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-nnt5h\" (UniqueName: \"kubernetes.io/projected/970261c2-d536-4bd0-b290-b6e18124c036-kube-api-access-nnt5h\") pod \"certified-operators-7mntl\" (UID: \"970261c2-d536-4bd0-b290-b6e18124c036\") " pod="openshift-marketplace/certified-operators-7mntl" Jan 09 13:45:43 crc kubenswrapper[4919]: I0109 13:45:43.889499 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7mntl" Jan 09 13:45:50 crc kubenswrapper[4919]: I0109 13:45:50.744850 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7mntl"] Jan 09 13:45:50 crc kubenswrapper[4919]: W0109 13:45:50.757929 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod970261c2_d536_4bd0_b290_b6e18124c036.slice/crio-8f8301cbc61e30bb801011f0618fb0612152fe5d26590aa655296cf3fb5fb967 WatchSource:0}: Error finding container 8f8301cbc61e30bb801011f0618fb0612152fe5d26590aa655296cf3fb5fb967: Status 404 returned error can't find the container with id 8f8301cbc61e30bb801011f0618fb0612152fe5d26590aa655296cf3fb5fb967 Jan 09 13:45:51 crc kubenswrapper[4919]: I0109 13:45:51.246872 4919 patch_prober.go:28] interesting pod/machine-config-daemon-9m5lv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 13:45:51 crc kubenswrapper[4919]: I0109 13:45:51.246932 4919 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 13:45:51 crc kubenswrapper[4919]: I0109 13:45:51.696814 4919 generic.go:334] "Generic (PLEG): container finished" podID="970261c2-d536-4bd0-b290-b6e18124c036" containerID="630acdd06024b9ce6b6e11cdfade656d0c4a48f05d761efc339386bdc368e5ad" exitCode=0 Jan 09 13:45:51 crc kubenswrapper[4919]: I0109 13:45:51.697011 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7mntl" event={"ID":"970261c2-d536-4bd0-b290-b6e18124c036","Type":"ContainerDied","Data":"630acdd06024b9ce6b6e11cdfade656d0c4a48f05d761efc339386bdc368e5ad"} Jan 09 13:45:51 crc kubenswrapper[4919]: I0109 13:45:51.697072 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7mntl" event={"ID":"970261c2-d536-4bd0-b290-b6e18124c036","Type":"ContainerStarted","Data":"8f8301cbc61e30bb801011f0618fb0612152fe5d26590aa655296cf3fb5fb967"} Jan 09 13:45:51 crc kubenswrapper[4919]: I0109 13:45:51.698793 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-6954755664-nmm8h" event={"ID":"2ebbd42e-c3b8-4e1c-b4ee-bf9316669667","Type":"ContainerStarted","Data":"af3a9b278bd064dd0f750810ff6f6e7af4b57b312d7282be8617b0b0f488efd8"} Jan 09 13:45:51 crc kubenswrapper[4919]: I0109 13:45:51.700291 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-6954755664-nmm8h" Jan 09 13:45:51 crc kubenswrapper[4919]: I0109 13:45:51.778257 4919 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack-operators/openstack-operator-controller-operator-6954755664-nmm8h" podStartSLOduration=2.285207172 podStartE2EDuration="13.778237897s" podCreationTimestamp="2026-01-09 13:45:38 +0000 UTC" firstStartedPulling="2026-01-09 13:45:39.066297009 +0000 UTC m=+918.614136459" lastFinishedPulling="2026-01-09 13:45:50.559327734 +0000 UTC m=+930.107167184" observedRunningTime="2026-01-09 13:45:51.770643961 +0000 UTC m=+931.318483411" watchObservedRunningTime="2026-01-09 13:45:51.778237897 +0000 UTC m=+931.326077347" Jan 09 13:45:53 crc kubenswrapper[4919]: I0109 13:45:53.713379 4919 generic.go:334] "Generic (PLEG): container finished" podID="970261c2-d536-4bd0-b290-b6e18124c036" containerID="0e0123729c1773c9377b15d663c8327c2cd6f40e8945c4992399ff056e958df3" exitCode=0 Jan 09 13:45:53 crc kubenswrapper[4919]: I0109 13:45:53.713503 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7mntl" event={"ID":"970261c2-d536-4bd0-b290-b6e18124c036","Type":"ContainerDied","Data":"0e0123729c1773c9377b15d663c8327c2cd6f40e8945c4992399ff056e958df3"} Jan 09 13:45:54 crc kubenswrapper[4919]: I0109 13:45:54.743959 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7mntl" event={"ID":"970261c2-d536-4bd0-b290-b6e18124c036","Type":"ContainerStarted","Data":"b65f9cdde05391eef3c759b4e8440ebe1db7a0c90e8aa89f3cc30590f89070f9"} Jan 09 13:45:54 crc kubenswrapper[4919]: I0109 13:45:54.764805 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7mntl" podStartSLOduration=9.276452959 podStartE2EDuration="11.76478774s" podCreationTimestamp="2026-01-09 13:45:43 +0000 UTC" firstStartedPulling="2026-01-09 13:45:51.699947889 +0000 UTC m=+931.247787339" lastFinishedPulling="2026-01-09 13:45:54.18828267 +0000 UTC m=+933.736122120" observedRunningTime="2026-01-09 13:45:54.761478049 +0000 UTC m=+934.309317509" watchObservedRunningTime="2026-01-09 13:45:54.76478774 +0000 UTC m=+934.312627190" Jan 09 13:45:58 crc kubenswrapper[4919]: I0109 13:45:58.795795 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-6954755664-nmm8h" Jan 09 13:46:03 crc kubenswrapper[4919]: I0109 13:46:03.890412 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7mntl" Jan 09 13:46:03 crc kubenswrapper[4919]: I0109 13:46:03.890770 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7mntl" Jan 09 13:46:04 crc kubenswrapper[4919]: I0109 13:46:04.122506 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7mntl" Jan 09 13:46:04 crc kubenswrapper[4919]: I0109 13:46:04.859183 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7mntl" Jan 09 13:46:04 crc kubenswrapper[4919]: I0109 13:46:04.904713 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7mntl"] Jan 09 13:46:06 crc kubenswrapper[4919]: I0109 13:46:06.815099 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7mntl" podUID="970261c2-d536-4bd0-b290-b6e18124c036" containerName="registry-server" 
containerID="cri-o://b65f9cdde05391eef3c759b4e8440ebe1db7a0c90e8aa89f3cc30590f89070f9" gracePeriod=2 Jan 09 13:46:08 crc kubenswrapper[4919]: I0109 13:46:08.835929 4919 generic.go:334] "Generic (PLEG): container finished" podID="970261c2-d536-4bd0-b290-b6e18124c036" containerID="b65f9cdde05391eef3c759b4e8440ebe1db7a0c90e8aa89f3cc30590f89070f9" exitCode=0 Jan 09 13:46:08 crc kubenswrapper[4919]: I0109 13:46:08.836027 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7mntl" event={"ID":"970261c2-d536-4bd0-b290-b6e18124c036","Type":"ContainerDied","Data":"b65f9cdde05391eef3c759b4e8440ebe1db7a0c90e8aa89f3cc30590f89070f9"} Jan 09 13:46:09 crc kubenswrapper[4919]: I0109 13:46:09.111936 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7mntl" Jan 09 13:46:09 crc kubenswrapper[4919]: I0109 13:46:09.143173 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/970261c2-d536-4bd0-b290-b6e18124c036-utilities\") pod \"970261c2-d536-4bd0-b290-b6e18124c036\" (UID: \"970261c2-d536-4bd0-b290-b6e18124c036\") " Jan 09 13:46:09 crc kubenswrapper[4919]: I0109 13:46:09.143357 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnt5h\" (UniqueName: \"kubernetes.io/projected/970261c2-d536-4bd0-b290-b6e18124c036-kube-api-access-nnt5h\") pod \"970261c2-d536-4bd0-b290-b6e18124c036\" (UID: \"970261c2-d536-4bd0-b290-b6e18124c036\") " Jan 09 13:46:09 crc kubenswrapper[4919]: I0109 13:46:09.143401 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/970261c2-d536-4bd0-b290-b6e18124c036-catalog-content\") pod \"970261c2-d536-4bd0-b290-b6e18124c036\" (UID: \"970261c2-d536-4bd0-b290-b6e18124c036\") " Jan 09 13:46:09 crc kubenswrapper[4919]: I0109 13:46:09.144086 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/970261c2-d536-4bd0-b290-b6e18124c036-utilities" (OuterVolumeSpecName: "utilities") pod "970261c2-d536-4bd0-b290-b6e18124c036" (UID: "970261c2-d536-4bd0-b290-b6e18124c036"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:46:09 crc kubenswrapper[4919]: I0109 13:46:09.148129 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/970261c2-d536-4bd0-b290-b6e18124c036-kube-api-access-nnt5h" (OuterVolumeSpecName: "kube-api-access-nnt5h") pod "970261c2-d536-4bd0-b290-b6e18124c036" (UID: "970261c2-d536-4bd0-b290-b6e18124c036"). InnerVolumeSpecName "kube-api-access-nnt5h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:46:09 crc kubenswrapper[4919]: I0109 13:46:09.184676 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/970261c2-d536-4bd0-b290-b6e18124c036-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "970261c2-d536-4bd0-b290-b6e18124c036" (UID: "970261c2-d536-4bd0-b290-b6e18124c036"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:46:09 crc kubenswrapper[4919]: I0109 13:46:09.244146 4919 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/970261c2-d536-4bd0-b290-b6e18124c036-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 13:46:09 crc kubenswrapper[4919]: I0109 13:46:09.244183 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnt5h\" (UniqueName: \"kubernetes.io/projected/970261c2-d536-4bd0-b290-b6e18124c036-kube-api-access-nnt5h\") on node \"crc\" DevicePath \"\"" Jan 09 13:46:09 crc kubenswrapper[4919]: I0109 13:46:09.244197 4919 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/970261c2-d536-4bd0-b290-b6e18124c036-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 13:46:09 crc kubenswrapper[4919]: I0109 13:46:09.843761 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7mntl" event={"ID":"970261c2-d536-4bd0-b290-b6e18124c036","Type":"ContainerDied","Data":"8f8301cbc61e30bb801011f0618fb0612152fe5d26590aa655296cf3fb5fb967"} Jan 09 13:46:09 crc kubenswrapper[4919]: I0109 13:46:09.843808 4919 scope.go:117] "RemoveContainer" containerID="b65f9cdde05391eef3c759b4e8440ebe1db7a0c90e8aa89f3cc30590f89070f9" Jan 09 13:46:09 crc kubenswrapper[4919]: I0109 13:46:09.843909 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7mntl" Jan 09 13:46:09 crc kubenswrapper[4919]: I0109 13:46:09.860630 4919 scope.go:117] "RemoveContainer" containerID="0e0123729c1773c9377b15d663c8327c2cd6f40e8945c4992399ff056e958df3" Jan 09 13:46:09 crc kubenswrapper[4919]: I0109 13:46:09.870383 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7mntl"] Jan 09 13:46:09 crc kubenswrapper[4919]: I0109 13:46:09.875479 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7mntl"] Jan 09 13:46:09 crc kubenswrapper[4919]: I0109 13:46:09.893189 4919 scope.go:117] "RemoveContainer" containerID="630acdd06024b9ce6b6e11cdfade656d0c4a48f05d761efc339386bdc368e5ad" Jan 09 13:46:10 crc kubenswrapper[4919]: I0109 13:46:10.760001 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="970261c2-d536-4bd0-b290-b6e18124c036" path="/var/lib/kubelet/pods/970261c2-d536-4bd0-b290-b6e18124c036/volumes" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.016565 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p8wlj"] Jan 09 13:46:18 crc kubenswrapper[4919]: E0109 13:46:18.017547 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="970261c2-d536-4bd0-b290-b6e18124c036" containerName="extract-content" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.017569 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="970261c2-d536-4bd0-b290-b6e18124c036" containerName="extract-content" Jan 09 13:46:18 crc kubenswrapper[4919]: E0109 13:46:18.017593 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="970261c2-d536-4bd0-b290-b6e18124c036" containerName="registry-server" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.017604 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="970261c2-d536-4bd0-b290-b6e18124c036" containerName="registry-server" Jan 09 13:46:18 crc kubenswrapper[4919]: E0109 13:46:18.017620 4919 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="970261c2-d536-4bd0-b290-b6e18124c036" containerName="extract-utilities" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.017632 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="970261c2-d536-4bd0-b290-b6e18124c036" containerName="extract-utilities" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.017808 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="970261c2-d536-4bd0-b290-b6e18124c036" containerName="registry-server" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.019311 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p8wlj" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.031060 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p8wlj"] Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.120939 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-f6f74d6db-h6cp9"] Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.121997 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-h6cp9" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.124329 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-94g8n" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.125677 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-78979fc445-m56bk"] Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.126495 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-m56bk" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.128022 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-dxdd8" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.138031 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-78979fc445-m56bk"] Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.142194 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-f6f74d6db-h6cp9"] Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.161566 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgnlp\" (UniqueName: \"kubernetes.io/projected/ceacd617-f87e-4765-9a75-9cde47b80e8d-kube-api-access-xgnlp\") pod \"community-operators-p8wlj\" (UID: \"ceacd617-f87e-4765-9a75-9cde47b80e8d\") " pod="openshift-marketplace/community-operators-p8wlj" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.161615 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ceacd617-f87e-4765-9a75-9cde47b80e8d-catalog-content\") pod \"community-operators-p8wlj\" (UID: \"ceacd617-f87e-4765-9a75-9cde47b80e8d\") " pod="openshift-marketplace/community-operators-p8wlj" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.161709 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ceacd617-f87e-4765-9a75-9cde47b80e8d-utilities\") pod \"community-operators-p8wlj\" (UID: \"ceacd617-f87e-4765-9a75-9cde47b80e8d\") " pod="openshift-marketplace/community-operators-p8wlj" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.166776 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-66f8b87655-wxt2z"] Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.167830 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-wxt2z" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.169500 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-hjfhj" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.175463 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-7b549fc966-s46b7"] Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.176556 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-s46b7" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.179410 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-mjkwm" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.185400 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66f8b87655-wxt2z"] Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.217530 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-7b549fc966-s46b7"] Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.235495 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-658dd65b86-vvsj9"] Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.236337 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-vvsj9" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.239626 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-zzjxc" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.250590 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-f2drg"] Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.254021 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-f2drg" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.261615 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-658dd65b86-vvsj9"] Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.261832 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-dpzlx" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.262905 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj4qd\" (UniqueName: \"kubernetes.io/projected/7635e70a-4259-4c43-91b7-eae6fc0d3c12-kube-api-access-kj4qd\") pod \"heat-operator-controller-manager-658dd65b86-vvsj9\" (UID: \"7635e70a-4259-4c43-91b7-eae6fc0d3c12\") " pod="openstack-operators/heat-operator-controller-manager-658dd65b86-vvsj9" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.262939 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7q79b\" (UniqueName: \"kubernetes.io/projected/60feaa4f-ca73-4e59-a85f-c17132f8f708-kube-api-access-7q79b\") pod \"horizon-operator-controller-manager-7f5ddd8d7b-f2drg\" (UID: \"60feaa4f-ca73-4e59-a85f-c17132f8f708\") " pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-f2drg" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.262957 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnbn8\" (UniqueName: \"kubernetes.io/projected/b46937ef-2f83-4864-b0d4-5464ed82e1b8-kube-api-access-hnbn8\") pod \"designate-operator-controller-manager-66f8b87655-wxt2z\" (UID: \"b46937ef-2f83-4864-b0d4-5464ed82e1b8\") " 
pod="openstack-operators/designate-operator-controller-manager-66f8b87655-wxt2z" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.262980 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64s9v\" (UniqueName: \"kubernetes.io/projected/d0081380-9d2e-40bb-8cc9-f124d4fbfd25-kube-api-access-64s9v\") pod \"barbican-operator-controller-manager-f6f74d6db-h6cp9\" (UID: \"d0081380-9d2e-40bb-8cc9-f124d4fbfd25\") " pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-h6cp9" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.263010 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgnlp\" (UniqueName: \"kubernetes.io/projected/ceacd617-f87e-4765-9a75-9cde47b80e8d-kube-api-access-xgnlp\") pod \"community-operators-p8wlj\" (UID: \"ceacd617-f87e-4765-9a75-9cde47b80e8d\") " pod="openshift-marketplace/community-operators-p8wlj" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.263037 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2k7b\" (UniqueName: \"kubernetes.io/projected/7716ced4-dfb9-4a5c-936f-65edbf78f5dd-kube-api-access-c2k7b\") pod \"glance-operator-controller-manager-7b549fc966-s46b7\" (UID: \"7716ced4-dfb9-4a5c-936f-65edbf78f5dd\") " pod="openstack-operators/glance-operator-controller-manager-7b549fc966-s46b7" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.263056 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ceacd617-f87e-4765-9a75-9cde47b80e8d-catalog-content\") pod \"community-operators-p8wlj\" (UID: \"ceacd617-f87e-4765-9a75-9cde47b80e8d\") " pod="openshift-marketplace/community-operators-p8wlj" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.263074 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ceacd617-f87e-4765-9a75-9cde47b80e8d-utilities\") pod \"community-operators-p8wlj\" (UID: \"ceacd617-f87e-4765-9a75-9cde47b80e8d\") " pod="openshift-marketplace/community-operators-p8wlj" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.263100 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4csgr\" (UniqueName: \"kubernetes.io/projected/276f41de-c875-40be-816a-84eb02212fda-kube-api-access-4csgr\") pod \"cinder-operator-controller-manager-78979fc445-m56bk\" (UID: \"276f41de-c875-40be-816a-84eb02212fda\") " pod="openstack-operators/cinder-operator-controller-manager-78979fc445-m56bk" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.264260 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ceacd617-f87e-4765-9a75-9cde47b80e8d-utilities\") pod \"community-operators-p8wlj\" (UID: \"ceacd617-f87e-4765-9a75-9cde47b80e8d\") " pod="openshift-marketplace/community-operators-p8wlj" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.264579 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ceacd617-f87e-4765-9a75-9cde47b80e8d-catalog-content\") pod \"community-operators-p8wlj\" (UID: \"ceacd617-f87e-4765-9a75-9cde47b80e8d\") " pod="openshift-marketplace/community-operators-p8wlj" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.276270 4919 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-6d99759cf-6s6wp"] Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.277042 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-6s6wp" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.280894 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.281291 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-lgxt2" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.301489 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-f2drg"] Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.312162 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgnlp\" (UniqueName: \"kubernetes.io/projected/ceacd617-f87e-4765-9a75-9cde47b80e8d-kube-api-access-xgnlp\") pod \"community-operators-p8wlj\" (UID: \"ceacd617-f87e-4765-9a75-9cde47b80e8d\") " pod="openshift-marketplace/community-operators-p8wlj" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.333332 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-f99f54bc8-4r7j8"] Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.334133 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-4r7j8" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.340529 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p8wlj" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.341055 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-568985c78-r5j45"] Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.341531 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-47dtr" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.342152 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-568985c78-r5j45" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.346491 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-dpnlv" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.350415 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-6d99759cf-6s6wp"] Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.360451 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-f99f54bc8-4r7j8"] Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.363998 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kj4qd\" (UniqueName: \"kubernetes.io/projected/7635e70a-4259-4c43-91b7-eae6fc0d3c12-kube-api-access-kj4qd\") pod \"heat-operator-controller-manager-658dd65b86-vvsj9\" (UID: \"7635e70a-4259-4c43-91b7-eae6fc0d3c12\") " pod="openstack-operators/heat-operator-controller-manager-658dd65b86-vvsj9" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.364047 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7q79b\" (UniqueName: \"kubernetes.io/projected/60feaa4f-ca73-4e59-a85f-c17132f8f708-kube-api-access-7q79b\") pod \"horizon-operator-controller-manager-7f5ddd8d7b-f2drg\" (UID: \"60feaa4f-ca73-4e59-a85f-c17132f8f708\") " pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-f2drg" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.364070 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnbn8\" (UniqueName: \"kubernetes.io/projected/b46937ef-2f83-4864-b0d4-5464ed82e1b8-kube-api-access-hnbn8\") pod \"designate-operator-controller-manager-66f8b87655-wxt2z\" (UID: \"b46937ef-2f83-4864-b0d4-5464ed82e1b8\") " pod="openstack-operators/designate-operator-controller-manager-66f8b87655-wxt2z" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.364100 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64s9v\" (UniqueName: \"kubernetes.io/projected/d0081380-9d2e-40bb-8cc9-f124d4fbfd25-kube-api-access-64s9v\") pod \"barbican-operator-controller-manager-f6f74d6db-h6cp9\" (UID: \"d0081380-9d2e-40bb-8cc9-f124d4fbfd25\") " pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-h6cp9" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.364143 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2k7b\" (UniqueName: \"kubernetes.io/projected/7716ced4-dfb9-4a5c-936f-65edbf78f5dd-kube-api-access-c2k7b\") pod \"glance-operator-controller-manager-7b549fc966-s46b7\" (UID: \"7716ced4-dfb9-4a5c-936f-65edbf78f5dd\") " pod="openstack-operators/glance-operator-controller-manager-7b549fc966-s46b7" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.364178 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4csgr\" (UniqueName: \"kubernetes.io/projected/276f41de-c875-40be-816a-84eb02212fda-kube-api-access-4csgr\") pod \"cinder-operator-controller-manager-78979fc445-m56bk\" (UID: \"276f41de-c875-40be-816a-84eb02212fda\") " pod="openstack-operators/cinder-operator-controller-manager-78979fc445-m56bk" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.383392 4919 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-568985c78-r5j45"] Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.440982 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4csgr\" (UniqueName: \"kubernetes.io/projected/276f41de-c875-40be-816a-84eb02212fda-kube-api-access-4csgr\") pod \"cinder-operator-controller-manager-78979fc445-m56bk\" (UID: \"276f41de-c875-40be-816a-84eb02212fda\") " pod="openstack-operators/cinder-operator-controller-manager-78979fc445-m56bk" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.441459 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kj4qd\" (UniqueName: \"kubernetes.io/projected/7635e70a-4259-4c43-91b7-eae6fc0d3c12-kube-api-access-kj4qd\") pod \"heat-operator-controller-manager-658dd65b86-vvsj9\" (UID: \"7635e70a-4259-4c43-91b7-eae6fc0d3c12\") " pod="openstack-operators/heat-operator-controller-manager-658dd65b86-vvsj9" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.442089 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64s9v\" (UniqueName: \"kubernetes.io/projected/d0081380-9d2e-40bb-8cc9-f124d4fbfd25-kube-api-access-64s9v\") pod \"barbican-operator-controller-manager-f6f74d6db-h6cp9\" (UID: \"d0081380-9d2e-40bb-8cc9-f124d4fbfd25\") " pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-h6cp9" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.444089 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-h6cp9" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.444667 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-598945d5b8-cd2dq"] Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.445474 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7q79b\" (UniqueName: \"kubernetes.io/projected/60feaa4f-ca73-4e59-a85f-c17132f8f708-kube-api-access-7q79b\") pod \"horizon-operator-controller-manager-7f5ddd8d7b-f2drg\" (UID: \"60feaa4f-ca73-4e59-a85f-c17132f8f708\") " pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-f2drg" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.450848 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-cd2dq" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.453391 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-5kc9m" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.454966 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-m56bk" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.468709 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2k7b\" (UniqueName: \"kubernetes.io/projected/7716ced4-dfb9-4a5c-936f-65edbf78f5dd-kube-api-access-c2k7b\") pod \"glance-operator-controller-manager-7b549fc966-s46b7\" (UID: \"7716ced4-dfb9-4a5c-936f-65edbf78f5dd\") " pod="openstack-operators/glance-operator-controller-manager-7b549fc966-s46b7" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.469186 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnbn8\" (UniqueName: \"kubernetes.io/projected/b46937ef-2f83-4864-b0d4-5464ed82e1b8-kube-api-access-hnbn8\") pod \"designate-operator-controller-manager-66f8b87655-wxt2z\" (UID: \"b46937ef-2f83-4864-b0d4-5464ed82e1b8\") " pod="openstack-operators/designate-operator-controller-manager-66f8b87655-wxt2z" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.470935 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2mwj\" (UniqueName: \"kubernetes.io/projected/4d08a973-3a9e-4098-95fd-d314d9f4e1af-kube-api-access-l2mwj\") pod \"ironic-operator-controller-manager-f99f54bc8-4r7j8\" (UID: \"4d08a973-3a9e-4098-95fd-d314d9f4e1af\") " pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-4r7j8" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.470986 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/af1be546-436f-43ef-b748-22860362f61e-cert\") pod \"infra-operator-controller-manager-6d99759cf-6s6wp\" (UID: \"af1be546-436f-43ef-b748-22860362f61e\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-6s6wp" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.471031 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c6qv\" (UniqueName: \"kubernetes.io/projected/53cc8efc-85ec-4ddf-82c5-c1db01fe8120-kube-api-access-2c6qv\") pod \"manila-operator-controller-manager-598945d5b8-cd2dq\" (UID: \"53cc8efc-85ec-4ddf-82c5-c1db01fe8120\") " pod="openstack-operators/manila-operator-controller-manager-598945d5b8-cd2dq" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.471057 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ws6c\" (UniqueName: \"kubernetes.io/projected/af1be546-436f-43ef-b748-22860362f61e-kube-api-access-4ws6c\") pod \"infra-operator-controller-manager-6d99759cf-6s6wp\" (UID: \"af1be546-436f-43ef-b748-22860362f61e\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-6s6wp" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.471089 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j29l5\" (UniqueName: \"kubernetes.io/projected/33efa14f-00b9-49b4-bc2a-5c0c13d60613-kube-api-access-j29l5\") pod \"keystone-operator-controller-manager-568985c78-r5j45\" (UID: \"33efa14f-00b9-49b4-bc2a-5c0c13d60613\") " pod="openstack-operators/keystone-operator-controller-manager-568985c78-r5j45" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.487410 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b88bfc995-9bn9t"] Jan 
09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.488819 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-9bn9t" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.511256 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-wxt2z" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.512540 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-sbdxb" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.513628 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-s46b7" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.535814 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-598945d5b8-cd2dq"] Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.542282 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b88bfc995-9bn9t"] Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.554709 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7cd87b778f-jl5xm"] Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.555675 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-jl5xm" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.556678 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-vvsj9" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.560577 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-nwbb4" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.563981 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-jl878"] Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.565043 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-jl878" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.566723 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-59pzq" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.572366 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlwvr\" (UniqueName: \"kubernetes.io/projected/55fe5bfd-cc48-498b-88f7-789a3048a743-kube-api-access-xlwvr\") pod \"neutron-operator-controller-manager-7cd87b778f-jl5xm\" (UID: \"55fe5bfd-cc48-498b-88f7-789a3048a743\") " pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-jl5xm" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.572421 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/af1be546-436f-43ef-b748-22860362f61e-cert\") pod \"infra-operator-controller-manager-6d99759cf-6s6wp\" (UID: \"af1be546-436f-43ef-b748-22860362f61e\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-6s6wp" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.572451 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2c6qv\" (UniqueName: \"kubernetes.io/projected/53cc8efc-85ec-4ddf-82c5-c1db01fe8120-kube-api-access-2c6qv\") pod \"manila-operator-controller-manager-598945d5b8-cd2dq\" (UID: \"53cc8efc-85ec-4ddf-82c5-c1db01fe8120\") " pod="openstack-operators/manila-operator-controller-manager-598945d5b8-cd2dq" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.572477 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ws6c\" (UniqueName: \"kubernetes.io/projected/af1be546-436f-43ef-b748-22860362f61e-kube-api-access-4ws6c\") pod \"infra-operator-controller-manager-6d99759cf-6s6wp\" (UID: \"af1be546-436f-43ef-b748-22860362f61e\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-6s6wp" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.572504 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j29l5\" (UniqueName: \"kubernetes.io/projected/33efa14f-00b9-49b4-bc2a-5c0c13d60613-kube-api-access-j29l5\") pod \"keystone-operator-controller-manager-568985c78-r5j45\" (UID: \"33efa14f-00b9-49b4-bc2a-5c0c13d60613\") " pod="openstack-operators/keystone-operator-controller-manager-568985c78-r5j45" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.572536 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69xjq\" (UniqueName: \"kubernetes.io/projected/19ebcfcf-3a6a-4c2c-ab15-2239e08bca09-kube-api-access-69xjq\") pod \"nova-operator-controller-manager-5fbbf8b6cc-jl878\" (UID: \"19ebcfcf-3a6a-4c2c-ab15-2239e08bca09\") " pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-jl878" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.572561 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2mwj\" (UniqueName: \"kubernetes.io/projected/4d08a973-3a9e-4098-95fd-d314d9f4e1af-kube-api-access-l2mwj\") pod \"ironic-operator-controller-manager-f99f54bc8-4r7j8\" (UID: \"4d08a973-3a9e-4098-95fd-d314d9f4e1af\") " pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-4r7j8" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 
13:46:18.572587 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29hgx\" (UniqueName: \"kubernetes.io/projected/37ea4d3a-1d7d-47b2-8eee-1a7601c2de24-kube-api-access-29hgx\") pod \"mariadb-operator-controller-manager-7b88bfc995-9bn9t\" (UID: \"37ea4d3a-1d7d-47b2-8eee-1a7601c2de24\") " pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-9bn9t" Jan 09 13:46:18 crc kubenswrapper[4919]: E0109 13:46:18.572844 4919 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 09 13:46:18 crc kubenswrapper[4919]: E0109 13:46:18.572898 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af1be546-436f-43ef-b748-22860362f61e-cert podName:af1be546-436f-43ef-b748-22860362f61e nodeName:}" failed. No retries permitted until 2026-01-09 13:46:19.072878574 +0000 UTC m=+958.620718204 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/af1be546-436f-43ef-b748-22860362f61e-cert") pod "infra-operator-controller-manager-6d99759cf-6s6wp" (UID: "af1be546-436f-43ef-b748-22860362f61e") : secret "infra-operator-webhook-server-cert" not found Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.574558 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7cd87b778f-jl5xm"] Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.578197 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-f2drg" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.579656 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-jl878"] Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.599351 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-68c649d9d-4ppq5"] Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.600615 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-4ppq5" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.603257 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-54wtw" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.606102 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ws6c\" (UniqueName: \"kubernetes.io/projected/af1be546-436f-43ef-b748-22860362f61e-kube-api-access-4ws6c\") pod \"infra-operator-controller-manager-6d99759cf-6s6wp\" (UID: \"af1be546-436f-43ef-b748-22860362f61e\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-6s6wp" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.618226 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-68c649d9d-4ppq5"] Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.620817 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2c6qv\" (UniqueName: \"kubernetes.io/projected/53cc8efc-85ec-4ddf-82c5-c1db01fe8120-kube-api-access-2c6qv\") pod \"manila-operator-controller-manager-598945d5b8-cd2dq\" (UID: \"53cc8efc-85ec-4ddf-82c5-c1db01fe8120\") " pod="openstack-operators/manila-operator-controller-manager-598945d5b8-cd2dq" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.621376 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j29l5\" (UniqueName: \"kubernetes.io/projected/33efa14f-00b9-49b4-bc2a-5c0c13d60613-kube-api-access-j29l5\") pod \"keystone-operator-controller-manager-568985c78-r5j45\" (UID: \"33efa14f-00b9-49b4-bc2a-5c0c13d60613\") " pod="openstack-operators/keystone-operator-controller-manager-568985c78-r5j45" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.624112 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2mwj\" (UniqueName: \"kubernetes.io/projected/4d08a973-3a9e-4098-95fd-d314d9f4e1af-kube-api-access-l2mwj\") pod \"ironic-operator-controller-manager-f99f54bc8-4r7j8\" (UID: \"4d08a973-3a9e-4098-95fd-d314d9f4e1af\") " pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-4r7j8" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.628773 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bf6d4f946-wnwmg"] Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.629727 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-wnwmg" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.631236 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-ps6t5" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.638914 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-75f6ff484-ll94k"] Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.641471 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-75f6ff484-ll94k" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.643412 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.643427 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-xkfr5" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.644712 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bf6d4f946-wnwmg"] Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.652109 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-9b6f8f78c-8kjrk"] Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.653036 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-8kjrk" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.657034 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-dp4n7" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.661082 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-9b6f8f78c-8kjrk"] Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.667675 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-75f6ff484-ll94k"] Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.674620 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29hgx\" (UniqueName: \"kubernetes.io/projected/37ea4d3a-1d7d-47b2-8eee-1a7601c2de24-kube-api-access-29hgx\") pod \"mariadb-operator-controller-manager-7b88bfc995-9bn9t\" (UID: \"37ea4d3a-1d7d-47b2-8eee-1a7601c2de24\") " pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-9bn9t" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.674705 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlwvr\" (UniqueName: \"kubernetes.io/projected/55fe5bfd-cc48-498b-88f7-789a3048a743-kube-api-access-xlwvr\") pod \"neutron-operator-controller-manager-7cd87b778f-jl5xm\" (UID: \"55fe5bfd-cc48-498b-88f7-789a3048a743\") " pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-jl5xm" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.674914 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69xjq\" (UniqueName: \"kubernetes.io/projected/19ebcfcf-3a6a-4c2c-ab15-2239e08bca09-kube-api-access-69xjq\") pod \"nova-operator-controller-manager-5fbbf8b6cc-jl878\" (UID: \"19ebcfcf-3a6a-4c2c-ab15-2239e08bca09\") " pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-jl878" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.684027 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-bb586bbf4-47s64"] Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.684937 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-47s64" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.725587 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69xjq\" (UniqueName: \"kubernetes.io/projected/19ebcfcf-3a6a-4c2c-ab15-2239e08bca09-kube-api-access-69xjq\") pod \"nova-operator-controller-manager-5fbbf8b6cc-jl878\" (UID: \"19ebcfcf-3a6a-4c2c-ab15-2239e08bca09\") " pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-jl878" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.726908 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlwvr\" (UniqueName: \"kubernetes.io/projected/55fe5bfd-cc48-498b-88f7-789a3048a743-kube-api-access-xlwvr\") pod \"neutron-operator-controller-manager-7cd87b778f-jl5xm\" (UID: \"55fe5bfd-cc48-498b-88f7-789a3048a743\") " pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-jl5xm" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.727131 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-2mppp" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.736967 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29hgx\" (UniqueName: \"kubernetes.io/projected/37ea4d3a-1d7d-47b2-8eee-1a7601c2de24-kube-api-access-29hgx\") pod \"mariadb-operator-controller-manager-7b88bfc995-9bn9t\" (UID: \"37ea4d3a-1d7d-47b2-8eee-1a7601c2de24\") " pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-9bn9t" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.739232 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-bb586bbf4-47s64"] Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.774170 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-68d988df55-wzww9"] Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.774947 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-wzww9" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.776800 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-68d988df55-wzww9"] Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.782203 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x6ww\" (UniqueName: \"kubernetes.io/projected/58f271ce-d537-4588-ba66-53f08136ee13-kube-api-access-5x6ww\") pod \"placement-operator-controller-manager-9b6f8f78c-8kjrk\" (UID: \"58f271ce-d537-4588-ba66-53f08136ee13\") " pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-8kjrk" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.782374 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsnr8\" (UniqueName: \"kubernetes.io/projected/7c5b2e5b-6474-46f3-861b-aba8d47c714b-kube-api-access-fsnr8\") pod \"ovn-operator-controller-manager-bf6d4f946-wnwmg\" (UID: \"7c5b2e5b-6474-46f3-861b-aba8d47c714b\") " pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-wnwmg" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.782439 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/488f8708-4c49-429f-9697-a00b8fadd486-cert\") pod \"openstack-baremetal-operator-controller-manager-75f6ff484-ll94k\" (UID: \"488f8708-4c49-429f-9697-a00b8fadd486\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-75f6ff484-ll94k" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.782560 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tc6c\" (UniqueName: \"kubernetes.io/projected/488f8708-4c49-429f-9697-a00b8fadd486-kube-api-access-2tc6c\") pod \"openstack-baremetal-operator-controller-manager-75f6ff484-ll94k\" (UID: \"488f8708-4c49-429f-9697-a00b8fadd486\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-75f6ff484-ll94k" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.782686 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgh4m\" (UniqueName: \"kubernetes.io/projected/2bf404b6-0f77-4a02-a45a-ad46980755cb-kube-api-access-tgh4m\") pod \"octavia-operator-controller-manager-68c649d9d-4ppq5\" (UID: \"2bf404b6-0f77-4a02-a45a-ad46980755cb\") " pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-4ppq5" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.783281 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-h67t8" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.792030 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-4r7j8" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.826125 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-6c866cfdcb-84x8m"] Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.839147 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-568985c78-r5j45" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.840226 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-6c866cfdcb-84x8m"] Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.840311 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-84x8m" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.841152 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-cd2dq" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.844633 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-smsft" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.855187 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-9bn9t" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.884645 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-jl5xm" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.887200 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxfdk\" (UniqueName: \"kubernetes.io/projected/782f359d-9941-4528-851a-4db3673cb439-kube-api-access-bxfdk\") pod \"swift-operator-controller-manager-bb586bbf4-47s64\" (UID: \"782f359d-9941-4528-851a-4db3673cb439\") " pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-47s64" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.887281 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tc6c\" (UniqueName: \"kubernetes.io/projected/488f8708-4c49-429f-9697-a00b8fadd486-kube-api-access-2tc6c\") pod \"openstack-baremetal-operator-controller-manager-75f6ff484-ll94k\" (UID: \"488f8708-4c49-429f-9697-a00b8fadd486\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-75f6ff484-ll94k" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.887351 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgh4m\" (UniqueName: \"kubernetes.io/projected/2bf404b6-0f77-4a02-a45a-ad46980755cb-kube-api-access-tgh4m\") pod \"octavia-operator-controller-manager-68c649d9d-4ppq5\" (UID: \"2bf404b6-0f77-4a02-a45a-ad46980755cb\") " pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-4ppq5" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.887388 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5x6ww\" (UniqueName: \"kubernetes.io/projected/58f271ce-d537-4588-ba66-53f08136ee13-kube-api-access-5x6ww\") pod \"placement-operator-controller-manager-9b6f8f78c-8kjrk\" (UID: \"58f271ce-d537-4588-ba66-53f08136ee13\") " pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-8kjrk" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.887409 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npn4w\" (UniqueName: \"kubernetes.io/projected/5bd72cd8-70f2-45ef-a451-8468e79eaca9-kube-api-access-npn4w\") pod 
\"telemetry-operator-controller-manager-68d988df55-wzww9\" (UID: \"5bd72cd8-70f2-45ef-a451-8468e79eaca9\") " pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-wzww9" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.887432 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsnr8\" (UniqueName: \"kubernetes.io/projected/7c5b2e5b-6474-46f3-861b-aba8d47c714b-kube-api-access-fsnr8\") pod \"ovn-operator-controller-manager-bf6d4f946-wnwmg\" (UID: \"7c5b2e5b-6474-46f3-861b-aba8d47c714b\") " pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-wnwmg" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.887456 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/488f8708-4c49-429f-9697-a00b8fadd486-cert\") pod \"openstack-baremetal-operator-controller-manager-75f6ff484-ll94k\" (UID: \"488f8708-4c49-429f-9697-a00b8fadd486\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-75f6ff484-ll94k" Jan 09 13:46:18 crc kubenswrapper[4919]: E0109 13:46:18.887560 4919 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 09 13:46:18 crc kubenswrapper[4919]: E0109 13:46:18.887601 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/488f8708-4c49-429f-9697-a00b8fadd486-cert podName:488f8708-4c49-429f-9697-a00b8fadd486 nodeName:}" failed. No retries permitted until 2026-01-09 13:46:19.387586981 +0000 UTC m=+958.935426431 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/488f8708-4c49-429f-9697-a00b8fadd486-cert") pod "openstack-baremetal-operator-controller-manager-75f6ff484-ll94k" (UID: "488f8708-4c49-429f-9697-a00b8fadd486") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.913774 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5x6ww\" (UniqueName: \"kubernetes.io/projected/58f271ce-d537-4588-ba66-53f08136ee13-kube-api-access-5x6ww\") pod \"placement-operator-controller-manager-9b6f8f78c-8kjrk\" (UID: \"58f271ce-d537-4588-ba66-53f08136ee13\") " pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-8kjrk" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.915959 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgh4m\" (UniqueName: \"kubernetes.io/projected/2bf404b6-0f77-4a02-a45a-ad46980755cb-kube-api-access-tgh4m\") pod \"octavia-operator-controller-manager-68c649d9d-4ppq5\" (UID: \"2bf404b6-0f77-4a02-a45a-ad46980755cb\") " pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-4ppq5" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.922479 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsnr8\" (UniqueName: \"kubernetes.io/projected/7c5b2e5b-6474-46f3-861b-aba8d47c714b-kube-api-access-fsnr8\") pod \"ovn-operator-controller-manager-bf6d4f946-wnwmg\" (UID: \"7c5b2e5b-6474-46f3-861b-aba8d47c714b\") " pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-wnwmg" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.928929 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tc6c\" (UniqueName: 
\"kubernetes.io/projected/488f8708-4c49-429f-9697-a00b8fadd486-kube-api-access-2tc6c\") pod \"openstack-baremetal-operator-controller-manager-75f6ff484-ll94k\" (UID: \"488f8708-4c49-429f-9697-a00b8fadd486\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-75f6ff484-ll94k" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.929000 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-9dbdf6486-nk5sx"] Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.930037 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-nk5sx" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.932230 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-jl878" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.933302 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-wb27b" Jan 09 13:46:18 crc kubenswrapper[4919]: I0109 13:46:18.966019 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-4ppq5" Jan 09 13:46:19 crc kubenswrapper[4919]: I0109 13:46:18.985623 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-9dbdf6486-nk5sx"] Jan 09 13:46:19 crc kubenswrapper[4919]: I0109 13:46:19.030400 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlv67\" (UniqueName: \"kubernetes.io/projected/7c1ac56d-4f45-4102-8336-2cec59c44d9d-kube-api-access-rlv67\") pod \"test-operator-controller-manager-6c866cfdcb-84x8m\" (UID: \"7c1ac56d-4f45-4102-8336-2cec59c44d9d\") " pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-84x8m" Jan 09 13:46:19 crc kubenswrapper[4919]: I0109 13:46:19.030473 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npn4w\" (UniqueName: \"kubernetes.io/projected/5bd72cd8-70f2-45ef-a451-8468e79eaca9-kube-api-access-npn4w\") pod \"telemetry-operator-controller-manager-68d988df55-wzww9\" (UID: \"5bd72cd8-70f2-45ef-a451-8468e79eaca9\") " pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-wzww9" Jan 09 13:46:19 crc kubenswrapper[4919]: I0109 13:46:19.030539 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxfdk\" (UniqueName: \"kubernetes.io/projected/782f359d-9941-4528-851a-4db3673cb439-kube-api-access-bxfdk\") pod \"swift-operator-controller-manager-bb586bbf4-47s64\" (UID: \"782f359d-9941-4528-851a-4db3673cb439\") " pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-47s64" Jan 09 13:46:19 crc kubenswrapper[4919]: I0109 13:46:19.030688 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-wnwmg" Jan 09 13:46:19 crc kubenswrapper[4919]: I0109 13:46:19.049077 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-8kjrk" Jan 09 13:46:19 crc kubenswrapper[4919]: I0109 13:46:19.104471 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5fb94578dd-p4xfn"] Jan 09 13:46:19 crc kubenswrapper[4919]: I0109 13:46:19.105606 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5fb94578dd-p4xfn" Jan 09 13:46:19 crc kubenswrapper[4919]: I0109 13:46:19.119008 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 09 13:46:19 crc kubenswrapper[4919]: I0109 13:46:19.119044 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 09 13:46:19 crc kubenswrapper[4919]: I0109 13:46:19.119102 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-vkkcj" Jan 09 13:46:19 crc kubenswrapper[4919]: I0109 13:46:19.119838 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5fb94578dd-p4xfn"] Jan 09 13:46:19 crc kubenswrapper[4919]: I0109 13:46:19.127683 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npn4w\" (UniqueName: \"kubernetes.io/projected/5bd72cd8-70f2-45ef-a451-8468e79eaca9-kube-api-access-npn4w\") pod \"telemetry-operator-controller-manager-68d988df55-wzww9\" (UID: \"5bd72cd8-70f2-45ef-a451-8468e79eaca9\") " pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-wzww9" Jan 09 13:46:19 crc kubenswrapper[4919]: I0109 13:46:19.137356 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qg5n7"] Jan 09 13:46:19 crc kubenswrapper[4919]: I0109 13:46:19.139717 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qg5n7" Jan 09 13:46:19 crc kubenswrapper[4919]: I0109 13:46:19.144022 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-5ds5q" Jan 09 13:46:19 crc kubenswrapper[4919]: I0109 13:46:19.146244 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qg5n7"] Jan 09 13:46:19 crc kubenswrapper[4919]: I0109 13:46:19.148502 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/af1be546-436f-43ef-b748-22860362f61e-cert\") pod \"infra-operator-controller-manager-6d99759cf-6s6wp\" (UID: \"af1be546-436f-43ef-b748-22860362f61e\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-6s6wp" Jan 09 13:46:19 crc kubenswrapper[4919]: I0109 13:46:19.148580 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlv67\" (UniqueName: \"kubernetes.io/projected/7c1ac56d-4f45-4102-8336-2cec59c44d9d-kube-api-access-rlv67\") pod \"test-operator-controller-manager-6c866cfdcb-84x8m\" (UID: \"7c1ac56d-4f45-4102-8336-2cec59c44d9d\") " pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-84x8m" Jan 09 13:46:19 crc kubenswrapper[4919]: I0109 13:46:19.148614 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbz8g\" (UniqueName: \"kubernetes.io/projected/4f5bfa64-2b7e-4b30-aedc-56cd44f47032-kube-api-access-xbz8g\") pod \"watcher-operator-controller-manager-9dbdf6486-nk5sx\" (UID: \"4f5bfa64-2b7e-4b30-aedc-56cd44f47032\") " pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-nk5sx" Jan 09 13:46:19 crc kubenswrapper[4919]: E0109 13:46:19.149190 4919 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 09 13:46:19 crc kubenswrapper[4919]: E0109 13:46:19.149242 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af1be546-436f-43ef-b748-22860362f61e-cert podName:af1be546-436f-43ef-b748-22860362f61e nodeName:}" failed. No retries permitted until 2026-01-09 13:46:20.149227239 +0000 UTC m=+959.697066689 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/af1be546-436f-43ef-b748-22860362f61e-cert") pod "infra-operator-controller-manager-6d99759cf-6s6wp" (UID: "af1be546-436f-43ef-b748-22860362f61e") : secret "infra-operator-webhook-server-cert" not found Jan 09 13:46:19 crc kubenswrapper[4919]: I0109 13:46:19.154840 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-wzww9" Jan 09 13:46:19 crc kubenswrapper[4919]: I0109 13:46:19.157302 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxfdk\" (UniqueName: \"kubernetes.io/projected/782f359d-9941-4528-851a-4db3673cb439-kube-api-access-bxfdk\") pod \"swift-operator-controller-manager-bb586bbf4-47s64\" (UID: \"782f359d-9941-4528-851a-4db3673cb439\") " pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-47s64" Jan 09 13:46:19 crc kubenswrapper[4919]: I0109 13:46:19.186787 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlv67\" (UniqueName: \"kubernetes.io/projected/7c1ac56d-4f45-4102-8336-2cec59c44d9d-kube-api-access-rlv67\") pod \"test-operator-controller-manager-6c866cfdcb-84x8m\" (UID: \"7c1ac56d-4f45-4102-8336-2cec59c44d9d\") " pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-84x8m" Jan 09 13:46:19 crc kubenswrapper[4919]: I0109 13:46:19.188616 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-84x8m" Jan 09 13:46:19 crc kubenswrapper[4919]: I0109 13:46:19.252788 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txdxs\" (UniqueName: \"kubernetes.io/projected/e77d7646-4198-42f3-ac22-f0974b18a0ab-kube-api-access-txdxs\") pod \"openstack-operator-controller-manager-5fb94578dd-p4xfn\" (UID: \"e77d7646-4198-42f3-ac22-f0974b18a0ab\") " pod="openstack-operators/openstack-operator-controller-manager-5fb94578dd-p4xfn" Jan 09 13:46:19 crc kubenswrapper[4919]: I0109 13:46:19.252837 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9c9k\" (UniqueName: \"kubernetes.io/projected/e9f24ed0-e850-4906-901d-b23777cf500f-kube-api-access-r9c9k\") pod \"rabbitmq-cluster-operator-manager-668c99d594-qg5n7\" (UID: \"e9f24ed0-e850-4906-901d-b23777cf500f\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qg5n7" Jan 09 13:46:19 crc kubenswrapper[4919]: I0109 13:46:19.252873 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbz8g\" (UniqueName: \"kubernetes.io/projected/4f5bfa64-2b7e-4b30-aedc-56cd44f47032-kube-api-access-xbz8g\") pod \"watcher-operator-controller-manager-9dbdf6486-nk5sx\" (UID: \"4f5bfa64-2b7e-4b30-aedc-56cd44f47032\") " pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-nk5sx" Jan 09 13:46:19 crc kubenswrapper[4919]: I0109 13:46:19.252908 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e77d7646-4198-42f3-ac22-f0974b18a0ab-webhook-certs\") pod \"openstack-operator-controller-manager-5fb94578dd-p4xfn\" (UID: \"e77d7646-4198-42f3-ac22-f0974b18a0ab\") " pod="openstack-operators/openstack-operator-controller-manager-5fb94578dd-p4xfn" Jan 09 13:46:19 crc kubenswrapper[4919]: I0109 13:46:19.252924 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e77d7646-4198-42f3-ac22-f0974b18a0ab-metrics-certs\") pod \"openstack-operator-controller-manager-5fb94578dd-p4xfn\" (UID: \"e77d7646-4198-42f3-ac22-f0974b18a0ab\") " 
pod="openstack-operators/openstack-operator-controller-manager-5fb94578dd-p4xfn" Jan 09 13:46:19 crc kubenswrapper[4919]: I0109 13:46:19.360576 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txdxs\" (UniqueName: \"kubernetes.io/projected/e77d7646-4198-42f3-ac22-f0974b18a0ab-kube-api-access-txdxs\") pod \"openstack-operator-controller-manager-5fb94578dd-p4xfn\" (UID: \"e77d7646-4198-42f3-ac22-f0974b18a0ab\") " pod="openstack-operators/openstack-operator-controller-manager-5fb94578dd-p4xfn" Jan 09 13:46:19 crc kubenswrapper[4919]: I0109 13:46:19.360619 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9c9k\" (UniqueName: \"kubernetes.io/projected/e9f24ed0-e850-4906-901d-b23777cf500f-kube-api-access-r9c9k\") pod \"rabbitmq-cluster-operator-manager-668c99d594-qg5n7\" (UID: \"e9f24ed0-e850-4906-901d-b23777cf500f\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qg5n7" Jan 09 13:46:19 crc kubenswrapper[4919]: I0109 13:46:19.360659 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e77d7646-4198-42f3-ac22-f0974b18a0ab-webhook-certs\") pod \"openstack-operator-controller-manager-5fb94578dd-p4xfn\" (UID: \"e77d7646-4198-42f3-ac22-f0974b18a0ab\") " pod="openstack-operators/openstack-operator-controller-manager-5fb94578dd-p4xfn" Jan 09 13:46:19 crc kubenswrapper[4919]: I0109 13:46:19.360674 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e77d7646-4198-42f3-ac22-f0974b18a0ab-metrics-certs\") pod \"openstack-operator-controller-manager-5fb94578dd-p4xfn\" (UID: \"e77d7646-4198-42f3-ac22-f0974b18a0ab\") " pod="openstack-operators/openstack-operator-controller-manager-5fb94578dd-p4xfn" Jan 09 13:46:19 crc kubenswrapper[4919]: E0109 13:46:19.360806 4919 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 09 13:46:19 crc kubenswrapper[4919]: E0109 13:46:19.360857 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e77d7646-4198-42f3-ac22-f0974b18a0ab-metrics-certs podName:e77d7646-4198-42f3-ac22-f0974b18a0ab nodeName:}" failed. No retries permitted until 2026-01-09 13:46:19.860839222 +0000 UTC m=+959.408678672 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e77d7646-4198-42f3-ac22-f0974b18a0ab-metrics-certs") pod "openstack-operator-controller-manager-5fb94578dd-p4xfn" (UID: "e77d7646-4198-42f3-ac22-f0974b18a0ab") : secret "metrics-server-cert" not found Jan 09 13:46:19 crc kubenswrapper[4919]: E0109 13:46:19.360904 4919 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 09 13:46:19 crc kubenswrapper[4919]: E0109 13:46:19.360976 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e77d7646-4198-42f3-ac22-f0974b18a0ab-webhook-certs podName:e77d7646-4198-42f3-ac22-f0974b18a0ab nodeName:}" failed. No retries permitted until 2026-01-09 13:46:19.860955634 +0000 UTC m=+959.408795314 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e77d7646-4198-42f3-ac22-f0974b18a0ab-webhook-certs") pod "openstack-operator-controller-manager-5fb94578dd-p4xfn" (UID: "e77d7646-4198-42f3-ac22-f0974b18a0ab") : secret "webhook-server-cert" not found Jan 09 13:46:19 crc kubenswrapper[4919]: I0109 13:46:19.361750 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbz8g\" (UniqueName: \"kubernetes.io/projected/4f5bfa64-2b7e-4b30-aedc-56cd44f47032-kube-api-access-xbz8g\") pod \"watcher-operator-controller-manager-9dbdf6486-nk5sx\" (UID: \"4f5bfa64-2b7e-4b30-aedc-56cd44f47032\") " pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-nk5sx" Jan 09 13:46:19 crc kubenswrapper[4919]: I0109 13:46:19.376813 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-47s64" Jan 09 13:46:19 crc kubenswrapper[4919]: I0109 13:46:19.381649 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9c9k\" (UniqueName: \"kubernetes.io/projected/e9f24ed0-e850-4906-901d-b23777cf500f-kube-api-access-r9c9k\") pod \"rabbitmq-cluster-operator-manager-668c99d594-qg5n7\" (UID: \"e9f24ed0-e850-4906-901d-b23777cf500f\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qg5n7" Jan 09 13:46:19 crc kubenswrapper[4919]: I0109 13:46:19.399846 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txdxs\" (UniqueName: \"kubernetes.io/projected/e77d7646-4198-42f3-ac22-f0974b18a0ab-kube-api-access-txdxs\") pod \"openstack-operator-controller-manager-5fb94578dd-p4xfn\" (UID: \"e77d7646-4198-42f3-ac22-f0974b18a0ab\") " pod="openstack-operators/openstack-operator-controller-manager-5fb94578dd-p4xfn" Jan 09 13:46:19 crc kubenswrapper[4919]: I0109 13:46:19.474947 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/488f8708-4c49-429f-9697-a00b8fadd486-cert\") pod \"openstack-baremetal-operator-controller-manager-75f6ff484-ll94k\" (UID: \"488f8708-4c49-429f-9697-a00b8fadd486\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-75f6ff484-ll94k" Jan 09 13:46:19 crc kubenswrapper[4919]: E0109 13:46:19.475117 4919 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 09 13:46:19 crc kubenswrapper[4919]: E0109 13:46:19.475198 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/488f8708-4c49-429f-9697-a00b8fadd486-cert podName:488f8708-4c49-429f-9697-a00b8fadd486 nodeName:}" failed. No retries permitted until 2026-01-09 13:46:20.475178272 +0000 UTC m=+960.023017722 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/488f8708-4c49-429f-9697-a00b8fadd486-cert") pod "openstack-baremetal-operator-controller-manager-75f6ff484-ll94k" (UID: "488f8708-4c49-429f-9697-a00b8fadd486") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 09 13:46:19 crc kubenswrapper[4919]: I0109 13:46:19.479871 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qg5n7" Jan 09 13:46:19 crc kubenswrapper[4919]: I0109 13:46:19.572564 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-nk5sx" Jan 09 13:46:19 crc kubenswrapper[4919]: I0109 13:46:19.916850 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e77d7646-4198-42f3-ac22-f0974b18a0ab-webhook-certs\") pod \"openstack-operator-controller-manager-5fb94578dd-p4xfn\" (UID: \"e77d7646-4198-42f3-ac22-f0974b18a0ab\") " pod="openstack-operators/openstack-operator-controller-manager-5fb94578dd-p4xfn" Jan 09 13:46:19 crc kubenswrapper[4919]: I0109 13:46:19.917082 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e77d7646-4198-42f3-ac22-f0974b18a0ab-metrics-certs\") pod \"openstack-operator-controller-manager-5fb94578dd-p4xfn\" (UID: \"e77d7646-4198-42f3-ac22-f0974b18a0ab\") " pod="openstack-operators/openstack-operator-controller-manager-5fb94578dd-p4xfn" Jan 09 13:46:19 crc kubenswrapper[4919]: E0109 13:46:19.917232 4919 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 09 13:46:19 crc kubenswrapper[4919]: E0109 13:46:19.917302 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e77d7646-4198-42f3-ac22-f0974b18a0ab-metrics-certs podName:e77d7646-4198-42f3-ac22-f0974b18a0ab nodeName:}" failed. No retries permitted until 2026-01-09 13:46:20.9172878 +0000 UTC m=+960.465127250 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e77d7646-4198-42f3-ac22-f0974b18a0ab-metrics-certs") pod "openstack-operator-controller-manager-5fb94578dd-p4xfn" (UID: "e77d7646-4198-42f3-ac22-f0974b18a0ab") : secret "metrics-server-cert" not found Jan 09 13:46:19 crc kubenswrapper[4919]: E0109 13:46:19.917394 4919 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 09 13:46:19 crc kubenswrapper[4919]: E0109 13:46:19.917435 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e77d7646-4198-42f3-ac22-f0974b18a0ab-webhook-certs podName:e77d7646-4198-42f3-ac22-f0974b18a0ab nodeName:}" failed. No retries permitted until 2026-01-09 13:46:20.917422223 +0000 UTC m=+960.465261673 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e77d7646-4198-42f3-ac22-f0974b18a0ab-webhook-certs") pod "openstack-operator-controller-manager-5fb94578dd-p4xfn" (UID: "e77d7646-4198-42f3-ac22-f0974b18a0ab") : secret "webhook-server-cert" not found Jan 09 13:46:20 crc kubenswrapper[4919]: I0109 13:46:20.186454 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/af1be546-436f-43ef-b748-22860362f61e-cert\") pod \"infra-operator-controller-manager-6d99759cf-6s6wp\" (UID: \"af1be546-436f-43ef-b748-22860362f61e\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-6s6wp" Jan 09 13:46:20 crc kubenswrapper[4919]: E0109 13:46:20.186624 4919 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 09 13:46:20 crc kubenswrapper[4919]: E0109 13:46:20.186686 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af1be546-436f-43ef-b748-22860362f61e-cert podName:af1be546-436f-43ef-b748-22860362f61e nodeName:}" failed. No retries permitted until 2026-01-09 13:46:22.186668947 +0000 UTC m=+961.734508397 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/af1be546-436f-43ef-b748-22860362f61e-cert") pod "infra-operator-controller-manager-6d99759cf-6s6wp" (UID: "af1be546-436f-43ef-b748-22860362f61e") : secret "infra-operator-webhook-server-cert" not found Jan 09 13:46:20 crc kubenswrapper[4919]: I0109 13:46:20.531178 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/488f8708-4c49-429f-9697-a00b8fadd486-cert\") pod \"openstack-baremetal-operator-controller-manager-75f6ff484-ll94k\" (UID: \"488f8708-4c49-429f-9697-a00b8fadd486\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-75f6ff484-ll94k" Jan 09 13:46:20 crc kubenswrapper[4919]: E0109 13:46:20.531381 4919 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 09 13:46:20 crc kubenswrapper[4919]: E0109 13:46:20.531466 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/488f8708-4c49-429f-9697-a00b8fadd486-cert podName:488f8708-4c49-429f-9697-a00b8fadd486 nodeName:}" failed. No retries permitted until 2026-01-09 13:46:22.531445291 +0000 UTC m=+962.079284741 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/488f8708-4c49-429f-9697-a00b8fadd486-cert") pod "openstack-baremetal-operator-controller-manager-75f6ff484-ll94k" (UID: "488f8708-4c49-429f-9697-a00b8fadd486") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 09 13:46:20 crc kubenswrapper[4919]: I0109 13:46:20.542076 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p8wlj"] Jan 09 13:46:20 crc kubenswrapper[4919]: I0109 13:46:20.682595 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-78979fc445-m56bk"] Jan 09 13:46:20 crc kubenswrapper[4919]: W0109 13:46:20.684447 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod276f41de_c875_40be_816a_84eb02212fda.slice/crio-716de6ea1079ea1a4ee22604f0b5dea5e3b94baf28224024182318c55d7254c8 WatchSource:0}: Error finding container 716de6ea1079ea1a4ee22604f0b5dea5e3b94baf28224024182318c55d7254c8: Status 404 returned error can't find the container with id 716de6ea1079ea1a4ee22604f0b5dea5e3b94baf28224024182318c55d7254c8 Jan 09 13:46:20 crc kubenswrapper[4919]: I0109 13:46:20.823880 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-f6f74d6db-h6cp9"] Jan 09 13:46:20 crc kubenswrapper[4919]: W0109 13:46:20.832932 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0081380_9d2e_40bb_8cc9_f124d4fbfd25.slice/crio-7e0640c1d18440d85029df22d64fb9a09e333b86333d1478f6eaa12f5194d2c2 WatchSource:0}: Error finding container 7e0640c1d18440d85029df22d64fb9a09e333b86333d1478f6eaa12f5194d2c2: Status 404 returned error can't find the container with id 7e0640c1d18440d85029df22d64fb9a09e333b86333d1478f6eaa12f5194d2c2 Jan 09 13:46:20 crc kubenswrapper[4919]: W0109 13:46:20.868161 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb46937ef_2f83_4864_b0d4_5464ed82e1b8.slice/crio-bcf432252f50667b21a6abba3912632da4cffd78ef46009a227d486a3a17ea48 WatchSource:0}: Error finding container bcf432252f50667b21a6abba3912632da4cffd78ef46009a227d486a3a17ea48: Status 404 returned error can't find the container with id bcf432252f50667b21a6abba3912632da4cffd78ef46009a227d486a3a17ea48 Jan 09 13:46:20 crc kubenswrapper[4919]: W0109 13:46:20.868526 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60feaa4f_ca73_4e59_a85f_c17132f8f708.slice/crio-2f0f9899221acafddcf0663c731e6231ebe6283364ec50a0a500fcb2d82f9906 WatchSource:0}: Error finding container 2f0f9899221acafddcf0663c731e6231ebe6283364ec50a0a500fcb2d82f9906: Status 404 returned error can't find the container with id 2f0f9899221acafddcf0663c731e6231ebe6283364ec50a0a500fcb2d82f9906 Jan 09 13:46:20 crc kubenswrapper[4919]: I0109 13:46:20.872695 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66f8b87655-wxt2z"] Jan 09 13:46:20 crc kubenswrapper[4919]: I0109 13:46:20.886459 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-f2drg"] Jan 09 13:46:20 crc kubenswrapper[4919]: W0109 13:46:20.931188 4919 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7716ced4_dfb9_4a5c_936f_65edbf78f5dd.slice/crio-23ba46288201a0c1f097461b61beae8b92aed120abe3f353481df76b0acf6318 WatchSource:0}: Error finding container 23ba46288201a0c1f097461b61beae8b92aed120abe3f353481df76b0acf6318: Status 404 returned error can't find the container with id 23ba46288201a0c1f097461b61beae8b92aed120abe3f353481df76b0acf6318 Jan 09 13:46:20 crc kubenswrapper[4919]: I0109 13:46:20.931830 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-7b549fc966-s46b7"] Jan 09 13:46:20 crc kubenswrapper[4919]: I0109 13:46:20.935879 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e77d7646-4198-42f3-ac22-f0974b18a0ab-webhook-certs\") pod \"openstack-operator-controller-manager-5fb94578dd-p4xfn\" (UID: \"e77d7646-4198-42f3-ac22-f0974b18a0ab\") " pod="openstack-operators/openstack-operator-controller-manager-5fb94578dd-p4xfn" Jan 09 13:46:20 crc kubenswrapper[4919]: I0109 13:46:20.935915 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e77d7646-4198-42f3-ac22-f0974b18a0ab-metrics-certs\") pod \"openstack-operator-controller-manager-5fb94578dd-p4xfn\" (UID: \"e77d7646-4198-42f3-ac22-f0974b18a0ab\") " pod="openstack-operators/openstack-operator-controller-manager-5fb94578dd-p4xfn" Jan 09 13:46:20 crc kubenswrapper[4919]: E0109 13:46:20.936121 4919 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 09 13:46:20 crc kubenswrapper[4919]: E0109 13:46:20.936167 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e77d7646-4198-42f3-ac22-f0974b18a0ab-metrics-certs podName:e77d7646-4198-42f3-ac22-f0974b18a0ab nodeName:}" failed. No retries permitted until 2026-01-09 13:46:22.936151543 +0000 UTC m=+962.483990993 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e77d7646-4198-42f3-ac22-f0974b18a0ab-metrics-certs") pod "openstack-operator-controller-manager-5fb94578dd-p4xfn" (UID: "e77d7646-4198-42f3-ac22-f0974b18a0ab") : secret "metrics-server-cert" not found Jan 09 13:46:20 crc kubenswrapper[4919]: I0109 13:46:20.939175 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-f2drg" event={"ID":"60feaa4f-ca73-4e59-a85f-c17132f8f708","Type":"ContainerStarted","Data":"2f0f9899221acafddcf0663c731e6231ebe6283364ec50a0a500fcb2d82f9906"} Jan 09 13:46:20 crc kubenswrapper[4919]: E0109 13:46:20.943006 4919 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 09 13:46:20 crc kubenswrapper[4919]: E0109 13:46:20.943125 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e77d7646-4198-42f3-ac22-f0974b18a0ab-webhook-certs podName:e77d7646-4198-42f3-ac22-f0974b18a0ab nodeName:}" failed. No retries permitted until 2026-01-09 13:46:22.943084003 +0000 UTC m=+962.490923453 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e77d7646-4198-42f3-ac22-f0974b18a0ab-webhook-certs") pod "openstack-operator-controller-manager-5fb94578dd-p4xfn" (UID: "e77d7646-4198-42f3-ac22-f0974b18a0ab") : secret "webhook-server-cert" not found Jan 09 13:46:20 crc kubenswrapper[4919]: I0109 13:46:20.945442 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-m56bk" event={"ID":"276f41de-c875-40be-816a-84eb02212fda","Type":"ContainerStarted","Data":"716de6ea1079ea1a4ee22604f0b5dea5e3b94baf28224024182318c55d7254c8"} Jan 09 13:46:20 crc kubenswrapper[4919]: I0109 13:46:20.950250 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-f99f54bc8-4r7j8"] Jan 09 13:46:20 crc kubenswrapper[4919]: I0109 13:46:20.954402 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-h6cp9" event={"ID":"d0081380-9d2e-40bb-8cc9-f124d4fbfd25","Type":"ContainerStarted","Data":"7e0640c1d18440d85029df22d64fb9a09e333b86333d1478f6eaa12f5194d2c2"} Jan 09 13:46:20 crc kubenswrapper[4919]: I0109 13:46:20.955832 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-598945d5b8-cd2dq"] Jan 09 13:46:20 crc kubenswrapper[4919]: I0109 13:46:20.957731 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p8wlj" event={"ID":"ceacd617-f87e-4765-9a75-9cde47b80e8d","Type":"ContainerStarted","Data":"bdd5d28e318017e60d5ef9420e7d32aee1a13d9ce653722a7f1a73333065c46c"} Jan 09 13:46:20 crc kubenswrapper[4919]: I0109 13:46:20.959633 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-wxt2z" event={"ID":"b46937ef-2f83-4864-b0d4-5464ed82e1b8","Type":"ContainerStarted","Data":"bcf432252f50667b21a6abba3912632da4cffd78ef46009a227d486a3a17ea48"} Jan 09 13:46:20 crc kubenswrapper[4919]: W0109 13:46:20.964884 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53cc8efc_85ec_4ddf_82c5_c1db01fe8120.slice/crio-0c4d0bf725ca37f90bf85064d6fd59febb285ad26c2d2d253535a15fa9b70d55 WatchSource:0}: Error finding container 0c4d0bf725ca37f90bf85064d6fd59febb285ad26c2d2d253535a15fa9b70d55: Status 404 returned error can't find the container with id 0c4d0bf725ca37f90bf85064d6fd59febb285ad26c2d2d253535a15fa9b70d55 Jan 09 13:46:20 crc kubenswrapper[4919]: I0109 13:46:20.966506 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-568985c78-r5j45"] Jan 09 13:46:20 crc kubenswrapper[4919]: W0109 13:46:20.980135 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33efa14f_00b9_49b4_bc2a_5c0c13d60613.slice/crio-a4a0b1b8f29fe688b0fe5a4379df60ac3c7bfb8961566573c6ffe395ff821554 WatchSource:0}: Error finding container a4a0b1b8f29fe688b0fe5a4379df60ac3c7bfb8961566573c6ffe395ff821554: Status 404 returned error can't find the container with id a4a0b1b8f29fe688b0fe5a4379df60ac3c7bfb8961566573c6ffe395ff821554 Jan 09 13:46:20 crc kubenswrapper[4919]: I0109 13:46:20.982010 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-658dd65b86-vvsj9"] Jan 09 13:46:21 crc 
Jan 09 13:46:21 crc kubenswrapper[4919]: I0109 13:46:21.076779 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-jl878"]
Jan 09 13:46:21 crc kubenswrapper[4919]: I0109 13:46:21.083189 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-68d988df55-wzww9"]
Jan 09 13:46:21 crc kubenswrapper[4919]: W0109 13:46:21.105146 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5bd72cd8_70f2_45ef_a451_8468e79eaca9.slice/crio-5703a21e9e84d0150121063afb3e30eaa1901ac330c8f60bbcbea4538d1ad41e WatchSource:0}: Error finding container 5703a21e9e84d0150121063afb3e30eaa1901ac330c8f60bbcbea4538d1ad41e: Status 404 returned error can't find the container with id 5703a21e9e84d0150121063afb3e30eaa1901ac330c8f60bbcbea4538d1ad41e
Jan 09 13:46:21 crc kubenswrapper[4919]: I0109 13:46:21.202790 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7cd87b778f-jl5xm"]
Jan 09 13:46:21 crc kubenswrapper[4919]: I0109 13:46:21.208180 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-bb586bbf4-47s64"]
Jan 09 13:46:21 crc kubenswrapper[4919]: W0109 13:46:21.216450 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55fe5bfd_cc48_498b_88f7_789a3048a743.slice/crio-e3b1539b725d6c23b0e27fe06834a364c1b367fd7cd8da5c6972f717b1c587ab WatchSource:0}: Error finding container e3b1539b725d6c23b0e27fe06834a364c1b367fd7cd8da5c6972f717b1c587ab: Status 404 returned error can't find the container with id e3b1539b725d6c23b0e27fe06834a364c1b367fd7cd8da5c6972f717b1c587ab
Jan 09 13:46:21 crc kubenswrapper[4919]: E0109 13:46:21.218622 4919 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xlwvr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-7cd87b778f-jl5xm_openstack-operators(55fe5bfd-cc48-498b-88f7-789a3048a743): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 09 13:46:21 crc kubenswrapper[4919]: I0109 13:46:21.219139 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-6c866cfdcb-84x8m"]
Jan 09 13:46:21 crc kubenswrapper[4919]: E0109 13:46:21.220068 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-jl5xm" podUID="55fe5bfd-cc48-498b-88f7-789a3048a743"
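Here the pulls start failing with "ErrImagePull: pull QPS exceeded". The kubelet rate-limits registry pulls (the registryPullQPS and registryBurst settings in the kubelet configuration), and starting a dozen operator deployments at once exhausts the burst, so pulls past the limit are rejected immediately rather than queued, and the pod workers retry later. A token-bucket sketch of that behavior; the 5 QPS / 10 burst values mirror commonly cited kubelet defaults but should be treated as assumptions here:

package main

import (
    "fmt"
    "time"
)

type bucket struct {
    tokens   float64
    max      float64
    qps      float64
    lastFill time.Time
}

// tryTake refills the bucket based on elapsed time, then attempts to
// consume one token; a failed take maps to "pull QPS exceeded".
func (b *bucket) tryTake(now time.Time) bool {
    b.tokens += now.Sub(b.lastFill).Seconds() * b.qps
    if b.tokens > b.max {
        b.tokens = b.max
    }
    b.lastFill = now
    if b.tokens >= 1 {
        b.tokens--
        return true
    }
    return false
}

func main() {
    b := &bucket{tokens: 10, max: 10, qps: 5, lastFill: time.Now()}
    for i := 1; i <= 12; i++ { // 12 operator images requested back-to-back
        if b.tryTake(time.Now()) {
            fmt.Printf("pull %d: allowed\n", i)
        } else {
            fmt.Printf("pull %d: ErrImagePull: pull QPS exceeded\n", i)
        }
    }
}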
Jan 09 13:46:21 crc kubenswrapper[4919]: E0109 13:46:21.234109 4919 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:4e3d234c1398039c2593611f7b0fd2a6b284cafb1563e6737876a265b9af42b6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rlv67,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-6c866cfdcb-84x8m_openstack-operators(7c1ac56d-4f45-4102-8336-2cec59c44d9d): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 09 13:46:21 crc kubenswrapper[4919]: E0109 13:46:21.236987 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-84x8m" podUID="7c1ac56d-4f45-4102-8336-2cec59c44d9d"
Jan 09 13:46:21 crc kubenswrapper[4919]: I0109 13:46:21.246504 4919 patch_prober.go:28] interesting pod/machine-config-daemon-9m5lv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 09 13:46:21 crc kubenswrapper[4919]: I0109 13:46:21.246550 4919 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 09 13:46:21 crc kubenswrapper[4919]: I0109 13:46:21.246587 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qg5n7"]
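Interleaved with the image-pull errors, machine-config-daemon fails its liveness probe with "connection refused": the prober issues an HTTP GET against the pod and finds no listener. The container specs dumped above show the same machinery configured for the operator pods (HTTPGet on /healthz and /readyz at port 8081, TimeoutSeconds:1, PeriodSeconds:20, FailureThreshold:3), so a restart is triggered only after three consecutive failures. A minimal sketch of such a prober:

package main

import (
    "fmt"
    "net/http"
    "time"
)

// probe performs one HTTP check with the 1s timeout seen in the specs.
func probe(url string) bool {
    client := http.Client{Timeout: 1 * time.Second}
    resp, err := client.Get(url)
    if err != nil {
        fmt.Println("Probe failed:", err) // e.g. connect: connection refused
        return false
    }
    resp.Body.Close()
    return resp.StatusCode >= 200 && resp.StatusCode < 400
}

func main() {
    const failureThreshold = 3 // FailureThreshold from the container specs
    failures := 0
    for failures < failureThreshold {
        if probe("http://127.0.0.1:8798/health") {
            failures = 0
        } else {
            failures++
        }
        time.Sleep(1 * time.Second) // PeriodSeconds, shortened for the sketch
    }
    fmt.Println("liveness threshold hit: the container would be restarted")
}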
Jan 09 13:46:21 crc kubenswrapper[4919]: E0109 13:46:21.254266 4919 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r9c9k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-qg5n7_openstack-operators(e9f24ed0-e850-4906-901d-b23777cf500f): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 09 13:46:21 crc kubenswrapper[4919]: E0109 13:46:21.255614 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qg5n7" podUID="e9f24ed0-e850-4906-901d-b23777cf500f"
Jan 09 13:46:21 crc kubenswrapper[4919]: I0109 13:46:21.387338 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b88bfc995-9bn9t"]
Jan 09 13:46:21 crc kubenswrapper[4919]: W0109 13:46:21.398604 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37ea4d3a_1d7d_47b2_8eee_1a7601c2de24.slice/crio-599fe40c6122c40bf8923451cd707eae5601198b1362b17f6c3a026cb3c10088 WatchSource:0}: Error finding container 599fe40c6122c40bf8923451cd707eae5601198b1362b17f6c3a026cb3c10088: Status 404 returned error can't find the container with id 599fe40c6122c40bf8923451cd707eae5601198b1362b17f6c3a026cb3c10088
Jan 09 13:46:21 crc kubenswrapper[4919]: I0109 13:46:21.400386 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-9b6f8f78c-8kjrk"]
Jan 09 13:46:21 crc kubenswrapper[4919]: I0109 13:46:21.408811 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-9dbdf6486-nk5sx"]
Jan 09 13:46:21 crc kubenswrapper[4919]: W0109 13:46:21.412343 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58f271ce_d537_4588_ba66_53f08136ee13.slice/crio-92a934882c867036f4aa2d8cc43d5245f7bca78776657798310b5ef51cd93286 WatchSource:0}: Error finding container 92a934882c867036f4aa2d8cc43d5245f7bca78776657798310b5ef51cd93286: Status 404 returned error can't find the container with id 92a934882c867036f4aa2d8cc43d5245f7bca78776657798310b5ef51cd93286
Jan 09 13:46:21 crc kubenswrapper[4919]: I0109 13:46:21.413800 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bf6d4f946-wnwmg"]
Jan 09 13:46:21 crc kubenswrapper[4919]: E0109 13:46:21.417640 4919 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:1b684c4ca525a279deee45980140d895e264526c5c7e0a6981d6fae6cbcaa420,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5x6ww,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-9b6f8f78c-8kjrk_openstack-operators(58f271ce-d537-4588-ba66-53f08136ee13): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 09 13:46:21 crc kubenswrapper[4919]: E0109 13:46:21.418772 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-8kjrk" podUID="58f271ce-d537-4588-ba66-53f08136ee13"
Jan 09 13:46:21 crc kubenswrapper[4919]: W0109 13:46:21.423548 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f5bfa64_2b7e_4b30_aedc_56cd44f47032.slice/crio-1ce6b5afd2012185b87765269b6d27517b22591955268f0517768804d85d0706 WatchSource:0}: Error finding container 1ce6b5afd2012185b87765269b6d27517b22591955268f0517768804d85d0706: Status 404 returned error can't find the container with id 1ce6b5afd2012185b87765269b6d27517b22591955268f0517768804d85d0706
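The enormous "&Container{...}" blobs in these entries are not corruption: when a container fails to start, kuberuntime_manager logs the full container spec, and Kubernetes API types render themselves in this Go struct notation (field names, &-prefixed nested pointers, enums and resource quantities included), which is why the image digest, args, env, resources, probes and security context of the failing container can all be recovered from a single log line. To the best of my understanding this rendering comes from generated String() methods on the API types; a toy reproduction of the idea:

package main

import "fmt"

type Probe struct {
    Path string
    Port int
}

// String mimics the stringers on the real API types.
func (p *Probe) String() string {
    return fmt.Sprintf("&Probe{Path:%s,Port:{0 %d },}", p.Path, p.Port)
}

type Container struct {
    Name          string
    Image         string
    Command       []string
    LivenessProbe *Probe
}

func (c *Container) String() string {
    return fmt.Sprintf("&Container{Name:%s,Image:%s,Command:%v,LivenessProbe:%v,}",
        c.Name, c.Image, c.Command, c.LivenessProbe)
}

func main() {
    c := &Container{
        Name:          "manager",
        Image:         "quay.io/openstack-k8s-operators/placement-operator@sha256:1b684c4ca525a279deee45980140d895e264526c5c7e0a6981d6fae6cbcaa420",
        Command:       []string{"/manager"},
        LivenessProbe: &Probe{Path: "/healthz", Port: 8081},
    }
    fmt.Printf("container %v start failed: pull QPS exceeded\n", c)
}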
Jan 09 13:46:21 crc kubenswrapper[4919]: W0109 13:46:21.426418 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c5b2e5b_6474_46f3_861b_aba8d47c714b.slice/crio-a707fa58f8bba6b36c02238796a96b0e46b95ca4afd1683a1636f55d4a0b7a92 WatchSource:0}: Error finding container a707fa58f8bba6b36c02238796a96b0e46b95ca4afd1683a1636f55d4a0b7a92: Status 404 returned error can't find the container with id a707fa58f8bba6b36c02238796a96b0e46b95ca4afd1683a1636f55d4a0b7a92
Jan 09 13:46:21 crc kubenswrapper[4919]: E0109 13:46:21.426819 4919 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:f0ece9a81e4be3dbc1ff752a951970380546d8c0dea910953f862c219444b97a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xbz8g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-9dbdf6486-nk5sx_openstack-operators(4f5bfa64-2b7e-4b30-aedc-56cd44f47032): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 09 13:46:21 crc kubenswrapper[4919]: E0109 13:46:21.428061 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-nk5sx" podUID="4f5bfa64-2b7e-4b30-aedc-56cd44f47032"
Jan 09 13:46:21 crc kubenswrapper[4919]: E0109 13:46:21.428921 4919 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fsnr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-bf6d4f946-wnwmg_openstack-operators(7c5b2e5b-6474-46f3-861b-aba8d47c714b): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 09 13:46:21 crc kubenswrapper[4919]: W0109 13:46:21.428981 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2bf404b6_0f77_4a02_a45a_ad46980755cb.slice/crio-f8f235754e32b8ef63ab92a00602c11708a3bbe020e39e3a32de61d7d7383002 WatchSource:0}: Error finding container f8f235754e32b8ef63ab92a00602c11708a3bbe020e39e3a32de61d7d7383002: Status 404 returned error can't find the container with id f8f235754e32b8ef63ab92a00602c11708a3bbe020e39e3a32de61d7d7383002
Jan 09 13:46:21 crc kubenswrapper[4919]: I0109 13:46:21.430272 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-68c649d9d-4ppq5"]
Jan 09 13:46:21 crc kubenswrapper[4919]: E0109 13:46:21.430909 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-wnwmg" podUID="7c5b2e5b-6474-46f3-861b-aba8d47c714b"
Jan 09 13:46:21 crc kubenswrapper[4919]: E0109 13:46:21.432007 4919 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tgh4m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-68c649d9d-4ppq5_openstack-operators(2bf404b6-0f77-4a02-a45a-ad46980755cb): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 09 13:46:21 crc kubenswrapper[4919]: E0109 13:46:21.433323 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-4ppq5" podUID="2bf404b6-0f77-4a02-a45a-ad46980755cb"
Jan 09 13:46:21 crc kubenswrapper[4919]: I0109 13:46:21.969420 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-4r7j8" event={"ID":"4d08a973-3a9e-4098-95fd-d314d9f4e1af","Type":"ContainerStarted","Data":"9d8225ffc68c200e4f32254fc802201f4589b706c621fd0386efb6db5c19a597"}
Jan 09 13:46:21 crc kubenswrapper[4919]: I0109 13:46:21.971633 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-cd2dq" event={"ID":"53cc8efc-85ec-4ddf-82c5-c1db01fe8120","Type":"ContainerStarted","Data":"0c4d0bf725ca37f90bf85064d6fd59febb285ad26c2d2d253535a15fa9b70d55"}
Jan 09 13:46:21 crc kubenswrapper[4919]: I0109 13:46:21.973165 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-vvsj9" event={"ID":"7635e70a-4259-4c43-91b7-eae6fc0d3c12","Type":"ContainerStarted","Data":"c4fe9e6aa77540c874250dab182a50860f187df76cbfff7fa73aaec876c47341"}
Jan 09 13:46:21 crc kubenswrapper[4919]: I0109 13:46:21.974876 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-568985c78-r5j45" event={"ID":"33efa14f-00b9-49b4-bc2a-5c0c13d60613","Type":"ContainerStarted","Data":"a4a0b1b8f29fe688b0fe5a4379df60ac3c7bfb8961566573c6ffe395ff821554"}
Jan 09 13:46:21 crc kubenswrapper[4919]: I0109 13:46:21.976120 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-wnwmg" event={"ID":"7c5b2e5b-6474-46f3-861b-aba8d47c714b","Type":"ContainerStarted","Data":"a707fa58f8bba6b36c02238796a96b0e46b95ca4afd1683a1636f55d4a0b7a92"}
Jan 09 13:46:21 crc kubenswrapper[4919]: E0109 13:46:21.978317 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-wnwmg" podUID="7c5b2e5b-6474-46f3-861b-aba8d47c714b"
Jan 09 13:46:21 crc kubenswrapper[4919]: I0109 13:46:21.981067 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-wzww9" event={"ID":"5bd72cd8-70f2-45ef-a451-8468e79eaca9","Type":"ContainerStarted","Data":"5703a21e9e84d0150121063afb3e30eaa1901ac330c8f60bbcbea4538d1ad41e"}
Jan 09 13:46:21 crc kubenswrapper[4919]: I0109 13:46:21.982529 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qg5n7" event={"ID":"e9f24ed0-e850-4906-901d-b23777cf500f","Type":"ContainerStarted","Data":"cf18becd67f41af713b27417e52243f89b23a3979ea128e63f7c0134fe3aa9b7"}
Jan 09 13:46:21 crc kubenswrapper[4919]: E0109 13:46:21.984784 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qg5n7" podUID="e9f24ed0-e850-4906-901d-b23777cf500f"
Jan 09 13:46:21 crc kubenswrapper[4919]: I0109 13:46:21.987202 4919 generic.go:334] "Generic (PLEG): container finished" podID="ceacd617-f87e-4765-9a75-9cde47b80e8d" containerID="8d305180bc3018e0a7d4ecf9c50d6730c701f23859953097b523228ca6a35a64" exitCode=0
Jan 09 13:46:21 crc kubenswrapper[4919]: I0109 13:46:21.988245 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p8wlj" event={"ID":"ceacd617-f87e-4765-9a75-9cde47b80e8d","Type":"ContainerDied","Data":"8d305180bc3018e0a7d4ecf9c50d6730c701f23859953097b523228ca6a35a64"}
Jan 09 13:46:22 crc kubenswrapper[4919]: I0109 13:46:22.000732 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-jl878" event={"ID":"19ebcfcf-3a6a-4c2c-ab15-2239e08bca09","Type":"ContainerStarted","Data":"89d191b76cc0e85a88b572b21a7a7177d7eb5dc1c2109a1880f1040b159d8616"}
Jan 09 13:46:22 crc kubenswrapper[4919]: I0109 13:46:22.002620 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-nk5sx" event={"ID":"4f5bfa64-2b7e-4b30-aedc-56cd44f47032","Type":"ContainerStarted","Data":"1ce6b5afd2012185b87765269b6d27517b22591955268f0517768804d85d0706"}
Jan 09 13:46:22 crc kubenswrapper[4919]: E0109 13:46:22.004468 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:f0ece9a81e4be3dbc1ff752a951970380546d8c0dea910953f862c219444b97a\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-nk5sx" podUID="4f5bfa64-2b7e-4b30-aedc-56cd44f47032"
Jan 09 13:46:22 crc kubenswrapper[4919]: I0109 13:46:22.006462 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-47s64" event={"ID":"782f359d-9941-4528-851a-4db3673cb439","Type":"ContainerStarted","Data":"70d8b1689d6adf89bf146904d58695969c3101d08143754737aa029fd0d473ff"}
Jan 09 13:46:22 crc kubenswrapper[4919]: I0109 13:46:22.011590 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-s46b7" event={"ID":"7716ced4-dfb9-4a5c-936f-65edbf78f5dd","Type":"ContainerStarted","Data":"23ba46288201a0c1f097461b61beae8b92aed120abe3f353481df76b0acf6318"}
Jan 09 13:46:22 crc kubenswrapper[4919]: I0109 13:46:22.022110 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-4ppq5" event={"ID":"2bf404b6-0f77-4a02-a45a-ad46980755cb","Type":"ContainerStarted","Data":"f8f235754e32b8ef63ab92a00602c11708a3bbe020e39e3a32de61d7d7383002"}
Jan 09 13:46:22 crc kubenswrapper[4919]: I0109 13:46:22.023535 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-8kjrk" event={"ID":"58f271ce-d537-4588-ba66-53f08136ee13","Type":"ContainerStarted","Data":"92a934882c867036f4aa2d8cc43d5245f7bca78776657798310b5ef51cd93286"}
Jan 09 13:46:22 crc kubenswrapper[4919]: E0109 13:46:22.024404 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-4ppq5" podUID="2bf404b6-0f77-4a02-a45a-ad46980755cb"
Jan 09 13:46:22 crc kubenswrapper[4919]: I0109 13:46:22.024740 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-jl5xm" event={"ID":"55fe5bfd-cc48-498b-88f7-789a3048a743","Type":"ContainerStarted","Data":"e3b1539b725d6c23b0e27fe06834a364c1b367fd7cd8da5c6972f717b1c587ab"}
pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-8kjrk" podUID="58f271ce-d537-4588-ba66-53f08136ee13" Jan 09 13:46:22 crc kubenswrapper[4919]: E0109 13:46:22.027906 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-jl5xm" podUID="55fe5bfd-cc48-498b-88f7-789a3048a743" Jan 09 13:46:22 crc kubenswrapper[4919]: I0109 13:46:22.029004 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-84x8m" event={"ID":"7c1ac56d-4f45-4102-8336-2cec59c44d9d","Type":"ContainerStarted","Data":"4d9b2f3d0ea3443b1cbd97dcf6813c43700ee5ecbb6734f9902108b1cea996f5"} Jan 09 13:46:22 crc kubenswrapper[4919]: E0109 13:46:22.030977 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:4e3d234c1398039c2593611f7b0fd2a6b284cafb1563e6737876a265b9af42b6\\\"\"" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-84x8m" podUID="7c1ac56d-4f45-4102-8336-2cec59c44d9d" Jan 09 13:46:22 crc kubenswrapper[4919]: I0109 13:46:22.030981 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-9bn9t" event={"ID":"37ea4d3a-1d7d-47b2-8eee-1a7601c2de24","Type":"ContainerStarted","Data":"599fe40c6122c40bf8923451cd707eae5601198b1362b17f6c3a026cb3c10088"} Jan 09 13:46:22 crc kubenswrapper[4919]: I0109 13:46:22.257865 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/af1be546-436f-43ef-b748-22860362f61e-cert\") pod \"infra-operator-controller-manager-6d99759cf-6s6wp\" (UID: \"af1be546-436f-43ef-b748-22860362f61e\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-6s6wp" Jan 09 13:46:22 crc kubenswrapper[4919]: E0109 13:46:22.258152 4919 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 09 13:46:22 crc kubenswrapper[4919]: E0109 13:46:22.258271 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af1be546-436f-43ef-b748-22860362f61e-cert podName:af1be546-436f-43ef-b748-22860362f61e nodeName:}" failed. No retries permitted until 2026-01-09 13:46:26.258248932 +0000 UTC m=+965.806088392 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/af1be546-436f-43ef-b748-22860362f61e-cert") pod "infra-operator-controller-manager-6d99759cf-6s6wp" (UID: "af1be546-436f-43ef-b748-22860362f61e") : secret "infra-operator-webhook-server-cert" not found Jan 09 13:46:22 crc kubenswrapper[4919]: I0109 13:46:22.562505 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/488f8708-4c49-429f-9697-a00b8fadd486-cert\") pod \"openstack-baremetal-operator-controller-manager-75f6ff484-ll94k\" (UID: \"488f8708-4c49-429f-9697-a00b8fadd486\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-75f6ff484-ll94k" Jan 09 13:46:22 crc kubenswrapper[4919]: E0109 13:46:22.562790 4919 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 09 13:46:22 crc kubenswrapper[4919]: E0109 13:46:22.562845 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/488f8708-4c49-429f-9697-a00b8fadd486-cert podName:488f8708-4c49-429f-9697-a00b8fadd486 nodeName:}" failed. No retries permitted until 2026-01-09 13:46:26.562831121 +0000 UTC m=+966.110670571 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/488f8708-4c49-429f-9697-a00b8fadd486-cert") pod "openstack-baremetal-operator-controller-manager-75f6ff484-ll94k" (UID: "488f8708-4c49-429f-9697-a00b8fadd486") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 09 13:46:22 crc kubenswrapper[4919]: I0109 13:46:22.644305 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sxwmh"] Jan 09 13:46:22 crc kubenswrapper[4919]: I0109 13:46:22.647137 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sxwmh" Jan 09 13:46:22 crc kubenswrapper[4919]: I0109 13:46:22.657778 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sxwmh"] Jan 09 13:46:22 crc kubenswrapper[4919]: I0109 13:46:22.771037 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txklq\" (UniqueName: \"kubernetes.io/projected/483268ae-fdeb-41d5-aaa9-20ab30abc131-kube-api-access-txklq\") pod \"redhat-marketplace-sxwmh\" (UID: \"483268ae-fdeb-41d5-aaa9-20ab30abc131\") " pod="openshift-marketplace/redhat-marketplace-sxwmh" Jan 09 13:46:22 crc kubenswrapper[4919]: I0109 13:46:22.771204 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/483268ae-fdeb-41d5-aaa9-20ab30abc131-catalog-content\") pod \"redhat-marketplace-sxwmh\" (UID: \"483268ae-fdeb-41d5-aaa9-20ab30abc131\") " pod="openshift-marketplace/redhat-marketplace-sxwmh" Jan 09 13:46:22 crc kubenswrapper[4919]: I0109 13:46:22.771324 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/483268ae-fdeb-41d5-aaa9-20ab30abc131-utilities\") pod \"redhat-marketplace-sxwmh\" (UID: \"483268ae-fdeb-41d5-aaa9-20ab30abc131\") " pod="openshift-marketplace/redhat-marketplace-sxwmh" Jan 09 13:46:22 crc kubenswrapper[4919]: I0109 13:46:22.872188 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/483268ae-fdeb-41d5-aaa9-20ab30abc131-catalog-content\") pod \"redhat-marketplace-sxwmh\" (UID: \"483268ae-fdeb-41d5-aaa9-20ab30abc131\") " pod="openshift-marketplace/redhat-marketplace-sxwmh" Jan 09 13:46:22 crc kubenswrapper[4919]: I0109 13:46:22.872251 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/483268ae-fdeb-41d5-aaa9-20ab30abc131-utilities\") pod \"redhat-marketplace-sxwmh\" (UID: \"483268ae-fdeb-41d5-aaa9-20ab30abc131\") " pod="openshift-marketplace/redhat-marketplace-sxwmh" Jan 09 13:46:22 crc kubenswrapper[4919]: I0109 13:46:22.872296 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txklq\" (UniqueName: \"kubernetes.io/projected/483268ae-fdeb-41d5-aaa9-20ab30abc131-kube-api-access-txklq\") pod \"redhat-marketplace-sxwmh\" (UID: \"483268ae-fdeb-41d5-aaa9-20ab30abc131\") " pod="openshift-marketplace/redhat-marketplace-sxwmh" Jan 09 13:46:22 crc kubenswrapper[4919]: I0109 13:46:22.872666 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/483268ae-fdeb-41d5-aaa9-20ab30abc131-catalog-content\") pod \"redhat-marketplace-sxwmh\" (UID: \"483268ae-fdeb-41d5-aaa9-20ab30abc131\") " pod="openshift-marketplace/redhat-marketplace-sxwmh" Jan 09 13:46:22 crc kubenswrapper[4919]: I0109 13:46:22.873056 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/483268ae-fdeb-41d5-aaa9-20ab30abc131-utilities\") pod \"redhat-marketplace-sxwmh\" (UID: \"483268ae-fdeb-41d5-aaa9-20ab30abc131\") " pod="openshift-marketplace/redhat-marketplace-sxwmh" Jan 09 13:46:22 crc kubenswrapper[4919]: I0109 13:46:22.905180 4919 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-txklq\" (UniqueName: \"kubernetes.io/projected/483268ae-fdeb-41d5-aaa9-20ab30abc131-kube-api-access-txklq\") pod \"redhat-marketplace-sxwmh\" (UID: \"483268ae-fdeb-41d5-aaa9-20ab30abc131\") " pod="openshift-marketplace/redhat-marketplace-sxwmh" Jan 09 13:46:22 crc kubenswrapper[4919]: I0109 13:46:22.973887 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e77d7646-4198-42f3-ac22-f0974b18a0ab-webhook-certs\") pod \"openstack-operator-controller-manager-5fb94578dd-p4xfn\" (UID: \"e77d7646-4198-42f3-ac22-f0974b18a0ab\") " pod="openstack-operators/openstack-operator-controller-manager-5fb94578dd-p4xfn" Jan 09 13:46:22 crc kubenswrapper[4919]: I0109 13:46:22.974446 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e77d7646-4198-42f3-ac22-f0974b18a0ab-metrics-certs\") pod \"openstack-operator-controller-manager-5fb94578dd-p4xfn\" (UID: \"e77d7646-4198-42f3-ac22-f0974b18a0ab\") " pod="openstack-operators/openstack-operator-controller-manager-5fb94578dd-p4xfn" Jan 09 13:46:22 crc kubenswrapper[4919]: E0109 13:46:22.974076 4919 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 09 13:46:22 crc kubenswrapper[4919]: E0109 13:46:22.974718 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e77d7646-4198-42f3-ac22-f0974b18a0ab-webhook-certs podName:e77d7646-4198-42f3-ac22-f0974b18a0ab nodeName:}" failed. No retries permitted until 2026-01-09 13:46:26.974696568 +0000 UTC m=+966.522536018 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e77d7646-4198-42f3-ac22-f0974b18a0ab-webhook-certs") pod "openstack-operator-controller-manager-5fb94578dd-p4xfn" (UID: "e77d7646-4198-42f3-ac22-f0974b18a0ab") : secret "webhook-server-cert" not found Jan 09 13:46:22 crc kubenswrapper[4919]: E0109 13:46:22.974637 4919 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 09 13:46:22 crc kubenswrapper[4919]: E0109 13:46:22.975130 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e77d7646-4198-42f3-ac22-f0974b18a0ab-metrics-certs podName:e77d7646-4198-42f3-ac22-f0974b18a0ab nodeName:}" failed. No retries permitted until 2026-01-09 13:46:26.975113948 +0000 UTC m=+966.522953398 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e77d7646-4198-42f3-ac22-f0974b18a0ab-metrics-certs") pod "openstack-operator-controller-manager-5fb94578dd-p4xfn" (UID: "e77d7646-4198-42f3-ac22-f0974b18a0ab") : secret "metrics-server-cert" not found Jan 09 13:46:22 crc kubenswrapper[4919]: I0109 13:46:22.975448 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sxwmh" Jan 09 13:46:23 crc kubenswrapper[4919]: E0109 13:46:23.043939 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:1b684c4ca525a279deee45980140d895e264526c5c7e0a6981d6fae6cbcaa420\\\"\"" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-8kjrk" podUID="58f271ce-d537-4588-ba66-53f08136ee13" Jan 09 13:46:23 crc kubenswrapper[4919]: E0109 13:46:23.044662 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-4ppq5" podUID="2bf404b6-0f77-4a02-a45a-ad46980755cb" Jan 09 13:46:23 crc kubenswrapper[4919]: E0109 13:46:23.044875 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:4e3d234c1398039c2593611f7b0fd2a6b284cafb1563e6737876a265b9af42b6\\\"\"" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-84x8m" podUID="7c1ac56d-4f45-4102-8336-2cec59c44d9d" Jan 09 13:46:23 crc kubenswrapper[4919]: E0109 13:46:23.044866 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qg5n7" podUID="e9f24ed0-e850-4906-901d-b23777cf500f" Jan 09 13:46:23 crc kubenswrapper[4919]: E0109 13:46:23.046235 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-jl5xm" podUID="55fe5bfd-cc48-498b-88f7-789a3048a743" Jan 09 13:46:23 crc kubenswrapper[4919]: E0109 13:46:23.046344 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:f0ece9a81e4be3dbc1ff752a951970380546d8c0dea910953f862c219444b97a\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-nk5sx" podUID="4f5bfa64-2b7e-4b30-aedc-56cd44f47032" Jan 09 13:46:23 crc kubenswrapper[4919]: E0109 13:46:23.046998 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-wnwmg" podUID="7c5b2e5b-6474-46f3-861b-aba8d47c714b" Jan 09 13:46:23 crc kubenswrapper[4919]: I0109 13:46:23.720239 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-sxwmh"] Jan 09 13:46:24 crc kubenswrapper[4919]: I0109 13:46:24.076520 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p8wlj" event={"ID":"ceacd617-f87e-4765-9a75-9cde47b80e8d","Type":"ContainerStarted","Data":"300b1a41701269c261475da24a0084827d458bac4ea9f1c4b9920a4bb658b7dd"} Jan 09 13:46:25 crc kubenswrapper[4919]: I0109 13:46:25.129624 4919 generic.go:334] "Generic (PLEG): container finished" podID="ceacd617-f87e-4765-9a75-9cde47b80e8d" containerID="300b1a41701269c261475da24a0084827d458bac4ea9f1c4b9920a4bb658b7dd" exitCode=0 Jan 09 13:46:25 crc kubenswrapper[4919]: I0109 13:46:25.129664 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p8wlj" event={"ID":"ceacd617-f87e-4765-9a75-9cde47b80e8d","Type":"ContainerDied","Data":"300b1a41701269c261475da24a0084827d458bac4ea9f1c4b9920a4bb658b7dd"} Jan 09 13:46:26 crc kubenswrapper[4919]: I0109 13:46:26.355147 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/af1be546-436f-43ef-b748-22860362f61e-cert\") pod \"infra-operator-controller-manager-6d99759cf-6s6wp\" (UID: \"af1be546-436f-43ef-b748-22860362f61e\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-6s6wp" Jan 09 13:46:26 crc kubenswrapper[4919]: E0109 13:46:26.355423 4919 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 09 13:46:26 crc kubenswrapper[4919]: E0109 13:46:26.355527 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af1be546-436f-43ef-b748-22860362f61e-cert podName:af1be546-436f-43ef-b748-22860362f61e nodeName:}" failed. No retries permitted until 2026-01-09 13:46:34.355503098 +0000 UTC m=+973.903342548 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/af1be546-436f-43ef-b748-22860362f61e-cert") pod "infra-operator-controller-manager-6d99759cf-6s6wp" (UID: "af1be546-436f-43ef-b748-22860362f61e") : secret "infra-operator-webhook-server-cert" not found Jan 09 13:46:26 crc kubenswrapper[4919]: I0109 13:46:26.659448 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/488f8708-4c49-429f-9697-a00b8fadd486-cert\") pod \"openstack-baremetal-operator-controller-manager-75f6ff484-ll94k\" (UID: \"488f8708-4c49-429f-9697-a00b8fadd486\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-75f6ff484-ll94k" Jan 09 13:46:26 crc kubenswrapper[4919]: E0109 13:46:26.659792 4919 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 09 13:46:26 crc kubenswrapper[4919]: E0109 13:46:26.659937 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/488f8708-4c49-429f-9697-a00b8fadd486-cert podName:488f8708-4c49-429f-9697-a00b8fadd486 nodeName:}" failed. No retries permitted until 2026-01-09 13:46:34.659904793 +0000 UTC m=+974.207744253 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/488f8708-4c49-429f-9697-a00b8fadd486-cert") pod "openstack-baremetal-operator-controller-manager-75f6ff484-ll94k" (UID: "488f8708-4c49-429f-9697-a00b8fadd486") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 09 13:46:27 crc kubenswrapper[4919]: I0109 13:46:27.065310 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e77d7646-4198-42f3-ac22-f0974b18a0ab-webhook-certs\") pod \"openstack-operator-controller-manager-5fb94578dd-p4xfn\" (UID: \"e77d7646-4198-42f3-ac22-f0974b18a0ab\") " pod="openstack-operators/openstack-operator-controller-manager-5fb94578dd-p4xfn" Jan 09 13:46:27 crc kubenswrapper[4919]: I0109 13:46:27.065356 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e77d7646-4198-42f3-ac22-f0974b18a0ab-metrics-certs\") pod \"openstack-operator-controller-manager-5fb94578dd-p4xfn\" (UID: \"e77d7646-4198-42f3-ac22-f0974b18a0ab\") " pod="openstack-operators/openstack-operator-controller-manager-5fb94578dd-p4xfn" Jan 09 13:46:27 crc kubenswrapper[4919]: E0109 13:46:27.065647 4919 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 09 13:46:27 crc kubenswrapper[4919]: E0109 13:46:27.065763 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e77d7646-4198-42f3-ac22-f0974b18a0ab-metrics-certs podName:e77d7646-4198-42f3-ac22-f0974b18a0ab nodeName:}" failed. No retries permitted until 2026-01-09 13:46:35.065722022 +0000 UTC m=+974.613561462 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e77d7646-4198-42f3-ac22-f0974b18a0ab-metrics-certs") pod "openstack-operator-controller-manager-5fb94578dd-p4xfn" (UID: "e77d7646-4198-42f3-ac22-f0974b18a0ab") : secret "metrics-server-cert" not found Jan 09 13:46:27 crc kubenswrapper[4919]: E0109 13:46:27.065788 4919 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 09 13:46:27 crc kubenswrapper[4919]: E0109 13:46:27.066027 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e77d7646-4198-42f3-ac22-f0974b18a0ab-webhook-certs podName:e77d7646-4198-42f3-ac22-f0974b18a0ab nodeName:}" failed. No retries permitted until 2026-01-09 13:46:35.065920657 +0000 UTC m=+974.613760127 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e77d7646-4198-42f3-ac22-f0974b18a0ab-webhook-certs") pod "openstack-operator-controller-manager-5fb94578dd-p4xfn" (UID: "e77d7646-4198-42f3-ac22-f0974b18a0ab") : secret "webhook-server-cert" not found Jan 09 13:46:34 crc kubenswrapper[4919]: I0109 13:46:34.382221 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/af1be546-436f-43ef-b748-22860362f61e-cert\") pod \"infra-operator-controller-manager-6d99759cf-6s6wp\" (UID: \"af1be546-436f-43ef-b748-22860362f61e\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-6s6wp" Jan 09 13:46:34 crc kubenswrapper[4919]: E0109 13:46:34.382368 4919 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 09 13:46:34 crc kubenswrapper[4919]: E0109 13:46:34.383260 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af1be546-436f-43ef-b748-22860362f61e-cert podName:af1be546-436f-43ef-b748-22860362f61e nodeName:}" failed. No retries permitted until 2026-01-09 13:46:50.383236564 +0000 UTC m=+989.931076014 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/af1be546-436f-43ef-b748-22860362f61e-cert") pod "infra-operator-controller-manager-6d99759cf-6s6wp" (UID: "af1be546-436f-43ef-b748-22860362f61e") : secret "infra-operator-webhook-server-cert" not found Jan 09 13:46:34 crc kubenswrapper[4919]: I0109 13:46:34.688078 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/488f8708-4c49-429f-9697-a00b8fadd486-cert\") pod \"openstack-baremetal-operator-controller-manager-75f6ff484-ll94k\" (UID: \"488f8708-4c49-429f-9697-a00b8fadd486\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-75f6ff484-ll94k" Jan 09 13:46:34 crc kubenswrapper[4919]: I0109 13:46:34.701147 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/488f8708-4c49-429f-9697-a00b8fadd486-cert\") pod \"openstack-baremetal-operator-controller-manager-75f6ff484-ll94k\" (UID: \"488f8708-4c49-429f-9697-a00b8fadd486\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-75f6ff484-ll94k" Jan 09 13:46:34 crc kubenswrapper[4919]: I0109 13:46:34.918069 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-xkfr5" Jan 09 13:46:34 crc kubenswrapper[4919]: I0109 13:46:34.926314 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-75f6ff484-ll94k" Jan 09 13:46:35 crc kubenswrapper[4919]: I0109 13:46:35.108900 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e77d7646-4198-42f3-ac22-f0974b18a0ab-webhook-certs\") pod \"openstack-operator-controller-manager-5fb94578dd-p4xfn\" (UID: \"e77d7646-4198-42f3-ac22-f0974b18a0ab\") " pod="openstack-operators/openstack-operator-controller-manager-5fb94578dd-p4xfn" Jan 09 13:46:35 crc kubenswrapper[4919]: I0109 13:46:35.108983 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e77d7646-4198-42f3-ac22-f0974b18a0ab-metrics-certs\") pod \"openstack-operator-controller-manager-5fb94578dd-p4xfn\" (UID: \"e77d7646-4198-42f3-ac22-f0974b18a0ab\") " pod="openstack-operators/openstack-operator-controller-manager-5fb94578dd-p4xfn" Jan 09 13:46:35 crc kubenswrapper[4919]: I0109 13:46:35.134436 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e77d7646-4198-42f3-ac22-f0974b18a0ab-metrics-certs\") pod \"openstack-operator-controller-manager-5fb94578dd-p4xfn\" (UID: \"e77d7646-4198-42f3-ac22-f0974b18a0ab\") " pod="openstack-operators/openstack-operator-controller-manager-5fb94578dd-p4xfn" Jan 09 13:46:35 crc kubenswrapper[4919]: I0109 13:46:35.314845 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e77d7646-4198-42f3-ac22-f0974b18a0ab-webhook-certs\") pod \"openstack-operator-controller-manager-5fb94578dd-p4xfn\" (UID: \"e77d7646-4198-42f3-ac22-f0974b18a0ab\") " pod="openstack-operators/openstack-operator-controller-manager-5fb94578dd-p4xfn" Jan 09 13:46:35 crc kubenswrapper[4919]: I0109 13:46:35.370188 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-vkkcj" Jan 09 13:46:35 crc kubenswrapper[4919]: I0109 13:46:35.377723 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5fb94578dd-p4xfn" Jan 09 13:46:38 crc kubenswrapper[4919]: I0109 13:46:38.753996 4919 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 09 13:46:42 crc kubenswrapper[4919]: E0109 13:46:42.044076 4919 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:174acf70c084144827fb8f96c5401a0a8def953bf0ff8929dccd629a550491b7" Jan 09 13:46:42 crc kubenswrapper[4919]: E0109 13:46:42.046768 4919 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:174acf70c084144827fb8f96c5401a0a8def953bf0ff8929dccd629a550491b7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4csgr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-78979fc445-m56bk_openstack-operators(276f41de-c875-40be-816a-84eb02212fda): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 09 13:46:42 crc kubenswrapper[4919]: E0109 13:46:42.048181 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-m56bk" 
podUID="276f41de-c875-40be-816a-84eb02212fda" Jan 09 13:46:42 crc kubenswrapper[4919]: E0109 13:46:42.403318 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:174acf70c084144827fb8f96c5401a0a8def953bf0ff8929dccd629a550491b7\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-m56bk" podUID="276f41de-c875-40be-816a-84eb02212fda" Jan 09 13:46:42 crc kubenswrapper[4919]: E0109 13:46:42.908044 4919 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:afb66a0f8e1aa057888f7c304cc34cfea711805d9d1f05798aceb4029fef2989" Jan 09 13:46:42 crc kubenswrapper[4919]: E0109 13:46:42.908560 4919 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:afb66a0f8e1aa057888f7c304cc34cfea711805d9d1f05798aceb4029fef2989,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-64s9v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-f6f74d6db-h6cp9_openstack-operators(d0081380-9d2e-40bb-8cc9-f124d4fbfd25): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 09 13:46:42 crc kubenswrapper[4919]: E0109 13:46:42.910018 4919 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-h6cp9" podUID="d0081380-9d2e-40bb-8cc9-f124d4fbfd25" Jan 09 13:46:42 crc kubenswrapper[4919]: W0109 13:46:42.916674 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod483268ae_fdeb_41d5_aaa9_20ab30abc131.slice/crio-5da9e81599495187446416930e790044d50c2494586405209d035d4ffe6633f2 WatchSource:0}: Error finding container 5da9e81599495187446416930e790044d50c2494586405209d035d4ffe6633f2: Status 404 returned error can't find the container with id 5da9e81599495187446416930e790044d50c2494586405209d035d4ffe6633f2 Jan 09 13:46:43 crc kubenswrapper[4919]: I0109 13:46:43.421591 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sxwmh" event={"ID":"483268ae-fdeb-41d5-aaa9-20ab30abc131","Type":"ContainerStarted","Data":"5da9e81599495187446416930e790044d50c2494586405209d035d4ffe6633f2"} Jan 09 13:46:43 crc kubenswrapper[4919]: E0109 13:46:43.423964 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:afb66a0f8e1aa057888f7c304cc34cfea711805d9d1f05798aceb4029fef2989\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-h6cp9" podUID="d0081380-9d2e-40bb-8cc9-f124d4fbfd25" Jan 09 13:46:43 crc kubenswrapper[4919]: E0109 13:46:43.858121 4919 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:573d7dba212cbc32101496a7cbe01e391af9891bed3bec717f16bed4d6c23e04" Jan 09 13:46:43 crc kubenswrapper[4919]: E0109 13:46:43.858313 4919 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:573d7dba212cbc32101496a7cbe01e391af9891bed3bec717f16bed4d6c23e04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kj4qd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-658dd65b86-vvsj9_openstack-operators(7635e70a-4259-4c43-91b7-eae6fc0d3c12): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 09 13:46:43 crc kubenswrapper[4919]: E0109 13:46:43.859549 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-vvsj9" podUID="7635e70a-4259-4c43-91b7-eae6fc0d3c12" Jan 09 13:46:44 crc kubenswrapper[4919]: E0109 13:46:44.428833 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:573d7dba212cbc32101496a7cbe01e391af9891bed3bec717f16bed4d6c23e04\\\"\"" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-vvsj9" podUID="7635e70a-4259-4c43-91b7-eae6fc0d3c12" Jan 09 13:46:46 crc kubenswrapper[4919]: E0109 13:46:46.537023 4919 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:879d3d679b58ae84419b7907ad092ad4d24bcc9222ce621ce464fd0fea347b0c" Jan 09 13:46:46 crc kubenswrapper[4919]: E0109 13:46:46.537685 4919 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:879d3d679b58ae84419b7907ad092ad4d24bcc9222ce621ce464fd0fea347b0c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j29l5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-568985c78-r5j45_openstack-operators(33efa14f-00b9-49b4-bc2a-5c0c13d60613): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 09 13:46:46 crc kubenswrapper[4919]: E0109 13:46:46.538896 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-568985c78-r5j45" podUID="33efa14f-00b9-49b4-bc2a-5c0c13d60613" Jan 09 13:46:47 crc kubenswrapper[4919]: E0109 13:46:47.003903 4919 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670" Jan 09 13:46:47 crc kubenswrapper[4919]: E0109 13:46:47.004106 4919 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-69xjq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-5fbbf8b6cc-jl878_openstack-operators(19ebcfcf-3a6a-4c2c-ab15-2239e08bca09): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 09 13:46:47 crc kubenswrapper[4919]: E0109 13:46:47.005334 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-jl878" podUID="19ebcfcf-3a6a-4c2c-ab15-2239e08bca09" Jan 09 13:46:47 crc kubenswrapper[4919]: E0109 13:46:47.447682 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670\\\"\"" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-jl878" podUID="19ebcfcf-3a6a-4c2c-ab15-2239e08bca09" Jan 09 13:46:47 crc kubenswrapper[4919]: E0109 13:46:47.493603 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:879d3d679b58ae84419b7907ad092ad4d24bcc9222ce621ce464fd0fea347b0c\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-568985c78-r5j45" podUID="33efa14f-00b9-49b4-bc2a-5c0c13d60613" Jan 09 13:46:47 crc kubenswrapper[4919]: E0109 13:46:47.931405 4919 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:c846ab4a49272557884db6b976f979e6b9dce1aa73e5eb7872b4472f44602a1c" Jan 09 13:46:47 crc kubenswrapper[4919]: E0109 13:46:47.931639 4919 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:c846ab4a49272557884db6b976f979e6b9dce1aa73e5eb7872b4472f44602a1c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2c6qv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-598945d5b8-cd2dq_openstack-operators(53cc8efc-85ec-4ddf-82c5-c1db01fe8120): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 09 13:46:47 crc kubenswrapper[4919]: E0109 13:46:47.933060 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-cd2dq" podUID="53cc8efc-85ec-4ddf-82c5-c1db01fe8120" Jan 09 13:46:48 crc kubenswrapper[4919]: E0109 13:46:48.553191 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:c846ab4a49272557884db6b976f979e6b9dce1aa73e5eb7872b4472f44602a1c\\\"\"" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-cd2dq" podUID="53cc8efc-85ec-4ddf-82c5-c1db01fe8120" Jan 09 13:46:50 crc kubenswrapper[4919]: I0109 13:46:50.461354 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/af1be546-436f-43ef-b748-22860362f61e-cert\") pod \"infra-operator-controller-manager-6d99759cf-6s6wp\" (UID: \"af1be546-436f-43ef-b748-22860362f61e\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-6s6wp" Jan 09 13:46:50 crc 
Jan 09 13:46:50 crc kubenswrapper[4919]: I0109 13:46:50.472003 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/af1be546-436f-43ef-b748-22860362f61e-cert\") pod \"infra-operator-controller-manager-6d99759cf-6s6wp\" (UID: \"af1be546-436f-43ef-b748-22860362f61e\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-6s6wp"
Jan 09 13:46:50 crc kubenswrapper[4919]: I0109 13:46:50.704581 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-lgxt2"
Jan 09 13:46:50 crc kubenswrapper[4919]: I0109 13:46:50.713028 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-6s6wp"
Jan 09 13:46:51 crc kubenswrapper[4919]: I0109 13:46:51.247561 4919 patch_prober.go:28] interesting pod/machine-config-daemon-9m5lv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 09 13:46:51 crc kubenswrapper[4919]: I0109 13:46:51.247620 4919 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 09 13:46:51 crc kubenswrapper[4919]: I0109 13:46:51.247665 4919 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv"
Jan 09 13:46:51 crc kubenswrapper[4919]: I0109 13:46:51.248343 4919 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"13e0d2bed4a1518fec6fb07c1bdfa49ee9c21e3a9f0774ed8f0f599b03f0f58f"} pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 09 13:46:51 crc kubenswrapper[4919]: I0109 13:46:51.248392 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" containerID="cri-o://13e0d2bed4a1518fec6fb07c1bdfa49ee9c21e3a9f0774ed8f0f599b03f0f58f" gracePeriod=600
Jan 09 13:46:51 crc kubenswrapper[4919]: I0109 13:46:51.474576 4919 generic.go:334] "Generic (PLEG): container finished" podID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerID="13e0d2bed4a1518fec6fb07c1bdfa49ee9c21e3a9f0774ed8f0f599b03f0f58f" exitCode=0
Jan 09 13:46:51 crc kubenswrapper[4919]: I0109 13:46:51.474631 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" event={"ID":"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c","Type":"ContainerDied","Data":"13e0d2bed4a1518fec6fb07c1bdfa49ee9c21e3a9f0774ed8f0f599b03f0f58f"}
Jan 09 13:46:51 crc kubenswrapper[4919]: I0109 13:46:51.474680 4919 scope.go:117] "RemoveContainer" containerID="e3fae3f1f51df5d9026154c14d04831020e0e9d6f7bf4af54d35cedb600d3044"
Jan 09 13:46:52 crc kubenswrapper[4919]: E0109 13:46:52.589187 4919 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:900050d3501c0785b227db34b89883efe68247816e5c7427cacb74f8aa10605a"
Jan 09 13:46:52 crc kubenswrapper[4919]: E0109 13:46:52.589426 4919 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:900050d3501c0785b227db34b89883efe68247816e5c7427cacb74f8aa10605a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hnbn8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-66f8b87655-wxt2z_openstack-operators(b46937ef-2f83-4864-b0d4-5464ed82e1b8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 09 13:46:52 crc kubenswrapper[4919]: E0109 13:46:52.591321 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-wxt2z" podUID="b46937ef-2f83-4864-b0d4-5464ed82e1b8"
Jan 09 13:46:53 crc kubenswrapper[4919]: E0109 13:46:53.265321 4919 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:c10647131e6fa6afeb11ea28e513b60f22dbfbb4ddc3727850b1fe5799890c41"
Jan 09 13:46:53 crc kubenswrapper[4919]: E0109 13:46:53.265531 4919 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:c10647131e6fa6afeb11ea28e513b60f22dbfbb4ddc3727850b1fe5799890c41,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-29hgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-7b88bfc995-9bn9t_openstack-operators(37ea4d3a-1d7d-47b2-8eee-1a7601c2de24): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 09 13:46:53 crc kubenswrapper[4919]: E0109 13:46:53.266790 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-9bn9t" podUID="37ea4d3a-1d7d-47b2-8eee-1a7601c2de24"
Jan 09 13:46:53 crc kubenswrapper[4919]: E0109 13:46:53.556185 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:c10647131e6fa6afeb11ea28e513b60f22dbfbb4ddc3727850b1fe5799890c41\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-9bn9t" podUID="37ea4d3a-1d7d-47b2-8eee-1a7601c2de24"
Jan 09 13:46:53 crc kubenswrapper[4919]: E0109 13:46:53.556383 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:900050d3501c0785b227db34b89883efe68247816e5c7427cacb74f8aa10605a\\\"\"" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-wxt2z" podUID="b46937ef-2f83-4864-b0d4-5464ed82e1b8"
Jan 09 13:46:56 crc kubenswrapper[4919]: E0109 13:46:56.830223 4919 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:f0ece9a81e4be3dbc1ff752a951970380546d8c0dea910953f862c219444b97a"
Jan 09 13:46:56 crc kubenswrapper[4919]: E0109 13:46:56.831188 4919 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:f0ece9a81e4be3dbc1ff752a951970380546d8c0dea910953f862c219444b97a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xbz8g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-9dbdf6486-nk5sx_openstack-operators(4f5bfa64-2b7e-4b30-aedc-56cd44f47032): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 09 13:46:56 crc kubenswrapper[4919]: E0109 13:46:56.832366 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-nk5sx" podUID="4f5bfa64-2b7e-4b30-aedc-56cd44f47032"
Jan 09 13:46:58 crc kubenswrapper[4919]: I0109 13:46:58.343638 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5fb94578dd-p4xfn"]
Jan 09 13:46:58 crc kubenswrapper[4919]: I0109 13:46:58.412418 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-75f6ff484-ll94k"]
Jan 09 13:46:58 crc kubenswrapper[4919]: I0109 13:46:58.427024 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-6d99759cf-6s6wp"]
Jan 09 13:46:58 crc kubenswrapper[4919]: W0109 13:46:58.494320 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod488f8708_4c49_429f_9697_a00b8fadd486.slice/crio-ce9a3c070eeb2293d3d458639f61906299e7bb0baa7587cd0c5e04be56309f34 WatchSource:0}: Error finding container ce9a3c070eeb2293d3d458639f61906299e7bb0baa7587cd0c5e04be56309f34: Status 404 returned error can't find the container with id ce9a3c070eeb2293d3d458639f61906299e7bb0baa7587cd0c5e04be56309f34
Jan 09 13:46:58 crc kubenswrapper[4919]: I0109 13:46:58.601483 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-47s64" event={"ID":"782f359d-9941-4528-851a-4db3673cb439","Type":"ContainerStarted","Data":"44d42ee0b7f60dc082bec64b4e53dcdb7d94eaa3c27cdbbac5fb231dbc9563d3"}
Jan 09 13:46:58 crc kubenswrapper[4919]: I0109 13:46:58.601939 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-47s64"
Jan 09 13:46:58 crc kubenswrapper[4919]: I0109 13:46:58.621150 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" event={"ID":"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c","Type":"ContainerStarted","Data":"c739bd50573e0da995d79681df6e33456878c7cb345ea26ee42a16e540a49209"}
Jan 09 13:46:58 crc kubenswrapper[4919]: I0109 13:46:58.626630 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-47s64" podStartSLOduration=7.020873468 podStartE2EDuration="40.626575111s" podCreationTimestamp="2026-01-09 13:46:18 +0000 UTC" firstStartedPulling="2026-01-09 13:46:21.209921447 +0000 UTC m=+960.757760897" lastFinishedPulling="2026-01-09 13:46:54.81562309 +0000 UTC m=+994.363462540" observedRunningTime="2026-01-09 13:46:58.618181344 +0000 UTC m=+998.166020794" watchObservedRunningTime="2026-01-09 13:46:58.626575111 +0000 UTC m=+998.174414561"
Jan 09 13:46:58 crc kubenswrapper[4919]: I0109 13:46:58.628424 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-6s6wp" event={"ID":"af1be546-436f-43ef-b748-22860362f61e","Type":"ContainerStarted","Data":"84ddb2376ae43d4ff192f51d6ea5e6139de1553f670858a296fc6ccf22fbcbd0"}
Jan 09 13:46:58 crc kubenswrapper[4919]: I0109 13:46:58.638699 4919 generic.go:334] "Generic (PLEG): container finished" podID="483268ae-fdeb-41d5-aaa9-20ab30abc131" containerID="acbc87852254becc80cd145b6a74807a6b6b8abae07e0fca95495fbc4c310f16" exitCode=0
Jan 09 13:46:58 crc kubenswrapper[4919]: I0109 13:46:58.638778 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sxwmh" event={"ID":"483268ae-fdeb-41d5-aaa9-20ab30abc131","Type":"ContainerDied","Data":"acbc87852254becc80cd145b6a74807a6b6b8abae07e0fca95495fbc4c310f16"}
Jan 09 13:46:58 crc kubenswrapper[4919]: I0109 13:46:58.651704 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-wzww9" event={"ID":"5bd72cd8-70f2-45ef-a451-8468e79eaca9","Type":"ContainerStarted","Data":"84f2e5ea180ee365b7d1f80dec76409ecd8c690dca599f3a7ca9238b8509b072"}
Jan 09 13:46:58 crc kubenswrapper[4919]: I0109 13:46:58.652827 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-wzww9"
Jan 09 13:46:58 crc kubenswrapper[4919]: I0109 13:46:58.660179 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5fb94578dd-p4xfn" event={"ID":"e77d7646-4198-42f3-ac22-f0974b18a0ab","Type":"ContainerStarted","Data":"7b3ff911cdce651ae7d961d73cb51c73bad11992e465a0a5d8df1992dab3ca7a"}
Jan 09 13:46:58 crc kubenswrapper[4919]: I0109 13:46:58.662360 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-75f6ff484-ll94k" event={"ID":"488f8708-4c49-429f-9697-a00b8fadd486","Type":"ContainerStarted","Data":"ce9a3c070eeb2293d3d458639f61906299e7bb0baa7587cd0c5e04be56309f34"}
Jan 09 13:46:58 crc kubenswrapper[4919]: I0109 13:46:58.663816 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-f2drg" event={"ID":"60feaa4f-ca73-4e59-a85f-c17132f8f708","Type":"ContainerStarted","Data":"e9dd94e6ff018093a42ec18488e1379ed5b631923b06a4cf759e35b245609b14"}
Jan 09 13:46:58 crc kubenswrapper[4919]: I0109 13:46:58.664404 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-f2drg"
Jan 09 13:46:58 crc kubenswrapper[4919]: I0109 13:46:58.690358 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-f2drg" podStartSLOduration=4.757860302 podStartE2EDuration="40.690340273s" podCreationTimestamp="2026-01-09 13:46:18 +0000 UTC" firstStartedPulling="2026-01-09 13:46:20.88946661 +0000 UTC m=+960.437306060" lastFinishedPulling="2026-01-09 13:46:56.821946581 +0000 UTC m=+996.369786031" observedRunningTime="2026-01-09 13:46:58.686047707 +0000 UTC m=+998.233887157" watchObservedRunningTime="2026-01-09 13:46:58.690340273 +0000 UTC m=+998.238179723"
Jan 09 13:46:58 crc kubenswrapper[4919]: I0109 13:46:58.736031 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-wzww9" podStartSLOduration=5.021073644 podStartE2EDuration="40.736010119s" podCreationTimestamp="2026-01-09 13:46:18 +0000 UTC" firstStartedPulling="2026-01-09 13:46:21.107813456 +0000 UTC m=+960.655652906" lastFinishedPulling="2026-01-09 13:46:56.822749931 +0000 UTC m=+996.370589381" observedRunningTime="2026-01-09 13:46:58.725434009 +0000 UTC m=+998.273273469" watchObservedRunningTime="2026-01-09 13:46:58.736010119 +0000 UTC m=+998.283849569"
Jan 09 13:46:59 crc kubenswrapper[4919]: I0109 13:46:59.904076 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-wnwmg" event={"ID":"7c5b2e5b-6474-46f3-861b-aba8d47c714b","Type":"ContainerStarted","Data":"0d8e06239b0a806d871e489aef06177d83ddd9b4a262bba57b889f11065aa71a"}
Jan 09 13:46:59 crc kubenswrapper[4919]: I0109 13:46:59.905718 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-wnwmg"
Jan 09 13:46:59 crc kubenswrapper[4919]: I0109 13:46:59.913555 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qg5n7" event={"ID":"e9f24ed0-e850-4906-901d-b23777cf500f","Type":"ContainerStarted","Data":"081fe29ed27fd7c21b8606ec510af1f1257727306be52999b503f3f4119659be"}
Jan 09 13:46:59 crc kubenswrapper[4919]: I0109 13:46:59.915416 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-84x8m" event={"ID":"7c1ac56d-4f45-4102-8336-2cec59c44d9d","Type":"ContainerStarted","Data":"f0572fa0fd6cb585cfc12ee4b98bc7d4ff87dbb8dd6e2e28e3b6fe492d1899fc"}
Jan 09 13:46:59 crc kubenswrapper[4919]: I0109 13:46:59.915616 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-84x8m"
Jan 09 13:46:59 crc kubenswrapper[4919]: I0109 13:46:59.917458 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-8kjrk" event={"ID":"58f271ce-d537-4588-ba66-53f08136ee13","Type":"ContainerStarted","Data":"d5112a5d6686b093eafce6cae3cb2c444af236543702cf436aa5ebc1bf34f210"}
Jan 09 13:46:59 crc kubenswrapper[4919]: I0109 13:46:59.917690 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-8kjrk"
Jan 09 13:46:59 crc kubenswrapper[4919]: I0109 13:46:59.918544 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-s46b7" event={"ID":"7716ced4-dfb9-4a5c-936f-65edbf78f5dd","Type":"ContainerStarted","Data":"ad26f482d26000b1baed39abe6ac4c84816362e0a35c11d1915a32ae65a2aba2"}
Jan 09 13:46:59 crc kubenswrapper[4919]: I0109 13:46:59.918671 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-s46b7"
Jan 09 13:46:59 crc kubenswrapper[4919]: I0109 13:46:59.919554 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-vvsj9" event={"ID":"7635e70a-4259-4c43-91b7-eae6fc0d3c12","Type":"ContainerStarted","Data":"23ce810d62fcff62850a4008386f393821e963ba1a5d806fc0c53b4e5e3b0e9e"}
Jan 09 13:46:59 crc kubenswrapper[4919]: I0109 13:46:59.919907 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-vvsj9"
Jan 09 13:46:59 crc kubenswrapper[4919]: I0109 13:46:59.932472 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-m56bk" event={"ID":"276f41de-c875-40be-816a-84eb02212fda","Type":"ContainerStarted","Data":"fe6872e31e6e0e0415a500caba0d17d789f28ee5553f35351656a25c153047d5"}
Jan 09 13:46:59 crc kubenswrapper[4919]: I0109 13:46:59.932759 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-wnwmg" podStartSLOduration=5.966559005 podStartE2EDuration="41.932738699s" podCreationTimestamp="2026-01-09 13:46:18 +0000 UTC" firstStartedPulling="2026-01-09 13:46:21.428246694 +0000 UTC m=+960.976086144" lastFinishedPulling="2026-01-09 13:46:57.394426388 +0000 UTC m=+996.942265838" observedRunningTime="2026-01-09 13:46:59.93155111 +0000 UTC m=+999.479390560" watchObservedRunningTime="2026-01-09 13:46:59.932738699 +0000 UTC m=+999.480578149"
Jan 09 13:46:59 crc kubenswrapper[4919]: I0109 13:46:59.933013 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-m56bk"
Jan 09 13:46:59 crc kubenswrapper[4919]: I0109 13:46:59.934711 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5fb94578dd-p4xfn" event={"ID":"e77d7646-4198-42f3-ac22-f0974b18a0ab","Type":"ContainerStarted","Data":"2888adaf4baf0923e06cf4203424f214610c4ca7f90a0300c5b47d254309c6f0"}
Jan 09 13:46:59 crc kubenswrapper[4919]: I0109 13:46:59.935072 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5fb94578dd-p4xfn"
Jan 09 13:46:59 crc kubenswrapper[4919]: I0109 13:46:59.944975 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-h6cp9" event={"ID":"d0081380-9d2e-40bb-8cc9-f124d4fbfd25","Type":"ContainerStarted","Data":"4e21be56a72bf9c3dc875db71b0cd0c1b70d66f33365444fccc4021a3795f507"}
Jan 09 13:46:59 crc kubenswrapper[4919]: I0109 13:46:59.945832 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-h6cp9"
Jan 09 13:46:59 crc kubenswrapper[4919]: I0109 13:46:59.966483 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p8wlj" event={"ID":"ceacd617-f87e-4765-9a75-9cde47b80e8d","Type":"ContainerStarted","Data":"94c52a925b65b1a2ed42fe43639bd62fad226648d791e7c3bdd8aa98ca9ee0b1"}
Jan 09 13:46:59 crc kubenswrapper[4919]: I0109 13:46:59.968268 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-jl5xm" event={"ID":"55fe5bfd-cc48-498b-88f7-789a3048a743","Type":"ContainerStarted","Data":"29f67c999ea2a081d1f0b76ff0b91745e80365700979cc377f32728fe07038e1"}
Jan 09 13:46:59 crc kubenswrapper[4919]: I0109 13:46:59.968630 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-jl5xm"
Jan 09 13:46:59 crc kubenswrapper[4919]: I0109 13:46:59.969630 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-4r7j8" event={"ID":"4d08a973-3a9e-4098-95fd-d314d9f4e1af","Type":"ContainerStarted","Data":"a6ec6e55185c5dd662ebb97c30aa79fc1e0d7a03834b904d1cf9ff28ca061456"}
Jan 09 13:46:59 crc kubenswrapper[4919]: I0109 13:46:59.969965 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-4r7j8"
Jan 09 13:46:59 crc kubenswrapper[4919]: I0109 13:46:59.971244 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-4ppq5" event={"ID":"2bf404b6-0f77-4a02-a45a-ad46980755cb","Type":"ContainerStarted","Data":"d4975f428f4392b4e6e22248a54a8c4e3d46a6c716105e136ddfac1085362763"}
Jan 09 13:46:59 crc kubenswrapper[4919]: I0109 13:46:59.971552 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-4ppq5"
Jan 09 13:47:00 crc kubenswrapper[4919]: I0109 13:46:59.982746 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-vvsj9" podStartSLOduration=5.015284533 podStartE2EDuration="41.982722272s" podCreationTimestamp="2026-01-09 13:46:18 +0000 UTC" firstStartedPulling="2026-01-09 13:46:20.990404542 +0000 UTC m=+960.538243992" lastFinishedPulling="2026-01-09 13:46:57.957842281 +0000 UTC m=+997.505681731" observedRunningTime="2026-01-09 13:46:59.977687917 +0000 UTC m=+999.525527377" watchObservedRunningTime="2026-01-09 13:46:59.982722272 +0000 UTC m=+999.530561722"
Jan 09 13:47:00 crc kubenswrapper[4919]: I0109 13:47:00.042905 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-84x8m" podStartSLOduration=5.419086727 podStartE2EDuration="42.042890655s" podCreationTimestamp="2026-01-09 13:46:18 +0000 UTC" firstStartedPulling="2026-01-09 13:46:21.233993936 +0000 UTC m=+960.781833386" lastFinishedPulling="2026-01-09 13:46:57.857797864 +0000 UTC m=+997.405637314" observedRunningTime="2026-01-09 13:47:00.040934037 +0000 UTC m=+999.588773487" watchObservedRunningTime="2026-01-09 13:47:00.042890655 +0000 UTC m=+999.590730095"
Jan 09 13:47:00 crc kubenswrapper[4919]: I0109 13:47:00.045413 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-s46b7" podStartSLOduration=6.159013604 podStartE2EDuration="42.045406597s" podCreationTimestamp="2026-01-09 13:46:18 +0000 UTC" firstStartedPulling="2026-01-09 13:46:20.9351991 +0000 UTC m=+960.483038550" lastFinishedPulling="2026-01-09 13:46:56.821592093 +0000 UTC m=+996.369431543" observedRunningTime="2026-01-09 13:47:00.01792572 +0000 UTC m=+999.565765170" watchObservedRunningTime="2026-01-09 13:47:00.045406597 +0000 UTC m=+999.593246037"
Jan 09 13:47:00 crc kubenswrapper[4919]: I0109 13:47:00.088095 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qg5n7" podStartSLOduration=4.356705076 podStartE2EDuration="41.088073919s" podCreationTimestamp="2026-01-09 13:46:19 +0000 UTC" firstStartedPulling="2026-01-09 13:46:21.254098009 +0000 UTC m=+960.801937459" lastFinishedPulling="2026-01-09 13:46:57.985466852 +0000 UTC m=+997.533306302" observedRunningTime="2026-01-09 13:47:00.086796408 +0000 UTC m=+999.634635858" watchObservedRunningTime="2026-01-09 13:47:00.088073919 +0000 UTC m=+999.635913369"
Jan 09 13:47:00 crc kubenswrapper[4919]: I0109 13:47:00.110090 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-8kjrk" podStartSLOduration=5.581081402 podStartE2EDuration="42.110069852s" podCreationTimestamp="2026-01-09 13:46:18 +0000 UTC" firstStartedPulling="2026-01-09 13:46:21.417493111 +0000 UTC m=+960.965332561" lastFinishedPulling="2026-01-09 13:46:57.946481561 +0000 UTC m=+997.494321011" observedRunningTime="2026-01-09 13:47:00.108679857 +0000 UTC m=+999.656519297" watchObservedRunningTime="2026-01-09 13:47:00.110069852 +0000 UTC m=+999.657909302"
Jan 09 13:47:00 crc kubenswrapper[4919]: I0109 13:47:00.149684 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-4r7j8" podStartSLOduration=8.275357257 podStartE2EDuration="42.149660978s" podCreationTimestamp="2026-01-09 13:46:18 +0000 UTC" firstStartedPulling="2026-01-09 13:46:20.941315599 +0000 UTC m=+960.489155049" lastFinishedPulling="2026-01-09 13:46:54.81561932 +0000 UTC m=+994.363458770" observedRunningTime="2026-01-09 13:47:00.139670122 +0000 UTC m=+999.687509572" watchObservedRunningTime="2026-01-09 13:47:00.149660978 +0000 UTC m=+999.697500428"
Jan 09 13:47:00 crc kubenswrapper[4919]: I0109 13:47:00.188900 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-p8wlj" podStartSLOduration=6.24023717 podStartE2EDuration="42.188879315s" podCreationTimestamp="2026-01-09 13:46:18 +0000 UTC" firstStartedPulling="2026-01-09 13:46:21.989706785 +0000 UTC m=+961.537546235" lastFinishedPulling="2026-01-09 13:46:57.93834893 +0000 UTC m=+997.486188380" observedRunningTime="2026-01-09 13:47:00.187912621 +0000 UTC m=+999.735752071" watchObservedRunningTime="2026-01-09 13:47:00.188879315 +0000 UTC m=+999.736718765"
Jan 09 13:47:00 crc kubenswrapper[4919]: I0109 13:47:00.258082 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-h6cp9" podStartSLOduration=5.136670612 podStartE2EDuration="42.258058281s" podCreationTimestamp="2026-01-09 13:46:18 +0000 UTC" firstStartedPulling="2026-01-09 13:46:20.835901918 +0000 UTC m=+960.383741368" lastFinishedPulling="2026-01-09 13:46:57.957289587 +0000 UTC m=+997.505129037" observedRunningTime="2026-01-09 13:47:00.25560714 +0000 UTC m=+999.803446600" watchObservedRunningTime="2026-01-09 13:47:00.258058281 +0000 UTC m=+999.805897731"
Jan 09 13:47:00 crc kubenswrapper[4919]: I0109 13:47:00.324312 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-jl5xm" podStartSLOduration=6.148339942 podStartE2EDuration="42.324294954s" podCreationTimestamp="2026-01-09 13:46:18 +0000 UTC" firstStartedPulling="2026-01-09 13:46:21.218513257 +0000 UTC m=+960.766352707" lastFinishedPulling="2026-01-09 13:46:57.394468269 +0000 UTC m=+996.942307719" observedRunningTime="2026-01-09 13:47:00.283935999 +0000 UTC m=+999.831775449" watchObservedRunningTime="2026-01-09 13:47:00.324294954 +0000 UTC m=+999.872134414"
Jan 09 13:47:00 crc kubenswrapper[4919]: I0109 13:47:00.324404 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-4ppq5" podStartSLOduration=5.945152806 podStartE2EDuration="42.324400197s" podCreationTimestamp="2026-01-09 13:46:18 +0000 UTC" firstStartedPulling="2026-01-09 13:46:21.431861982 +0000 UTC m=+960.979701432" lastFinishedPulling="2026-01-09 13:46:57.811109373 +0000 UTC m=+997.358948823" observedRunningTime="2026-01-09 13:47:00.321785042 +0000 UTC m=+999.869624492" watchObservedRunningTime="2026-01-09 13:47:00.324400197 +0000 UTC m=+999.872239647"
Jan 09 13:47:00 crc kubenswrapper[4919]: I0109 13:47:00.361025 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-m56bk" podStartSLOduration=5.090991709 podStartE2EDuration="42.361005189s" podCreationTimestamp="2026-01-09 13:46:18 +0000 UTC" firstStartedPulling="2026-01-09 13:46:20.686841627 +0000 UTC m=+960.234681077" lastFinishedPulling="2026-01-09 13:46:57.956855107 +0000 UTC m=+997.504694557" observedRunningTime="2026-01-09 13:47:00.36064451 +0000 UTC m=+999.908483960" watchObservedRunningTime="2026-01-09 13:47:00.361005189 +0000 UTC m=+999.908844639"
Jan 09 13:47:00 crc kubenswrapper[4919]: I0109 13:47:00.445918 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-5fb94578dd-p4xfn" podStartSLOduration=42.445902692 podStartE2EDuration="42.445902692s" podCreationTimestamp="2026-01-09 13:46:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:47:00.444647671 +0000 UTC m=+999.992487131" watchObservedRunningTime="2026-01-09 13:47:00.445902692 +0000 UTC m=+999.993742142"
Jan 09 13:47:00 crc kubenswrapper[4919]: I0109 13:47:00.992687 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sxwmh" event={"ID":"483268ae-fdeb-41d5-aaa9-20ab30abc131","Type":"ContainerStarted","Data":"dfc682044f296480d4ae32e051e678fb6c70cf1a33fa1bfd0b64ffdf52c0e7b2"}
Jan 09 13:47:03 crc kubenswrapper[4919]: I0109 13:47:03.059674 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-568985c78-r5j45" event={"ID":"33efa14f-00b9-49b4-bc2a-5c0c13d60613","Type":"ContainerStarted","Data":"82ac91e059527bfea3dad2a078659b0047189f32f6c0d6faac23ba3054cab180"}
Jan 09 13:47:03 crc kubenswrapper[4919]: I0109 13:47:03.060027 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-568985c78-r5j45"
Jan 09 13:47:03 crc kubenswrapper[4919]: I0109 13:47:03.071836 4919 generic.go:334] "Generic (PLEG): container finished" podID="483268ae-fdeb-41d5-aaa9-20ab30abc131" containerID="dfc682044f296480d4ae32e051e678fb6c70cf1a33fa1bfd0b64ffdf52c0e7b2" exitCode=0
Jan 09 13:47:03 crc kubenswrapper[4919]: I0109 13:47:03.071920 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sxwmh" event={"ID":"483268ae-fdeb-41d5-aaa9-20ab30abc131","Type":"ContainerDied","Data":"dfc682044f296480d4ae32e051e678fb6c70cf1a33fa1bfd0b64ffdf52c0e7b2"}
Jan 09 13:47:03 crc kubenswrapper[4919]: I0109 13:47:03.075713 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-cd2dq" event={"ID":"53cc8efc-85ec-4ddf-82c5-c1db01fe8120","Type":"ContainerStarted","Data":"7cebf117758c6549da95fbe286c0016473da0131dff72f259f9743d23ec39ff8"}
Jan 09 13:47:03 crc kubenswrapper[4919]: I0109 13:47:03.076356 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-cd2dq"
Jan 09 13:47:03 crc kubenswrapper[4919]: I0109 13:47:03.088191 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-568985c78-r5j45" podStartSLOduration=4.133724464 podStartE2EDuration="45.088175607s" podCreationTimestamp="2026-01-09 13:46:18 +0000 UTC" firstStartedPulling="2026-01-09 13:46:20.994979604 +0000 UTC m=+960.542819054" lastFinishedPulling="2026-01-09 13:47:01.949430747 +0000 UTC m=+1001.497270197" observedRunningTime="2026-01-09 13:47:03.085646034 +0000 UTC m=+1002.633485494" watchObservedRunningTime="2026-01-09 13:47:03.088175607 +0000 UTC m=+1002.636015077"
Jan 09 13:47:03 crc kubenswrapper[4919]: I0109 13:47:03.094720 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-jl878" event={"ID":"19ebcfcf-3a6a-4c2c-ab15-2239e08bca09","Type":"ContainerStarted","Data":"1d63706a85fcbc22441bcf89e17ae0de2818ba52a90f4254f39253a091fe5379"}
Jan 09 13:47:03 crc kubenswrapper[4919]: I0109 13:47:03.095376 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-jl878"
Jan 09 13:47:03 crc kubenswrapper[4919]: I0109 13:47:03.377499 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-cd2dq" podStartSLOduration=4.407376445 podStartE2EDuration="45.37747588s" podCreationTimestamp="2026-01-09 13:46:18 +0000 UTC" firstStartedPulling="2026-01-09 13:46:20.96953632 +0000 UTC m=+960.517375760" lastFinishedPulling="2026-01-09 13:47:01.939635745 +0000 UTC m=+1001.487475195" observedRunningTime="2026-01-09 13:47:03.375261126 +0000 UTC m=+1002.923100576" watchObservedRunningTime="2026-01-09 13:47:03.37747588 +0000 UTC m=+1002.925315330"
Jan 09 13:47:03 crc kubenswrapper[4919]: I0109 13:47:03.407528 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-jl878" podStartSLOduration=4.5777432529999995 podStartE2EDuration="45.407498051s" podCreationTimestamp="2026-01-09 13:46:18 +0000 UTC" firstStartedPulling="2026-01-09 13:46:21.108263767 +0000 UTC m=+960.656103217" lastFinishedPulling="2026-01-09 13:47:01.938018565 +0000 UTC m=+1001.485858015" observedRunningTime="2026-01-09 13:47:03.402793325 +0000 UTC m=+1002.950632785" watchObservedRunningTime="2026-01-09 13:47:03.407498051 +0000 UTC m=+1002.955337511"
Jan 09 13:47:05 crc kubenswrapper[4919]: I0109 13:47:05.486449 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5fb94578dd-p4xfn"
Jan 09 13:47:06 crc kubenswrapper[4919]: I0109 13:47:06.439973 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sxwmh" event={"ID":"483268ae-fdeb-41d5-aaa9-20ab30abc131","Type":"ContainerStarted","Data":"a4ea53963ca2b1238d58069a86ad3d500659a0a72a9cf1002f321fce53c92a6f"}
Jan 09 13:47:06 crc kubenswrapper[4919]: I0109 13:47:06.477116 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sxwmh" podStartSLOduration=38.599818258 podStartE2EDuration="44.477090021s" podCreationTimestamp="2026-01-09 13:46:22 +0000 UTC" firstStartedPulling="2026-01-09 13:46:58.671315904 +0000 UTC m=+998.219155354" lastFinishedPulling="2026-01-09 13:47:04.548587667 +0000 UTC m=+1004.096427117" observedRunningTime="2026-01-09 13:47:06.471607436 +0000 UTC m=+1006.019446886" watchObservedRunningTime="2026-01-09 13:47:06.477090021 +0000 UTC m=+1006.024929481"
Jan 09 13:47:07 crc kubenswrapper[4919]: E0109 13:47:07.754118 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:f0ece9a81e4be3dbc1ff752a951970380546d8c0dea910953f862c219444b97a\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-nk5sx" podUID="4f5bfa64-2b7e-4b30-aedc-56cd44f47032"
Jan 09 13:47:08 crc kubenswrapper[4919]: I0109 13:47:08.341424 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-p8wlj"
Jan 09 13:47:08 crc kubenswrapper[4919]: I0109 13:47:08.341753 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-p8wlj"
Jan 09 13:47:08 crc kubenswrapper[4919]: I0109 13:47:08.416148 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-p8wlj"
Jan 09 13:47:08 crc kubenswrapper[4919]: I0109 13:47:08.447631 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-h6cp9"
Jan 09 13:47:08 crc kubenswrapper[4919]: I0109 13:47:08.458825 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-m56bk"
Jan 09 13:47:08 crc kubenswrapper[4919]: I0109 13:47:08.518087 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-s46b7"
Jan 09 13:47:08 crc kubenswrapper[4919]: I0109 13:47:08.544704 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p8wlj"
Jan 09 13:47:08 crc kubenswrapper[4919]: I0109 13:47:08.559867 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-vvsj9"
Jan 09 13:47:08 crc kubenswrapper[4919]: I0109 13:47:08.584769 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-f2drg"
Jan 09 13:47:08 crc kubenswrapper[4919]: I0109 13:47:08.643541 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p8wlj"]
Jan 09 13:47:08 crc kubenswrapper[4919]: I0109 13:47:08.796767 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-4r7j8"
Jan 09 13:47:08 crc kubenswrapper[4919]: I0109 13:47:08.842404 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-568985c78-r5j45"
Jan 09 13:47:08 crc kubenswrapper[4919]: I0109 13:47:08.844080 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-cd2dq"
Jan 09 13:47:08 crc kubenswrapper[4919]: I0109 13:47:08.897662 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-jl5xm"
Jan 09 13:47:08 crc kubenswrapper[4919]: I0109 13:47:08.936570 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-jl878"
Jan 09 13:47:08 crc kubenswrapper[4919]: I0109 13:47:08.970500 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-4ppq5"
Jan 09 13:47:09 crc kubenswrapper[4919]: I0109 13:47:09.036089 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-wnwmg"
Jan 09 13:47:09 crc kubenswrapper[4919]: I0109 13:47:09.055319 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-8kjrk"
Jan 09 13:47:09 crc kubenswrapper[4919]: I0109 13:47:09.158646 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-wzww9"
Jan 09 13:47:09 crc kubenswrapper[4919]: I0109 13:47:09.192456 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-84x8m"
Jan 09 13:47:09 crc kubenswrapper[4919]: I0109 13:47:09.390104 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-47s64"
Jan 09 13:47:10 crc kubenswrapper[4919]: I0109 13:47:10.464670 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-p8wlj" podUID="ceacd617-f87e-4765-9a75-9cde47b80e8d" containerName="registry-server" containerID="cri-o://94c52a925b65b1a2ed42fe43639bd62fad226648d791e7c3bdd8aa98ca9ee0b1" gracePeriod=2
Jan 09 13:47:12 crc kubenswrapper[4919]: I0109 13:47:12.481876 4919 generic.go:334] "Generic (PLEG): container finished" podID="ceacd617-f87e-4765-9a75-9cde47b80e8d" containerID="94c52a925b65b1a2ed42fe43639bd62fad226648d791e7c3bdd8aa98ca9ee0b1" exitCode=0
Jan 09 13:47:12 crc kubenswrapper[4919]: I0109 13:47:12.481941 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p8wlj" event={"ID":"ceacd617-f87e-4765-9a75-9cde47b80e8d","Type":"ContainerDied","Data":"94c52a925b65b1a2ed42fe43639bd62fad226648d791e7c3bdd8aa98ca9ee0b1"}
Jan 09 13:47:12 crc kubenswrapper[4919]: I0109 13:47:12.977162 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sxwmh"
Jan 09 13:47:12 crc kubenswrapper[4919]: I0109 13:47:12.978349 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sxwmh"
Jan 09 13:47:13 crc kubenswrapper[4919]: I0109 13:47:13.018255 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sxwmh"
Jan 09 13:47:13 crc kubenswrapper[4919]: I0109 13:47:13.557132 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sxwmh"
Jan 09 13:47:13 crc kubenswrapper[4919]: I0109 13:47:13.609249 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sxwmh"]
Jan 09 13:47:15 crc kubenswrapper[4919]: I0109 13:47:15.504634 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sxwmh" podUID="483268ae-fdeb-41d5-aaa9-20ab30abc131" containerName="registry-server" containerID="cri-o://a4ea53963ca2b1238d58069a86ad3d500659a0a72a9cf1002f321fce53c92a6f" gracePeriod=2
Jan 09 13:47:18 crc kubenswrapper[4919]: E0109 13:47:18.179845 4919 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/infra-operator@sha256:0144c53f5c318a2a2a690f358f5574fd4c1bd580e75e738cea935f8df95e52a9"
Jan 09 13:47:18 crc kubenswrapper[4919]: E0109 13:47:18.180539 4919 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/infra-operator@sha256:0144c53f5c318a2a2a690f358f5574fd4c1bd580e75e738cea935f8df95e52a9,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{600 -3} {} 600m DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{536870912 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4ws6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infra-operator-controller-manager-6d99759cf-6s6wp_openstack-operators(af1be546-436f-43ef-b748-22860362f61e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 09 13:47:18 crc kubenswrapper[4919]: E0109 13:47:18.181896 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-6s6wp" podUID="af1be546-436f-43ef-b748-22860362f61e"
Jan 09 13:47:18 crc kubenswrapper[4919]: E0109 13:47:18.342719 4919 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 94c52a925b65b1a2ed42fe43639bd62fad226648d791e7c3bdd8aa98ca9ee0b1 is running failed: container process not found" containerID="94c52a925b65b1a2ed42fe43639bd62fad226648d791e7c3bdd8aa98ca9ee0b1" cmd=["grpc_health_probe","-addr=:50051"]
Jan 09 13:47:18 crc kubenswrapper[4919]: E0109 13:47:18.343187 4919 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 94c52a925b65b1a2ed42fe43639bd62fad226648d791e7c3bdd8aa98ca9ee0b1 is running failed: container process not found" containerID="94c52a925b65b1a2ed42fe43639bd62fad226648d791e7c3bdd8aa98ca9ee0b1" cmd=["grpc_health_probe","-addr=:50051"]
Jan 09 13:47:18 crc kubenswrapper[4919]: E0109 13:47:18.343612 4919 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 94c52a925b65b1a2ed42fe43639bd62fad226648d791e7c3bdd8aa98ca9ee0b1 is running failed: container process not found" containerID="94c52a925b65b1a2ed42fe43639bd62fad226648d791e7c3bdd8aa98ca9ee0b1" cmd=["grpc_health_probe","-addr=:50051"]
Jan 09 13:47:18 crc kubenswrapper[4919]: E0109 13:47:18.343653 4919 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 94c52a925b65b1a2ed42fe43639bd62fad226648d791e7c3bdd8aa98ca9ee0b1 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-p8wlj" podUID="ceacd617-f87e-4765-9a75-9cde47b80e8d" containerName="registry-server"
Jan 09 13:47:18 crc kubenswrapper[4919]: I0109 13:47:18.523059 4919 generic.go:334] "Generic (PLEG): container finished" podID="483268ae-fdeb-41d5-aaa9-20ab30abc131" containerID="a4ea53963ca2b1238d58069a86ad3d500659a0a72a9cf1002f321fce53c92a6f" exitCode=0
Jan 09 13:47:18 crc kubenswrapper[4919]: I0109 13:47:18.523131 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sxwmh" event={"ID":"483268ae-fdeb-41d5-aaa9-20ab30abc131","Type":"ContainerDied","Data":"a4ea53963ca2b1238d58069a86ad3d500659a0a72a9cf1002f321fce53c92a6f"}
Jan 09 13:47:18 crc kubenswrapper[4919]: E0109 13:47:18.524676 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:0144c53f5c318a2a2a690f358f5574fd4c1bd580e75e738cea935f8df95e52a9\\\"\"" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-6s6wp" podUID="af1be546-436f-43ef-b748-22860362f61e"
Jan 09 13:47:18 crc kubenswrapper[4919]: E0109 13:47:18.860916 4919 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:5d09c9ffa6ee479724f6da786cb35902b87578365dac2035c222f5e4f752d208"
Jan 09 13:47:18 crc kubenswrapper[4919]: E0109 13:47:18.861508 4919 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:5d09c9ffa6ee479724f6da786cb35902b87578365dac2035c222f5e4f752d208,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent@sha256:b7bbe532fcc96c2fc5c78733071a006ab1cd35222150227d2e7990392e533661,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-ansibleee-runner@sha256:d33c6f4faaddd2bce42ac0c9d33ac0ce4fc17255b678413ff8b40a549a686d6b,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-api@sha256:36946a77001110f391fb254ec77129803a6b7c34dacfa1a4c8c51aa8d23d57c5,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-evaluator@sha256:dd58b29b5d88662a621c685c2b76fe8a71cc9e82aa85dff22a66182a6ceef3ae,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-listener@sha256:fc47ed1c6249c9f6ef13ef1eac82d5a34819a715dea5117d33df0d0dc69ace8b,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-notifier@sha256:e21d35c272d016f4dbd323dc827ee83538c96674adfb188e362aa652ce167b61,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT,Value:registry.redhat.io/ubi9/httpd-24@sha256:58b583bb82da64c3c962ed2ca5e60dfff0fc93e50a9ec95e650cecb3a6cb8fda,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener@sha256:c2ace235f775334be02d78928802b76309543e869cc6b4b55843ee546691e6c3,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-worker@sha256:be77cc58b87f299b42bb2cbe74f3f8d028b8c887851a53209441b60e1363aeb5,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-central@sha256:5a548c25fe3d02f7a042cb0a6d28fc8039a34c4a3b3d07aadda4aba3a926e777,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-compute@sha256:41dc9cf27a902d9c7b392d730bd761cf3c391a548a841e9e4d38e1571f3c53bf,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi@sha256:174f8f712eb5fdda5061a1a68624befb27bbe766842653788583ec74c5ae506a,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/mysqld-exporter@sha256:7211a617ec657701ca819aa0ba28e1d5750f5bf2c1391b755cc4a48cc360b0fa,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-notification@sha256:df14f6de785b8aefc38ceb5b47088405224cfa914977c9ab811514cc77b08a67,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/sg-core@sha256:09b5017c95d7697e66b9c64846bc48ef5826a009cba89b956ec54561e5f4a2d1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-backup@sha256:b8d76f96b6f17a3318d089c0b5c0e6c292d969ab392cdcc708ec0f0188c953ae,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-scheduler@sha256:43c55407c7c9b4141482533546e6570535373f7e36df374dfbbe388293c19dbf,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-volume@sha256:097816f289af117f14cd8ee1678a9635e8da6de4a1bde834d02199c4ef65c5c0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_API_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api@sha256:b0caf63f3b77110a76a474f1e4c1ea339017cf17715bbfa52140f9fd0b91cdfc,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_PROC_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-processor@sha256:c311fe3e1a4373216a506a4c0c0ef295a84809a40ebf86c803ee525b5fe9e120,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-api@sha256:281668af8ed34c2464f3593d350cf7b695b41b81f40cc539ad74b7b65822afb9,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-backend-bind9@sha256:84319e5dd6569ea531e64b688557c2a2e20deb5225f3d349e402e34858f00fe7,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-central@sha256:acb53e0e210562091843c212bc0cf5541daacd6f2bd18923430bae8c36578731,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-mdns@sha256:be6f4002842ebadf30d035721567a7e669f12a6eef8c00dc89030b3b08f3dd2c,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-producer@sha256:988635be61f6ed8c0d707622193b7efe8e9b1dc7effbf9b09d2db5ec593b59e7,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-unbound@sha256:63e08752678a68571e1c54ceea42c113af493a04cdc22198a3713df7b53f87e5,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-worker@sha256:6741d06b0f1bbeb2968807dc5be45853cdd3dfb9cc7ea6ef23e909ae24f3cbf4,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-frr@sha256:1803a36d1a397a5595dddb4a2f791ab9443d3af97391a53928fa495ca7032d93,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-iscsid@sha256:d163fcf801d67d9c67b2ae4368675b75714db7c531de842aad43979a888c5d57,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT,Value:quay.io/sustainable_computing_io/kepler@sha256:581b65b646301e0fcb07582150ba63438f1353a85bf9acf1eb2acb4ce71c58bd,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cron@sha256:15bf81d933a44128cb6f3264632a9563337eb3bfe82c4a33c746595467d3b0c3,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent@sha256:3a08e21338f651a90ee83ae46242b8c80c64488144f27a77848517049c3a8f5d,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-ovn-agent@sha256:ebeb4443ab9f9360925f7abd9c24b7a453390d678f79ed247d2042dcc6f9c3fc,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent@sha256:04bb4cd601b08034c6cba18e701fcd36026ec4340402ed710a0bbd09d8e4884d,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-bgp-agent@sha256:27b80783b7d4658d89dda9a09924e9ee472908a8fa1c86bcf3f773d17a4196e0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-glance-api@sha256:e4aa4ebbb1e581a12040e9ad2ae2709ac31b5d965bb64fc4252d1028b05c565f,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api@sha256:8cb133c5a5551e1aa11ef3326149db1babbf00924d0ff493ebe3346b69fd4b5b,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api-cfn@sha256:13c3567176bb2d033f6c6b30e20404bd67a217e2537210bf222f3afe0c8619b7,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-engine@sha256:60ac3446d57f1a97a6ca2d8e6584b00aa18704bc2707a7ac1a6a28c6d685d215,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-horizon@sha256:dd7600bc5278c663cfcfecafd3fb051a2cd2ddc3c1efb07738bf09512aa23ae7,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-memcached@sha256:e47191ba776414b781b3e27b856ab45a03b9480c7dc2b1addb939608794882dc,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-redis@sha256:7e7788d1aae251e60f4012870140c65bce9760cd27feaeec5f65c42fe4ffce77,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:6a401117007514660c694248adce8136d83559caf1b38e475935335e09ac954a,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-conductor@sha256:364d50f873551805782c23264570eff40e3807f35d9bccdd456515b4e31da488,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-inspector@sha256:2d72dd490576e0cb670d21a08420888f3758d64ed0cbd2ef8b9aa8488ad2ce40,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent@sha256:96fdf7cddf31509ee63950a9d61320d0b01beb1212e28f37a6e872d6589ded22,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PXE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:8b7534a2999075f919fc162d21f76026e8bf781913cc3d2ac07e484e9b2fc596,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/ironic-python-agent@sha256:d65eaaea2ab02d63af9d8a106619908fa01a2e56bd6753edc5590e66e46270db,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-keystone@sha256:d042d7f91bafb002affff8cf750d694a0da129377255c502028528fe2280e790,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT,Value:registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-api@sha256:a8faef9ea5e8ef8327b7fbb9b9cafc74c38c09c7e3b2365a7cad5eb49766f71d,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-scheduler@sha256:88aa46ea03a5584560806aa4b093584fda6b2f54c562005b72be2e3615688090,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-share@sha256:c08ecdfb7638c1897004347d835bdbabacff40a345f64c2b3111c377096bfa56,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MARIADB_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-netutils@sha256:8b4025a4f30e83acc0b51ac063eea701006a302a1acbdec53f54b540270887f7,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-api@sha256:4992f5ddbd20cca07e750846b2dbe7c51c5766c3002c388f8d8a158e347ec63d,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:22f097cb86b28ac48dc670ed7e0e841280bef1608f11b2b4536fbc2d2a6a90be,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-novncproxy@sha256:20b3ad38accb9eb8849599280a263d3436a5af03d89645e5ec4508586297ffde,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT,Value:q
uay.io/podified-antelope-centos9/openstack-nova-scheduler@sha256:378ed518b68ea809cffa2ff7a93d51e52cfc53af14eedc978924fdabccef0325,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-api@sha256:8c3632033f8c004f31a1c7c57c5ca7b450a11e9170a220b8943b57f80717c70c,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-health-manager@sha256:3f746f7c6a8c48c0f4a800dcb4bc49bfbc4de4a9ca6a55d8f22bc515a92ea1d9,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-housekeeping@sha256:e1f7bf105190c3cbbfcf0aeeb77a92d1466100ba8377221ed5eee228949e05bd,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rsyslog@sha256:954b4c60705b229a968aba3b5b35ab02759378706103ed1189fae3e3316fac35,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-worker@sha256:f2e0025727efb95efa65e6af6338ae3fc79bf61095d6d54931a0be8d7fe9acac,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-openstackclient@sha256:2b4f8494513a3af102066fec5868ab167ac8664aceb2f0c639d7a0b60260a944,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-must-gather@sha256:6965f656a2712be18219bf47c45b31085032383eed3e09cbf2491ae5b2211ce0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/edpm-hardened-uefi@sha256:194121c2d79401bd41f75428a437fe32a5806a6a160f7d80798ff66baed9afa5,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-base@sha256:df45459c449f64cc6471e98c0890ac00dcc77a940f85d4e7e9d9dd52990d65b3,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server@sha256:947c1bb9373b7d3f2acea104a5666e394c830111bf80d133f1fe7238e4d06f28,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-northd@sha256:425ebddc9d6851ee9c730e67eaf43039943dc7937fb11332a41335a9114b2d44,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server@sha256:bea03c7c34dc6ef8bc163e12a8940011b8feebc44a2efaaba2d3c4c6c515d6c8,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-placement-api@sha256:33f4e5f7a715d48482ec46a42267ea992fa268585303c4f1bd3cbea072a6348b,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d,ValueFrom:nil,},EnvVar{Name:RELATED
_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-account@sha256:a2280bc80b454dc9e5c95daf74b8a53d6f9e42fc16d45287e089fc41014fe1da,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-container@sha256:88d687a7bb593b2e61598b422baba84d67c114419590a6d83d15327d119ce208,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-object@sha256:2635e02b99d380b2e547013c09c6c8da01bc89b3d3ce570e4d8f8656c7635b0e,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-proxy-server@sha256:ac7fefe1c93839c7ccb2aaa0a18751df0e9f64a36a3b4cc1b81d82d7774b8b45,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-tempest-all@sha256:a357cf166caaeea230f8a912aceb042e3170c5d680844e8f97b936baa10834ed,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-api@sha256:79c9efc6a45fa22aeaff8485be7103b90ddb87c9142e851405e25df6655487e2,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-applier@sha256:3cf9b4c9342d559b2c1ba8124e5c06fb01c7ce2706bab6bd8adbdec983ecc9ce,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-decision-engine@sha256:bf0297d2832b9bbe3a8eb5b8ff517b3d2a7ce6ba68f224e743d9943f55f727e2,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2tc6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed 
in pod openstack-baremetal-operator-controller-manager-75f6ff484-ll94k_openstack-operators(488f8708-4c49-429f-9697-a00b8fadd486): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 09 13:47:18 crc kubenswrapper[4919]: E0109 13:47:18.863356 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-75f6ff484-ll94k" podUID="488f8708-4c49-429f-9697-a00b8fadd486" Jan 09 13:47:18 crc kubenswrapper[4919]: I0109 13:47:18.936905 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p8wlj" Jan 09 13:47:18 crc kubenswrapper[4919]: I0109 13:47:18.945467 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sxwmh" Jan 09 13:47:18 crc kubenswrapper[4919]: I0109 13:47:18.988412 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/483268ae-fdeb-41d5-aaa9-20ab30abc131-utilities\") pod \"483268ae-fdeb-41d5-aaa9-20ab30abc131\" (UID: \"483268ae-fdeb-41d5-aaa9-20ab30abc131\") " Jan 09 13:47:18 crc kubenswrapper[4919]: I0109 13:47:18.988501 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txklq\" (UniqueName: \"kubernetes.io/projected/483268ae-fdeb-41d5-aaa9-20ab30abc131-kube-api-access-txklq\") pod \"483268ae-fdeb-41d5-aaa9-20ab30abc131\" (UID: \"483268ae-fdeb-41d5-aaa9-20ab30abc131\") " Jan 09 13:47:18 crc kubenswrapper[4919]: I0109 13:47:18.988570 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ceacd617-f87e-4765-9a75-9cde47b80e8d-catalog-content\") pod \"ceacd617-f87e-4765-9a75-9cde47b80e8d\" (UID: \"ceacd617-f87e-4765-9a75-9cde47b80e8d\") " Jan 09 13:47:18 crc kubenswrapper[4919]: I0109 13:47:18.988637 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ceacd617-f87e-4765-9a75-9cde47b80e8d-utilities\") pod \"ceacd617-f87e-4765-9a75-9cde47b80e8d\" (UID: \"ceacd617-f87e-4765-9a75-9cde47b80e8d\") " Jan 09 13:47:18 crc kubenswrapper[4919]: I0109 13:47:18.988673 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/483268ae-fdeb-41d5-aaa9-20ab30abc131-catalog-content\") pod \"483268ae-fdeb-41d5-aaa9-20ab30abc131\" (UID: \"483268ae-fdeb-41d5-aaa9-20ab30abc131\") " Jan 09 13:47:18 crc kubenswrapper[4919]: I0109 13:47:18.988717 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgnlp\" (UniqueName: \"kubernetes.io/projected/ceacd617-f87e-4765-9a75-9cde47b80e8d-kube-api-access-xgnlp\") pod \"ceacd617-f87e-4765-9a75-9cde47b80e8d\" (UID: \"ceacd617-f87e-4765-9a75-9cde47b80e8d\") " Jan 09 13:47:18 crc kubenswrapper[4919]: I0109 13:47:18.990397 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ceacd617-f87e-4765-9a75-9cde47b80e8d-utilities" (OuterVolumeSpecName: "utilities") pod "ceacd617-f87e-4765-9a75-9cde47b80e8d" (UID: "ceacd617-f87e-4765-9a75-9cde47b80e8d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:47:18 crc kubenswrapper[4919]: I0109 13:47:18.994646 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/483268ae-fdeb-41d5-aaa9-20ab30abc131-utilities" (OuterVolumeSpecName: "utilities") pod "483268ae-fdeb-41d5-aaa9-20ab30abc131" (UID: "483268ae-fdeb-41d5-aaa9-20ab30abc131"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:47:18 crc kubenswrapper[4919]: I0109 13:47:18.996696 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/483268ae-fdeb-41d5-aaa9-20ab30abc131-kube-api-access-txklq" (OuterVolumeSpecName: "kube-api-access-txklq") pod "483268ae-fdeb-41d5-aaa9-20ab30abc131" (UID: "483268ae-fdeb-41d5-aaa9-20ab30abc131"). InnerVolumeSpecName "kube-api-access-txklq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:47:19 crc kubenswrapper[4919]: I0109 13:47:19.006491 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ceacd617-f87e-4765-9a75-9cde47b80e8d-kube-api-access-xgnlp" (OuterVolumeSpecName: "kube-api-access-xgnlp") pod "ceacd617-f87e-4765-9a75-9cde47b80e8d" (UID: "ceacd617-f87e-4765-9a75-9cde47b80e8d"). InnerVolumeSpecName "kube-api-access-xgnlp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:47:19 crc kubenswrapper[4919]: I0109 13:47:19.028679 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/483268ae-fdeb-41d5-aaa9-20ab30abc131-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "483268ae-fdeb-41d5-aaa9-20ab30abc131" (UID: "483268ae-fdeb-41d5-aaa9-20ab30abc131"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:47:19 crc kubenswrapper[4919]: I0109 13:47:19.040357 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ceacd617-f87e-4765-9a75-9cde47b80e8d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ceacd617-f87e-4765-9a75-9cde47b80e8d" (UID: "ceacd617-f87e-4765-9a75-9cde47b80e8d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:47:19 crc kubenswrapper[4919]: I0109 13:47:19.090072 4919 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ceacd617-f87e-4765-9a75-9cde47b80e8d-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 13:47:19 crc kubenswrapper[4919]: I0109 13:47:19.090114 4919 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/483268ae-fdeb-41d5-aaa9-20ab30abc131-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 13:47:19 crc kubenswrapper[4919]: I0109 13:47:19.090145 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgnlp\" (UniqueName: \"kubernetes.io/projected/ceacd617-f87e-4765-9a75-9cde47b80e8d-kube-api-access-xgnlp\") on node \"crc\" DevicePath \"\"" Jan 09 13:47:19 crc kubenswrapper[4919]: I0109 13:47:19.090160 4919 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/483268ae-fdeb-41d5-aaa9-20ab30abc131-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 13:47:19 crc kubenswrapper[4919]: I0109 13:47:19.090172 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txklq\" (UniqueName: \"kubernetes.io/projected/483268ae-fdeb-41d5-aaa9-20ab30abc131-kube-api-access-txklq\") on node \"crc\" DevicePath \"\"" Jan 09 13:47:19 crc kubenswrapper[4919]: I0109 13:47:19.090182 4919 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ceacd617-f87e-4765-9a75-9cde47b80e8d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 13:47:19 crc kubenswrapper[4919]: I0109 13:47:19.530735 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-wxt2z" event={"ID":"b46937ef-2f83-4864-b0d4-5464ed82e1b8","Type":"ContainerStarted","Data":"819741f2dd8420c693b0bbedd05baa74870e1bb60ce4daf50666d4adb4d4723a"} Jan 09 13:47:19 crc kubenswrapper[4919]: I0109 13:47:19.530963 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-wxt2z" Jan 09 13:47:19 crc kubenswrapper[4919]: I0109 13:47:19.532782 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sxwmh" event={"ID":"483268ae-fdeb-41d5-aaa9-20ab30abc131","Type":"ContainerDied","Data":"5da9e81599495187446416930e790044d50c2494586405209d035d4ffe6633f2"} Jan 09 13:47:19 crc kubenswrapper[4919]: I0109 13:47:19.532803 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sxwmh" Jan 09 13:47:19 crc kubenswrapper[4919]: I0109 13:47:19.532836 4919 scope.go:117] "RemoveContainer" containerID="a4ea53963ca2b1238d58069a86ad3d500659a0a72a9cf1002f321fce53c92a6f" Jan 09 13:47:19 crc kubenswrapper[4919]: I0109 13:47:19.534878 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p8wlj" event={"ID":"ceacd617-f87e-4765-9a75-9cde47b80e8d","Type":"ContainerDied","Data":"bdd5d28e318017e60d5ef9420e7d32aee1a13d9ce653722a7f1a73333065c46c"} Jan 09 13:47:19 crc kubenswrapper[4919]: I0109 13:47:19.534924 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p8wlj" Jan 09 13:47:19 crc kubenswrapper[4919]: I0109 13:47:19.536791 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-9bn9t" event={"ID":"37ea4d3a-1d7d-47b2-8eee-1a7601c2de24","Type":"ContainerStarted","Data":"93ae28499b2ab4ac610bc6416f345b9de4d52d1145645cf8e4833e8247d9d736"} Jan 09 13:47:19 crc kubenswrapper[4919]: I0109 13:47:19.537002 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-9bn9t" Jan 09 13:47:19 crc kubenswrapper[4919]: E0109 13:47:19.538128 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:5d09c9ffa6ee479724f6da786cb35902b87578365dac2035c222f5e4f752d208\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-75f6ff484-ll94k" podUID="488f8708-4c49-429f-9697-a00b8fadd486" Jan 09 13:47:19 crc kubenswrapper[4919]: I0109 13:47:19.550846 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-wxt2z" podStartSLOduration=3.558041796 podStartE2EDuration="1m1.550818906s" podCreationTimestamp="2026-01-09 13:46:18 +0000 UTC" firstStartedPulling="2026-01-09 13:46:20.889107201 +0000 UTC m=+960.436946651" lastFinishedPulling="2026-01-09 13:47:18.881884311 +0000 UTC m=+1018.429723761" observedRunningTime="2026-01-09 13:47:19.545520885 +0000 UTC m=+1019.093360335" watchObservedRunningTime="2026-01-09 13:47:19.550818906 +0000 UTC m=+1019.098658376" Jan 09 13:47:19 crc kubenswrapper[4919]: I0109 13:47:19.551173 4919 scope.go:117] "RemoveContainer" containerID="dfc682044f296480d4ae32e051e678fb6c70cf1a33fa1bfd0b64ffdf52c0e7b2" Jan 09 13:47:19 crc kubenswrapper[4919]: I0109 13:47:19.577837 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-9bn9t" podStartSLOduration=4.053401213 podStartE2EDuration="1m1.577820861s" podCreationTimestamp="2026-01-09 13:46:18 +0000 UTC" firstStartedPulling="2026-01-09 13:46:21.404071912 +0000 UTC m=+960.951911362" lastFinishedPulling="2026-01-09 13:47:18.92849156 +0000 UTC m=+1018.476331010" observedRunningTime="2026-01-09 13:47:19.57369669 +0000 UTC m=+1019.121536150" watchObservedRunningTime="2026-01-09 13:47:19.577820861 +0000 UTC m=+1019.125660311" Jan 09 13:47:19 crc kubenswrapper[4919]: I0109 13:47:19.586193 4919 scope.go:117] "RemoveContainer" containerID="acbc87852254becc80cd145b6a74807a6b6b8abae07e0fca95495fbc4c310f16" Jan 09 13:47:19 crc kubenswrapper[4919]: I0109 13:47:19.644505 4919 scope.go:117] "RemoveContainer" containerID="94c52a925b65b1a2ed42fe43639bd62fad226648d791e7c3bdd8aa98ca9ee0b1" Jan 09 13:47:19 crc kubenswrapper[4919]: I0109 13:47:19.645761 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p8wlj"] Jan 09 13:47:19 crc kubenswrapper[4919]: I0109 13:47:19.650915 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-p8wlj"] Jan 09 13:47:19 crc kubenswrapper[4919]: I0109 13:47:19.665146 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sxwmh"] Jan 09 13:47:19 crc kubenswrapper[4919]: 
I0109 13:47:19.666393 4919 scope.go:117] "RemoveContainer" containerID="300b1a41701269c261475da24a0084827d458bac4ea9f1c4b9920a4bb658b7dd" Jan 09 13:47:19 crc kubenswrapper[4919]: I0109 13:47:19.671811 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sxwmh"] Jan 09 13:47:19 crc kubenswrapper[4919]: I0109 13:47:19.682991 4919 scope.go:117] "RemoveContainer" containerID="8d305180bc3018e0a7d4ecf9c50d6730c701f23859953097b523228ca6a35a64" Jan 09 13:47:20 crc kubenswrapper[4919]: I0109 13:47:20.765506 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="483268ae-fdeb-41d5-aaa9-20ab30abc131" path="/var/lib/kubelet/pods/483268ae-fdeb-41d5-aaa9-20ab30abc131/volumes" Jan 09 13:47:20 crc kubenswrapper[4919]: I0109 13:47:20.766655 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ceacd617-f87e-4765-9a75-9cde47b80e8d" path="/var/lib/kubelet/pods/ceacd617-f87e-4765-9a75-9cde47b80e8d/volumes" Jan 09 13:47:23 crc kubenswrapper[4919]: I0109 13:47:23.568479 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-nk5sx" event={"ID":"4f5bfa64-2b7e-4b30-aedc-56cd44f47032","Type":"ContainerStarted","Data":"3d42f519649d366ab1e2913854b4a51046aad0bb0526280e7e8f4b2d19f09f8b"} Jan 09 13:47:23 crc kubenswrapper[4919]: I0109 13:47:23.569795 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-nk5sx" Jan 09 13:47:23 crc kubenswrapper[4919]: I0109 13:47:23.597171 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-nk5sx" podStartSLOduration=3.784263734 podStartE2EDuration="1m5.597154791s" podCreationTimestamp="2026-01-09 13:46:18 +0000 UTC" firstStartedPulling="2026-01-09 13:46:21.426600694 +0000 UTC m=+960.974440164" lastFinishedPulling="2026-01-09 13:47:23.239491731 +0000 UTC m=+1022.787331221" observedRunningTime="2026-01-09 13:47:23.595443609 +0000 UTC m=+1023.143283069" watchObservedRunningTime="2026-01-09 13:47:23.597154791 +0000 UTC m=+1023.144994241" Jan 09 13:47:28 crc kubenswrapper[4919]: I0109 13:47:28.515414 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-wxt2z" Jan 09 13:47:28 crc kubenswrapper[4919]: I0109 13:47:28.857815 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-9bn9t" Jan 09 13:47:29 crc kubenswrapper[4919]: I0109 13:47:29.576423 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-nk5sx" Jan 09 13:47:35 crc kubenswrapper[4919]: I0109 13:47:35.655928 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-75f6ff484-ll94k" event={"ID":"488f8708-4c49-429f-9697-a00b8fadd486","Type":"ContainerStarted","Data":"3d2eb6f6b1237364e29b7032e94e9d3a678f55b148a6acc9573b2af858c00168"} Jan 09 13:47:35 crc kubenswrapper[4919]: I0109 13:47:35.656731 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-75f6ff484-ll94k" Jan 09 13:47:35 crc kubenswrapper[4919]: I0109 13:47:35.657144 4919 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-6s6wp" event={"ID":"af1be546-436f-43ef-b748-22860362f61e","Type":"ContainerStarted","Data":"f35cd5b59b76eb61b3d129e05e007f6cf01daa59bdfbc7ff77ceea82c1f24603"} Jan 09 13:47:35 crc kubenswrapper[4919]: I0109 13:47:35.657297 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-6s6wp" Jan 09 13:47:35 crc kubenswrapper[4919]: I0109 13:47:35.694558 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-75f6ff484-ll94k" podStartSLOduration=40.867992894 podStartE2EDuration="1m17.694536781s" podCreationTimestamp="2026-01-09 13:46:18 +0000 UTC" firstStartedPulling="2026-01-09 13:46:58.497774375 +0000 UTC m=+998.045613825" lastFinishedPulling="2026-01-09 13:47:35.324318252 +0000 UTC m=+1034.872157712" observedRunningTime="2026-01-09 13:47:35.687919298 +0000 UTC m=+1035.235758758" watchObservedRunningTime="2026-01-09 13:47:35.694536781 +0000 UTC m=+1035.242376241" Jan 09 13:47:35 crc kubenswrapper[4919]: I0109 13:47:35.707129 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-6s6wp" podStartSLOduration=40.914141102 podStartE2EDuration="1m17.707100431s" podCreationTimestamp="2026-01-09 13:46:18 +0000 UTC" firstStartedPulling="2026-01-09 13:46:58.530902842 +0000 UTC m=+998.078742292" lastFinishedPulling="2026-01-09 13:47:35.323862141 +0000 UTC m=+1034.871701621" observedRunningTime="2026-01-09 13:47:35.704752683 +0000 UTC m=+1035.252592133" watchObservedRunningTime="2026-01-09 13:47:35.707100431 +0000 UTC m=+1035.254939881" Jan 09 13:47:40 crc kubenswrapper[4919]: I0109 13:47:40.722592 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-6s6wp" Jan 09 13:47:44 crc kubenswrapper[4919]: I0109 13:47:44.934008 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-75f6ff484-ll94k" Jan 09 13:48:02 crc kubenswrapper[4919]: I0109 13:48:02.815889 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-2chl6"] Jan 09 13:48:02 crc kubenswrapper[4919]: E0109 13:48:02.816631 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="483268ae-fdeb-41d5-aaa9-20ab30abc131" containerName="registry-server" Jan 09 13:48:02 crc kubenswrapper[4919]: I0109 13:48:02.816644 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="483268ae-fdeb-41d5-aaa9-20ab30abc131" containerName="registry-server" Jan 09 13:48:02 crc kubenswrapper[4919]: E0109 13:48:02.816659 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="483268ae-fdeb-41d5-aaa9-20ab30abc131" containerName="extract-content" Jan 09 13:48:02 crc kubenswrapper[4919]: I0109 13:48:02.816665 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="483268ae-fdeb-41d5-aaa9-20ab30abc131" containerName="extract-content" Jan 09 13:48:02 crc kubenswrapper[4919]: E0109 13:48:02.816673 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceacd617-f87e-4765-9a75-9cde47b80e8d" containerName="registry-server" Jan 09 13:48:02 crc kubenswrapper[4919]: I0109 13:48:02.816680 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceacd617-f87e-4765-9a75-9cde47b80e8d" 
containerName="registry-server" Jan 09 13:48:02 crc kubenswrapper[4919]: E0109 13:48:02.816691 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceacd617-f87e-4765-9a75-9cde47b80e8d" containerName="extract-content" Jan 09 13:48:02 crc kubenswrapper[4919]: I0109 13:48:02.816696 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceacd617-f87e-4765-9a75-9cde47b80e8d" containerName="extract-content" Jan 09 13:48:02 crc kubenswrapper[4919]: E0109 13:48:02.816707 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="483268ae-fdeb-41d5-aaa9-20ab30abc131" containerName="extract-utilities" Jan 09 13:48:02 crc kubenswrapper[4919]: I0109 13:48:02.816713 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="483268ae-fdeb-41d5-aaa9-20ab30abc131" containerName="extract-utilities" Jan 09 13:48:02 crc kubenswrapper[4919]: E0109 13:48:02.816728 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceacd617-f87e-4765-9a75-9cde47b80e8d" containerName="extract-utilities" Jan 09 13:48:02 crc kubenswrapper[4919]: I0109 13:48:02.816734 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceacd617-f87e-4765-9a75-9cde47b80e8d" containerName="extract-utilities" Jan 09 13:48:02 crc kubenswrapper[4919]: I0109 13:48:02.816858 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceacd617-f87e-4765-9a75-9cde47b80e8d" containerName="registry-server" Jan 09 13:48:02 crc kubenswrapper[4919]: I0109 13:48:02.816878 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="483268ae-fdeb-41d5-aaa9-20ab30abc131" containerName="registry-server" Jan 09 13:48:02 crc kubenswrapper[4919]: I0109 13:48:02.817617 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-2chl6" Jan 09 13:48:02 crc kubenswrapper[4919]: I0109 13:48:02.824830 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 09 13:48:02 crc kubenswrapper[4919]: I0109 13:48:02.824985 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 09 13:48:02 crc kubenswrapper[4919]: I0109 13:48:02.825075 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 09 13:48:02 crc kubenswrapper[4919]: I0109 13:48:02.825443 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-bfkck" Jan 09 13:48:02 crc kubenswrapper[4919]: I0109 13:48:02.827255 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-2chl6"] Jan 09 13:48:02 crc kubenswrapper[4919]: I0109 13:48:02.946973 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-lrsz7"] Jan 09 13:48:02 crc kubenswrapper[4919]: I0109 13:48:02.948160 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-lrsz7" Jan 09 13:48:02 crc kubenswrapper[4919]: I0109 13:48:02.956570 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 09 13:48:02 crc kubenswrapper[4919]: I0109 13:48:02.977976 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-lrsz7"] Jan 09 13:48:03 crc kubenswrapper[4919]: I0109 13:48:03.014732 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca1399ee-0254-4313-83d7-bb6cdaef3dd7-config\") pod \"dnsmasq-dns-84bb9d8bd9-2chl6\" (UID: \"ca1399ee-0254-4313-83d7-bb6cdaef3dd7\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-2chl6" Jan 09 13:48:03 crc kubenswrapper[4919]: I0109 13:48:03.014805 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjhz5\" (UniqueName: \"kubernetes.io/projected/ca1399ee-0254-4313-83d7-bb6cdaef3dd7-kube-api-access-pjhz5\") pod \"dnsmasq-dns-84bb9d8bd9-2chl6\" (UID: \"ca1399ee-0254-4313-83d7-bb6cdaef3dd7\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-2chl6" Jan 09 13:48:03 crc kubenswrapper[4919]: I0109 13:48:03.116112 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjhz5\" (UniqueName: \"kubernetes.io/projected/ca1399ee-0254-4313-83d7-bb6cdaef3dd7-kube-api-access-pjhz5\") pod \"dnsmasq-dns-84bb9d8bd9-2chl6\" (UID: \"ca1399ee-0254-4313-83d7-bb6cdaef3dd7\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-2chl6" Jan 09 13:48:03 crc kubenswrapper[4919]: I0109 13:48:03.116233 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cc1a82c-965c-4dce-b771-f77983e88d20-config\") pod \"dnsmasq-dns-5f854695bc-lrsz7\" (UID: \"6cc1a82c-965c-4dce-b771-f77983e88d20\") " pod="openstack/dnsmasq-dns-5f854695bc-lrsz7" Jan 09 13:48:03 crc kubenswrapper[4919]: I0109 13:48:03.116311 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca1399ee-0254-4313-83d7-bb6cdaef3dd7-config\") pod \"dnsmasq-dns-84bb9d8bd9-2chl6\" (UID: \"ca1399ee-0254-4313-83d7-bb6cdaef3dd7\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-2chl6" Jan 09 13:48:03 crc kubenswrapper[4919]: I0109 13:48:03.116340 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cc1a82c-965c-4dce-b771-f77983e88d20-dns-svc\") pod \"dnsmasq-dns-5f854695bc-lrsz7\" (UID: \"6cc1a82c-965c-4dce-b771-f77983e88d20\") " pod="openstack/dnsmasq-dns-5f854695bc-lrsz7" Jan 09 13:48:03 crc kubenswrapper[4919]: I0109 13:48:03.116382 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkl8r\" (UniqueName: \"kubernetes.io/projected/6cc1a82c-965c-4dce-b771-f77983e88d20-kube-api-access-hkl8r\") pod \"dnsmasq-dns-5f854695bc-lrsz7\" (UID: \"6cc1a82c-965c-4dce-b771-f77983e88d20\") " pod="openstack/dnsmasq-dns-5f854695bc-lrsz7" Jan 09 13:48:03 crc kubenswrapper[4919]: I0109 13:48:03.117485 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca1399ee-0254-4313-83d7-bb6cdaef3dd7-config\") pod \"dnsmasq-dns-84bb9d8bd9-2chl6\" (UID: \"ca1399ee-0254-4313-83d7-bb6cdaef3dd7\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-2chl6" 
Jan 09 13:48:03 crc kubenswrapper[4919]: I0109 13:48:03.150999 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjhz5\" (UniqueName: \"kubernetes.io/projected/ca1399ee-0254-4313-83d7-bb6cdaef3dd7-kube-api-access-pjhz5\") pod \"dnsmasq-dns-84bb9d8bd9-2chl6\" (UID: \"ca1399ee-0254-4313-83d7-bb6cdaef3dd7\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-2chl6" Jan 09 13:48:03 crc kubenswrapper[4919]: I0109 13:48:03.217925 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cc1a82c-965c-4dce-b771-f77983e88d20-config\") pod \"dnsmasq-dns-5f854695bc-lrsz7\" (UID: \"6cc1a82c-965c-4dce-b771-f77983e88d20\") " pod="openstack/dnsmasq-dns-5f854695bc-lrsz7" Jan 09 13:48:03 crc kubenswrapper[4919]: I0109 13:48:03.218012 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cc1a82c-965c-4dce-b771-f77983e88d20-dns-svc\") pod \"dnsmasq-dns-5f854695bc-lrsz7\" (UID: \"6cc1a82c-965c-4dce-b771-f77983e88d20\") " pod="openstack/dnsmasq-dns-5f854695bc-lrsz7" Jan 09 13:48:03 crc kubenswrapper[4919]: I0109 13:48:03.218036 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkl8r\" (UniqueName: \"kubernetes.io/projected/6cc1a82c-965c-4dce-b771-f77983e88d20-kube-api-access-hkl8r\") pod \"dnsmasq-dns-5f854695bc-lrsz7\" (UID: \"6cc1a82c-965c-4dce-b771-f77983e88d20\") " pod="openstack/dnsmasq-dns-5f854695bc-lrsz7" Jan 09 13:48:03 crc kubenswrapper[4919]: I0109 13:48:03.218853 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cc1a82c-965c-4dce-b771-f77983e88d20-config\") pod \"dnsmasq-dns-5f854695bc-lrsz7\" (UID: \"6cc1a82c-965c-4dce-b771-f77983e88d20\") " pod="openstack/dnsmasq-dns-5f854695bc-lrsz7" Jan 09 13:48:03 crc kubenswrapper[4919]: I0109 13:48:03.218926 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cc1a82c-965c-4dce-b771-f77983e88d20-dns-svc\") pod \"dnsmasq-dns-5f854695bc-lrsz7\" (UID: \"6cc1a82c-965c-4dce-b771-f77983e88d20\") " pod="openstack/dnsmasq-dns-5f854695bc-lrsz7" Jan 09 13:48:03 crc kubenswrapper[4919]: I0109 13:48:03.238778 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkl8r\" (UniqueName: \"kubernetes.io/projected/6cc1a82c-965c-4dce-b771-f77983e88d20-kube-api-access-hkl8r\") pod \"dnsmasq-dns-5f854695bc-lrsz7\" (UID: \"6cc1a82c-965c-4dce-b771-f77983e88d20\") " pod="openstack/dnsmasq-dns-5f854695bc-lrsz7" Jan 09 13:48:03 crc kubenswrapper[4919]: I0109 13:48:03.314100 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-lrsz7" Jan 09 13:48:03 crc kubenswrapper[4919]: I0109 13:48:03.435979 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-2chl6" Jan 09 13:48:03 crc kubenswrapper[4919]: I0109 13:48:03.659774 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-2chl6"] Jan 09 13:48:03 crc kubenswrapper[4919]: I0109 13:48:03.730271 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-lrsz7"] Jan 09 13:48:03 crc kubenswrapper[4919]: W0109 13:48:03.730850 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6cc1a82c_965c_4dce_b771_f77983e88d20.slice/crio-91f11f4cf168e85864357a13800e1326b42d468057df79deef7bde41d98f79a3 WatchSource:0}: Error finding container 91f11f4cf168e85864357a13800e1326b42d468057df79deef7bde41d98f79a3: Status 404 returned error can't find the container with id 91f11f4cf168e85864357a13800e1326b42d468057df79deef7bde41d98f79a3 Jan 09 13:48:03 crc kubenswrapper[4919]: I0109 13:48:03.868302 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f854695bc-lrsz7" event={"ID":"6cc1a82c-965c-4dce-b771-f77983e88d20","Type":"ContainerStarted","Data":"91f11f4cf168e85864357a13800e1326b42d468057df79deef7bde41d98f79a3"} Jan 09 13:48:03 crc kubenswrapper[4919]: I0109 13:48:03.870316 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bb9d8bd9-2chl6" event={"ID":"ca1399ee-0254-4313-83d7-bb6cdaef3dd7","Type":"ContainerStarted","Data":"3e45b7b0bdb7fd190a92af37ecfc149db975dbaf047da1541f9fff4ce5bd8ccf"} Jan 09 13:48:05 crc kubenswrapper[4919]: I0109 13:48:05.812528 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-lrsz7"] Jan 09 13:48:05 crc kubenswrapper[4919]: I0109 13:48:05.836395 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-vf9kv"] Jan 09 13:48:05 crc kubenswrapper[4919]: I0109 13:48:05.837862 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-vf9kv" Jan 09 13:48:05 crc kubenswrapper[4919]: I0109 13:48:05.862566 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-vf9kv"] Jan 09 13:48:05 crc kubenswrapper[4919]: I0109 13:48:05.883261 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e5fc04e-7f35-4d71-a257-e6d492c2d399-config\") pod \"dnsmasq-dns-744ffd65bc-vf9kv\" (UID: \"3e5fc04e-7f35-4d71-a257-e6d492c2d399\") " pod="openstack/dnsmasq-dns-744ffd65bc-vf9kv" Jan 09 13:48:05 crc kubenswrapper[4919]: I0109 13:48:05.883418 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7rqb\" (UniqueName: \"kubernetes.io/projected/3e5fc04e-7f35-4d71-a257-e6d492c2d399-kube-api-access-x7rqb\") pod \"dnsmasq-dns-744ffd65bc-vf9kv\" (UID: \"3e5fc04e-7f35-4d71-a257-e6d492c2d399\") " pod="openstack/dnsmasq-dns-744ffd65bc-vf9kv" Jan 09 13:48:05 crc kubenswrapper[4919]: I0109 13:48:05.883485 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e5fc04e-7f35-4d71-a257-e6d492c2d399-dns-svc\") pod \"dnsmasq-dns-744ffd65bc-vf9kv\" (UID: \"3e5fc04e-7f35-4d71-a257-e6d492c2d399\") " pod="openstack/dnsmasq-dns-744ffd65bc-vf9kv" Jan 09 13:48:05 crc kubenswrapper[4919]: I0109 13:48:05.985370 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e5fc04e-7f35-4d71-a257-e6d492c2d399-config\") pod \"dnsmasq-dns-744ffd65bc-vf9kv\" (UID: \"3e5fc04e-7f35-4d71-a257-e6d492c2d399\") " pod="openstack/dnsmasq-dns-744ffd65bc-vf9kv" Jan 09 13:48:05 crc kubenswrapper[4919]: I0109 13:48:05.985457 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7rqb\" (UniqueName: \"kubernetes.io/projected/3e5fc04e-7f35-4d71-a257-e6d492c2d399-kube-api-access-x7rqb\") pod \"dnsmasq-dns-744ffd65bc-vf9kv\" (UID: \"3e5fc04e-7f35-4d71-a257-e6d492c2d399\") " pod="openstack/dnsmasq-dns-744ffd65bc-vf9kv" Jan 09 13:48:05 crc kubenswrapper[4919]: I0109 13:48:05.985496 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e5fc04e-7f35-4d71-a257-e6d492c2d399-dns-svc\") pod \"dnsmasq-dns-744ffd65bc-vf9kv\" (UID: \"3e5fc04e-7f35-4d71-a257-e6d492c2d399\") " pod="openstack/dnsmasq-dns-744ffd65bc-vf9kv" Jan 09 13:48:05 crc kubenswrapper[4919]: I0109 13:48:05.986596 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e5fc04e-7f35-4d71-a257-e6d492c2d399-dns-svc\") pod \"dnsmasq-dns-744ffd65bc-vf9kv\" (UID: \"3e5fc04e-7f35-4d71-a257-e6d492c2d399\") " pod="openstack/dnsmasq-dns-744ffd65bc-vf9kv" Jan 09 13:48:05 crc kubenswrapper[4919]: I0109 13:48:05.987189 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e5fc04e-7f35-4d71-a257-e6d492c2d399-config\") pod \"dnsmasq-dns-744ffd65bc-vf9kv\" (UID: \"3e5fc04e-7f35-4d71-a257-e6d492c2d399\") " pod="openstack/dnsmasq-dns-744ffd65bc-vf9kv" Jan 09 13:48:06 crc kubenswrapper[4919]: I0109 13:48:06.025187 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7rqb\" (UniqueName: 
\"kubernetes.io/projected/3e5fc04e-7f35-4d71-a257-e6d492c2d399-kube-api-access-x7rqb\") pod \"dnsmasq-dns-744ffd65bc-vf9kv\" (UID: \"3e5fc04e-7f35-4d71-a257-e6d492c2d399\") " pod="openstack/dnsmasq-dns-744ffd65bc-vf9kv" Jan 09 13:48:06 crc kubenswrapper[4919]: I0109 13:48:06.235737 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-vf9kv" Jan 09 13:48:06 crc kubenswrapper[4919]: I0109 13:48:06.292061 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-2chl6"] Jan 09 13:48:06 crc kubenswrapper[4919]: I0109 13:48:06.368769 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-kdzww"] Jan 09 13:48:06 crc kubenswrapper[4919]: I0109 13:48:06.370187 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-kdzww" Jan 09 13:48:06 crc kubenswrapper[4919]: I0109 13:48:06.383022 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-kdzww"] Jan 09 13:48:06 crc kubenswrapper[4919]: I0109 13:48:06.456769 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pk9kg\" (UniqueName: \"kubernetes.io/projected/b76bf527-b2ca-4359-90f2-b9fdf5767d66-kube-api-access-pk9kg\") pod \"dnsmasq-dns-95f5f6995-kdzww\" (UID: \"b76bf527-b2ca-4359-90f2-b9fdf5767d66\") " pod="openstack/dnsmasq-dns-95f5f6995-kdzww" Jan 09 13:48:06 crc kubenswrapper[4919]: I0109 13:48:06.457127 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b76bf527-b2ca-4359-90f2-b9fdf5767d66-dns-svc\") pod \"dnsmasq-dns-95f5f6995-kdzww\" (UID: \"b76bf527-b2ca-4359-90f2-b9fdf5767d66\") " pod="openstack/dnsmasq-dns-95f5f6995-kdzww" Jan 09 13:48:06 crc kubenswrapper[4919]: I0109 13:48:06.457250 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b76bf527-b2ca-4359-90f2-b9fdf5767d66-config\") pod \"dnsmasq-dns-95f5f6995-kdzww\" (UID: \"b76bf527-b2ca-4359-90f2-b9fdf5767d66\") " pod="openstack/dnsmasq-dns-95f5f6995-kdzww" Jan 09 13:48:06 crc kubenswrapper[4919]: I0109 13:48:06.558548 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b76bf527-b2ca-4359-90f2-b9fdf5767d66-dns-svc\") pod \"dnsmasq-dns-95f5f6995-kdzww\" (UID: \"b76bf527-b2ca-4359-90f2-b9fdf5767d66\") " pod="openstack/dnsmasq-dns-95f5f6995-kdzww" Jan 09 13:48:06 crc kubenswrapper[4919]: I0109 13:48:06.558602 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b76bf527-b2ca-4359-90f2-b9fdf5767d66-config\") pod \"dnsmasq-dns-95f5f6995-kdzww\" (UID: \"b76bf527-b2ca-4359-90f2-b9fdf5767d66\") " pod="openstack/dnsmasq-dns-95f5f6995-kdzww" Jan 09 13:48:06 crc kubenswrapper[4919]: I0109 13:48:06.558633 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pk9kg\" (UniqueName: \"kubernetes.io/projected/b76bf527-b2ca-4359-90f2-b9fdf5767d66-kube-api-access-pk9kg\") pod \"dnsmasq-dns-95f5f6995-kdzww\" (UID: \"b76bf527-b2ca-4359-90f2-b9fdf5767d66\") " pod="openstack/dnsmasq-dns-95f5f6995-kdzww" Jan 09 13:48:06 crc kubenswrapper[4919]: I0109 13:48:06.559683 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/b76bf527-b2ca-4359-90f2-b9fdf5767d66-config\") pod \"dnsmasq-dns-95f5f6995-kdzww\" (UID: \"b76bf527-b2ca-4359-90f2-b9fdf5767d66\") " pod="openstack/dnsmasq-dns-95f5f6995-kdzww" Jan 09 13:48:06 crc kubenswrapper[4919]: I0109 13:48:06.559730 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b76bf527-b2ca-4359-90f2-b9fdf5767d66-dns-svc\") pod \"dnsmasq-dns-95f5f6995-kdzww\" (UID: \"b76bf527-b2ca-4359-90f2-b9fdf5767d66\") " pod="openstack/dnsmasq-dns-95f5f6995-kdzww" Jan 09 13:48:06 crc kubenswrapper[4919]: I0109 13:48:06.581087 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pk9kg\" (UniqueName: \"kubernetes.io/projected/b76bf527-b2ca-4359-90f2-b9fdf5767d66-kube-api-access-pk9kg\") pod \"dnsmasq-dns-95f5f6995-kdzww\" (UID: \"b76bf527-b2ca-4359-90f2-b9fdf5767d66\") " pod="openstack/dnsmasq-dns-95f5f6995-kdzww" Jan 09 13:48:06 crc kubenswrapper[4919]: I0109 13:48:06.708990 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-kdzww" Jan 09 13:48:06 crc kubenswrapper[4919]: I0109 13:48:06.945424 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-vf9kv"] Jan 09 13:48:06 crc kubenswrapper[4919]: I0109 13:48:06.983891 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 09 13:48:06 crc kubenswrapper[4919]: I0109 13:48:06.985535 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 09 13:48:06 crc kubenswrapper[4919]: I0109 13:48:06.988754 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-n9dll" Jan 09 13:48:06 crc kubenswrapper[4919]: I0109 13:48:06.988928 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 09 13:48:06 crc kubenswrapper[4919]: I0109 13:48:06.992820 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 09 13:48:06 crc kubenswrapper[4919]: I0109 13:48:06.992850 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 09 13:48:06 crc kubenswrapper[4919]: I0109 13:48:06.992892 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 09 13:48:06 crc kubenswrapper[4919]: I0109 13:48:06.992850 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 09 13:48:06 crc kubenswrapper[4919]: I0109 13:48:06.994047 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.004724 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.189781 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ba39e0c2-1804-45a7-9dd1-2c20f229b648-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ba39e0c2-1804-45a7-9dd1-2c20f229b648\") " pod="openstack/rabbitmq-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.189837 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ba39e0c2-1804-45a7-9dd1-2c20f229b648-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ba39e0c2-1804-45a7-9dd1-2c20f229b648\") " pod="openstack/rabbitmq-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.189876 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ba39e0c2-1804-45a7-9dd1-2c20f229b648-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ba39e0c2-1804-45a7-9dd1-2c20f229b648\") " pod="openstack/rabbitmq-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.189921 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ba39e0c2-1804-45a7-9dd1-2c20f229b648-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ba39e0c2-1804-45a7-9dd1-2c20f229b648\") " pod="openstack/rabbitmq-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.189969 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh5gt\" (UniqueName: \"kubernetes.io/projected/ba39e0c2-1804-45a7-9dd1-2c20f229b648-kube-api-access-xh5gt\") pod \"rabbitmq-server-0\" (UID: \"ba39e0c2-1804-45a7-9dd1-2c20f229b648\") " pod="openstack/rabbitmq-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.190000 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ba39e0c2-1804-45a7-9dd1-2c20f229b648-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ba39e0c2-1804-45a7-9dd1-2c20f229b648\") " pod="openstack/rabbitmq-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.190039 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ba39e0c2-1804-45a7-9dd1-2c20f229b648-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ba39e0c2-1804-45a7-9dd1-2c20f229b648\") " pod="openstack/rabbitmq-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.190068 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"ba39e0c2-1804-45a7-9dd1-2c20f229b648\") " pod="openstack/rabbitmq-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.190088 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ba39e0c2-1804-45a7-9dd1-2c20f229b648-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ba39e0c2-1804-45a7-9dd1-2c20f229b648\") " pod="openstack/rabbitmq-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.190103 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ba39e0c2-1804-45a7-9dd1-2c20f229b648-config-data\") pod \"rabbitmq-server-0\" (UID: \"ba39e0c2-1804-45a7-9dd1-2c20f229b648\") " pod="openstack/rabbitmq-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.190122 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/ba39e0c2-1804-45a7-9dd1-2c20f229b648-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ba39e0c2-1804-45a7-9dd1-2c20f229b648\") " pod="openstack/rabbitmq-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.291184 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ba39e0c2-1804-45a7-9dd1-2c20f229b648-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ba39e0c2-1804-45a7-9dd1-2c20f229b648\") " pod="openstack/rabbitmq-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.291247 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ba39e0c2-1804-45a7-9dd1-2c20f229b648-config-data\") pod \"rabbitmq-server-0\" (UID: \"ba39e0c2-1804-45a7-9dd1-2c20f229b648\") " pod="openstack/rabbitmq-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.291264 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ba39e0c2-1804-45a7-9dd1-2c20f229b648-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ba39e0c2-1804-45a7-9dd1-2c20f229b648\") " pod="openstack/rabbitmq-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.291291 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ba39e0c2-1804-45a7-9dd1-2c20f229b648-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ba39e0c2-1804-45a7-9dd1-2c20f229b648\") " pod="openstack/rabbitmq-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.291309 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ba39e0c2-1804-45a7-9dd1-2c20f229b648-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ba39e0c2-1804-45a7-9dd1-2c20f229b648\") " pod="openstack/rabbitmq-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.291331 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ba39e0c2-1804-45a7-9dd1-2c20f229b648-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ba39e0c2-1804-45a7-9dd1-2c20f229b648\") " pod="openstack/rabbitmq-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.291355 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ba39e0c2-1804-45a7-9dd1-2c20f229b648-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ba39e0c2-1804-45a7-9dd1-2c20f229b648\") " pod="openstack/rabbitmq-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.291385 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xh5gt\" (UniqueName: \"kubernetes.io/projected/ba39e0c2-1804-45a7-9dd1-2c20f229b648-kube-api-access-xh5gt\") pod \"rabbitmq-server-0\" (UID: \"ba39e0c2-1804-45a7-9dd1-2c20f229b648\") " pod="openstack/rabbitmq-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.291407 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ba39e0c2-1804-45a7-9dd1-2c20f229b648-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ba39e0c2-1804-45a7-9dd1-2c20f229b648\") " pod="openstack/rabbitmq-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 
13:48:07.291453 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ba39e0c2-1804-45a7-9dd1-2c20f229b648-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ba39e0c2-1804-45a7-9dd1-2c20f229b648\") " pod="openstack/rabbitmq-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.291478 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"ba39e0c2-1804-45a7-9dd1-2c20f229b648\") " pod="openstack/rabbitmq-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.291927 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ba39e0c2-1804-45a7-9dd1-2c20f229b648-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ba39e0c2-1804-45a7-9dd1-2c20f229b648\") " pod="openstack/rabbitmq-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.291997 4919 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"ba39e0c2-1804-45a7-9dd1-2c20f229b648\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.292097 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ba39e0c2-1804-45a7-9dd1-2c20f229b648-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ba39e0c2-1804-45a7-9dd1-2c20f229b648\") " pod="openstack/rabbitmq-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.292822 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ba39e0c2-1804-45a7-9dd1-2c20f229b648-config-data\") pod \"rabbitmq-server-0\" (UID: \"ba39e0c2-1804-45a7-9dd1-2c20f229b648\") " pod="openstack/rabbitmq-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.292844 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ba39e0c2-1804-45a7-9dd1-2c20f229b648-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ba39e0c2-1804-45a7-9dd1-2c20f229b648\") " pod="openstack/rabbitmq-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.292900 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ba39e0c2-1804-45a7-9dd1-2c20f229b648-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ba39e0c2-1804-45a7-9dd1-2c20f229b648\") " pod="openstack/rabbitmq-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.297837 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ba39e0c2-1804-45a7-9dd1-2c20f229b648-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ba39e0c2-1804-45a7-9dd1-2c20f229b648\") " pod="openstack/rabbitmq-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.297967 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ba39e0c2-1804-45a7-9dd1-2c20f229b648-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ba39e0c2-1804-45a7-9dd1-2c20f229b648\") " 
pod="openstack/rabbitmq-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.316174 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ba39e0c2-1804-45a7-9dd1-2c20f229b648-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ba39e0c2-1804-45a7-9dd1-2c20f229b648\") " pod="openstack/rabbitmq-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.335936 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xh5gt\" (UniqueName: \"kubernetes.io/projected/ba39e0c2-1804-45a7-9dd1-2c20f229b648-kube-api-access-xh5gt\") pod \"rabbitmq-server-0\" (UID: \"ba39e0c2-1804-45a7-9dd1-2c20f229b648\") " pod="openstack/rabbitmq-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.339437 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"ba39e0c2-1804-45a7-9dd1-2c20f229b648\") " pod="openstack/rabbitmq-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.348135 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-kdzww"] Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.317097 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ba39e0c2-1804-45a7-9dd1-2c20f229b648-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ba39e0c2-1804-45a7-9dd1-2c20f229b648\") " pod="openstack/rabbitmq-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: W0109 13:48:07.358074 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb76bf527_b2ca_4359_90f2_b9fdf5767d66.slice/crio-e20a15d6255525614d6c8d20914fa2414d8b8e16b46c6ac3d712f4bfa8fc73d3 WatchSource:0}: Error finding container e20a15d6255525614d6c8d20914fa2414d8b8e16b46c6ac3d712f4bfa8fc73d3: Status 404 returned error can't find the container with id e20a15d6255525614d6c8d20914fa2414d8b8e16b46c6ac3d712f4bfa8fc73d3 Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.491315 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.492868 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.495955 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.496110 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.496277 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.496423 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-x76gb" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.498311 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.498477 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.498614 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.524879 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.597148 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9b80a84d-c869-407b-b3d2-3be828183ae5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b80a84d-c869-407b-b3d2-3be828183ae5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.597233 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9b80a84d-c869-407b-b3d2-3be828183ae5-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b80a84d-c869-407b-b3d2-3be828183ae5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.597425 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9b80a84d-c869-407b-b3d2-3be828183ae5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b80a84d-c869-407b-b3d2-3be828183ae5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.597592 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b80a84d-c869-407b-b3d2-3be828183ae5-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b80a84d-c869-407b-b3d2-3be828183ae5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.597625 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9b80a84d-c869-407b-b3d2-3be828183ae5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b80a84d-c869-407b-b3d2-3be828183ae5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.597659 4919 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dffvn\" (UniqueName: \"kubernetes.io/projected/9b80a84d-c869-407b-b3d2-3be828183ae5-kube-api-access-dffvn\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b80a84d-c869-407b-b3d2-3be828183ae5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.597688 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9b80a84d-c869-407b-b3d2-3be828183ae5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b80a84d-c869-407b-b3d2-3be828183ae5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.597738 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9b80a84d-c869-407b-b3d2-3be828183ae5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b80a84d-c869-407b-b3d2-3be828183ae5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.597837 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9b80a84d-c869-407b-b3d2-3be828183ae5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b80a84d-c869-407b-b3d2-3be828183ae5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.597864 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b80a84d-c869-407b-b3d2-3be828183ae5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.597952 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9b80a84d-c869-407b-b3d2-3be828183ae5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b80a84d-c869-407b-b3d2-3be828183ae5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.640879 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.700673 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9b80a84d-c869-407b-b3d2-3be828183ae5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b80a84d-c869-407b-b3d2-3be828183ae5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.700733 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b80a84d-c869-407b-b3d2-3be828183ae5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.700780 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9b80a84d-c869-407b-b3d2-3be828183ae5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b80a84d-c869-407b-b3d2-3be828183ae5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.700821 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9b80a84d-c869-407b-b3d2-3be828183ae5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b80a84d-c869-407b-b3d2-3be828183ae5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.701045 4919 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b80a84d-c869-407b-b3d2-3be828183ae5\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.701323 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9b80a84d-c869-407b-b3d2-3be828183ae5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b80a84d-c869-407b-b3d2-3be828183ae5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.700866 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9b80a84d-c869-407b-b3d2-3be828183ae5-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b80a84d-c869-407b-b3d2-3be828183ae5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.701680 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9b80a84d-c869-407b-b3d2-3be828183ae5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b80a84d-c869-407b-b3d2-3be828183ae5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.701731 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b80a84d-c869-407b-b3d2-3be828183ae5-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b80a84d-c869-407b-b3d2-3be828183ae5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.701759 4919 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9b80a84d-c869-407b-b3d2-3be828183ae5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b80a84d-c869-407b-b3d2-3be828183ae5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.701792 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dffvn\" (UniqueName: \"kubernetes.io/projected/9b80a84d-c869-407b-b3d2-3be828183ae5-kube-api-access-dffvn\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b80a84d-c869-407b-b3d2-3be828183ae5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.701819 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9b80a84d-c869-407b-b3d2-3be828183ae5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b80a84d-c869-407b-b3d2-3be828183ae5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.701864 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9b80a84d-c869-407b-b3d2-3be828183ae5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b80a84d-c869-407b-b3d2-3be828183ae5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.702186 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9b80a84d-c869-407b-b3d2-3be828183ae5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b80a84d-c869-407b-b3d2-3be828183ae5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.702907 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b80a84d-c869-407b-b3d2-3be828183ae5-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b80a84d-c869-407b-b3d2-3be828183ae5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.703748 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9b80a84d-c869-407b-b3d2-3be828183ae5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b80a84d-c869-407b-b3d2-3be828183ae5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.703816 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9b80a84d-c869-407b-b3d2-3be828183ae5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b80a84d-c869-407b-b3d2-3be828183ae5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.706789 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9b80a84d-c869-407b-b3d2-3be828183ae5-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b80a84d-c869-407b-b3d2-3be828183ae5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.707508 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9b80a84d-c869-407b-b3d2-3be828183ae5-erlang-cookie-secret\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"9b80a84d-c869-407b-b3d2-3be828183ae5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.708652 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9b80a84d-c869-407b-b3d2-3be828183ae5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b80a84d-c869-407b-b3d2-3be828183ae5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.712278 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9b80a84d-c869-407b-b3d2-3be828183ae5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b80a84d-c869-407b-b3d2-3be828183ae5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.732903 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b80a84d-c869-407b-b3d2-3be828183ae5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.735608 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dffvn\" (UniqueName: \"kubernetes.io/projected/9b80a84d-c869-407b-b3d2-3be828183ae5-kube-api-access-dffvn\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b80a84d-c869-407b-b3d2-3be828183ae5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.864651 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.921066 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-vf9kv" event={"ID":"3e5fc04e-7f35-4d71-a257-e6d492c2d399","Type":"ContainerStarted","Data":"919c00de7ea42ff01f87c004c5182c050ad2a545de81087491149169dbbb83c0"} Jan 09 13:48:07 crc kubenswrapper[4919]: I0109 13:48:07.924661 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-kdzww" event={"ID":"b76bf527-b2ca-4359-90f2-b9fdf5767d66","Type":"ContainerStarted","Data":"e20a15d6255525614d6c8d20914fa2414d8b8e16b46c6ac3d712f4bfa8fc73d3"} Jan 09 13:48:08 crc kubenswrapper[4919]: I0109 13:48:08.324385 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 09 13:48:08 crc kubenswrapper[4919]: W0109 13:48:08.369881 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba39e0c2_1804_45a7_9dd1_2c20f229b648.slice/crio-c4fad242ec9236cc0d7bbe0a9099a40b56ebf8f3b4bd792ab22edbde926aa7db WatchSource:0}: Error finding container c4fad242ec9236cc0d7bbe0a9099a40b56ebf8f3b4bd792ab22edbde926aa7db: Status 404 returned error can't find the container with id c4fad242ec9236cc0d7bbe0a9099a40b56ebf8f3b4bd792ab22edbde926aa7db Jan 09 13:48:08 crc kubenswrapper[4919]: I0109 13:48:08.402712 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 09 13:48:08 crc kubenswrapper[4919]: I0109 13:48:08.404079 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 09 13:48:08 crc kubenswrapper[4919]: I0109 13:48:08.408165 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 09 13:48:08 crc kubenswrapper[4919]: I0109 13:48:08.408549 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 09 13:48:08 crc kubenswrapper[4919]: I0109 13:48:08.409101 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-l64l9" Jan 09 13:48:08 crc kubenswrapper[4919]: I0109 13:48:08.409548 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 09 13:48:08 crc kubenswrapper[4919]: I0109 13:48:08.410114 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 09 13:48:08 crc kubenswrapper[4919]: I0109 13:48:08.416074 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 09 13:48:08 crc kubenswrapper[4919]: I0109 13:48:08.597828 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nh269\" (UniqueName: \"kubernetes.io/projected/3d0c2080-b1ea-4ff9-ad51-d970cce81d56-kube-api-access-nh269\") pod \"openstack-galera-0\" (UID: \"3d0c2080-b1ea-4ff9-ad51-d970cce81d56\") " pod="openstack/openstack-galera-0" Jan 09 13:48:08 crc kubenswrapper[4919]: I0109 13:48:08.597884 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"3d0c2080-b1ea-4ff9-ad51-d970cce81d56\") " pod="openstack/openstack-galera-0" Jan 09 13:48:08 crc kubenswrapper[4919]: I0109 13:48:08.597909 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d0c2080-b1ea-4ff9-ad51-d970cce81d56-operator-scripts\") pod \"openstack-galera-0\" (UID: \"3d0c2080-b1ea-4ff9-ad51-d970cce81d56\") " pod="openstack/openstack-galera-0" Jan 09 13:48:08 crc kubenswrapper[4919]: I0109 13:48:08.598033 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d0c2080-b1ea-4ff9-ad51-d970cce81d56-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"3d0c2080-b1ea-4ff9-ad51-d970cce81d56\") " pod="openstack/openstack-galera-0" Jan 09 13:48:08 crc kubenswrapper[4919]: I0109 13:48:08.598152 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3d0c2080-b1ea-4ff9-ad51-d970cce81d56-config-data-default\") pod \"openstack-galera-0\" (UID: \"3d0c2080-b1ea-4ff9-ad51-d970cce81d56\") " pod="openstack/openstack-galera-0" Jan 09 13:48:08 crc kubenswrapper[4919]: I0109 13:48:08.598277 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3d0c2080-b1ea-4ff9-ad51-d970cce81d56-kolla-config\") pod \"openstack-galera-0\" (UID: \"3d0c2080-b1ea-4ff9-ad51-d970cce81d56\") " pod="openstack/openstack-galera-0" Jan 09 13:48:08 crc kubenswrapper[4919]: I0109 13:48:08.598320 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3d0c2080-b1ea-4ff9-ad51-d970cce81d56-config-data-generated\") pod \"openstack-galera-0\" (UID: \"3d0c2080-b1ea-4ff9-ad51-d970cce81d56\") " pod="openstack/openstack-galera-0" Jan 09 13:48:08 crc kubenswrapper[4919]: I0109 13:48:08.598352 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d0c2080-b1ea-4ff9-ad51-d970cce81d56-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"3d0c2080-b1ea-4ff9-ad51-d970cce81d56\") " pod="openstack/openstack-galera-0" Jan 09 13:48:08 crc kubenswrapper[4919]: I0109 13:48:08.716723 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"3d0c2080-b1ea-4ff9-ad51-d970cce81d56\") " pod="openstack/openstack-galera-0" Jan 09 13:48:08 crc kubenswrapper[4919]: I0109 13:48:08.716775 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d0c2080-b1ea-4ff9-ad51-d970cce81d56-operator-scripts\") pod \"openstack-galera-0\" (UID: \"3d0c2080-b1ea-4ff9-ad51-d970cce81d56\") " pod="openstack/openstack-galera-0" Jan 09 13:48:08 crc kubenswrapper[4919]: I0109 13:48:08.716811 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d0c2080-b1ea-4ff9-ad51-d970cce81d56-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"3d0c2080-b1ea-4ff9-ad51-d970cce81d56\") " pod="openstack/openstack-galera-0" Jan 09 13:48:08 crc kubenswrapper[4919]: I0109 13:48:08.716912 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3d0c2080-b1ea-4ff9-ad51-d970cce81d56-config-data-default\") pod \"openstack-galera-0\" (UID: \"3d0c2080-b1ea-4ff9-ad51-d970cce81d56\") " pod="openstack/openstack-galera-0" Jan 09 13:48:08 crc kubenswrapper[4919]: I0109 13:48:08.717054 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3d0c2080-b1ea-4ff9-ad51-d970cce81d56-kolla-config\") pod \"openstack-galera-0\" (UID: \"3d0c2080-b1ea-4ff9-ad51-d970cce81d56\") " pod="openstack/openstack-galera-0" Jan 09 13:48:08 crc kubenswrapper[4919]: I0109 13:48:08.717068 4919 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"3d0c2080-b1ea-4ff9-ad51-d970cce81d56\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/openstack-galera-0" Jan 09 13:48:08 crc kubenswrapper[4919]: I0109 13:48:08.718656 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3d0c2080-b1ea-4ff9-ad51-d970cce81d56-kolla-config\") pod \"openstack-galera-0\" (UID: \"3d0c2080-b1ea-4ff9-ad51-d970cce81d56\") " pod="openstack/openstack-galera-0" Jan 09 13:48:08 crc kubenswrapper[4919]: I0109 13:48:08.719322 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3d0c2080-b1ea-4ff9-ad51-d970cce81d56-config-data-default\") pod \"openstack-galera-0\" (UID: \"3d0c2080-b1ea-4ff9-ad51-d970cce81d56\") " 
pod="openstack/openstack-galera-0" Jan 09 13:48:08 crc kubenswrapper[4919]: I0109 13:48:08.719418 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3d0c2080-b1ea-4ff9-ad51-d970cce81d56-config-data-generated\") pod \"openstack-galera-0\" (UID: \"3d0c2080-b1ea-4ff9-ad51-d970cce81d56\") " pod="openstack/openstack-galera-0" Jan 09 13:48:08 crc kubenswrapper[4919]: I0109 13:48:08.719760 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3d0c2080-b1ea-4ff9-ad51-d970cce81d56-config-data-generated\") pod \"openstack-galera-0\" (UID: \"3d0c2080-b1ea-4ff9-ad51-d970cce81d56\") " pod="openstack/openstack-galera-0" Jan 09 13:48:08 crc kubenswrapper[4919]: I0109 13:48:08.719832 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d0c2080-b1ea-4ff9-ad51-d970cce81d56-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"3d0c2080-b1ea-4ff9-ad51-d970cce81d56\") " pod="openstack/openstack-galera-0" Jan 09 13:48:08 crc kubenswrapper[4919]: I0109 13:48:08.719882 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nh269\" (UniqueName: \"kubernetes.io/projected/3d0c2080-b1ea-4ff9-ad51-d970cce81d56-kube-api-access-nh269\") pod \"openstack-galera-0\" (UID: \"3d0c2080-b1ea-4ff9-ad51-d970cce81d56\") " pod="openstack/openstack-galera-0" Jan 09 13:48:08 crc kubenswrapper[4919]: I0109 13:48:08.719934 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d0c2080-b1ea-4ff9-ad51-d970cce81d56-operator-scripts\") pod \"openstack-galera-0\" (UID: \"3d0c2080-b1ea-4ff9-ad51-d970cce81d56\") " pod="openstack/openstack-galera-0" Jan 09 13:48:08 crc kubenswrapper[4919]: I0109 13:48:08.724472 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d0c2080-b1ea-4ff9-ad51-d970cce81d56-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"3d0c2080-b1ea-4ff9-ad51-d970cce81d56\") " pod="openstack/openstack-galera-0" Jan 09 13:48:08 crc kubenswrapper[4919]: I0109 13:48:08.725372 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d0c2080-b1ea-4ff9-ad51-d970cce81d56-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"3d0c2080-b1ea-4ff9-ad51-d970cce81d56\") " pod="openstack/openstack-galera-0" Jan 09 13:48:08 crc kubenswrapper[4919]: I0109 13:48:08.739958 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nh269\" (UniqueName: \"kubernetes.io/projected/3d0c2080-b1ea-4ff9-ad51-d970cce81d56-kube-api-access-nh269\") pod \"openstack-galera-0\" (UID: \"3d0c2080-b1ea-4ff9-ad51-d970cce81d56\") " pod="openstack/openstack-galera-0" Jan 09 13:48:08 crc kubenswrapper[4919]: I0109 13:48:08.743484 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"3d0c2080-b1ea-4ff9-ad51-d970cce81d56\") " pod="openstack/openstack-galera-0" Jan 09 13:48:08 crc kubenswrapper[4919]: I0109 13:48:08.794695 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 09 13:48:08 crc kubenswrapper[4919]: 
I0109 13:48:08.950609 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ba39e0c2-1804-45a7-9dd1-2c20f229b648","Type":"ContainerStarted","Data":"c4fad242ec9236cc0d7bbe0a9099a40b56ebf8f3b4bd792ab22edbde926aa7db"}
Jan 09 13:48:08 crc kubenswrapper[4919]: I0109 13:48:08.953518 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9b80a84d-c869-407b-b3d2-3be828183ae5","Type":"ContainerStarted","Data":"63c1a2eeb29f6e95d9d9933071e3a87e6fe6930fadce38bd36cf06c4c27b1fc4"}
Jan 09 13:48:09 crc kubenswrapper[4919]: I0109 13:48:09.034396 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Jan 09 13:48:09 crc kubenswrapper[4919]: I0109 13:48:09.925469 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Jan 09 13:48:09 crc kubenswrapper[4919]: W0109 13:48:09.976164 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d0c2080_b1ea_4ff9_ad51_d970cce81d56.slice/crio-503dd21809827770f405e574a445d296e55dc31edda002096104ca2fb555e0ec WatchSource:0}: Error finding container 503dd21809827770f405e574a445d296e55dc31edda002096104ca2fb555e0ec: Status 404 returned error can't find the container with id 503dd21809827770f405e574a445d296e55dc31edda002096104ca2fb555e0ec
Jan 09 13:48:09 crc kubenswrapper[4919]: I0109 13:48:09.991793 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 09 13:48:09 crc kubenswrapper[4919]: I0109 13:48:09.993355 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Jan 09 13:48:10 crc kubenswrapper[4919]: I0109 13:48:10.004322 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Jan 09 13:48:10 crc kubenswrapper[4919]: I0109 13:48:10.004593 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Jan 09 13:48:10 crc kubenswrapper[4919]: I0109 13:48:10.004714 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-q6v6v"
Jan 09 13:48:10 crc kubenswrapper[4919]: I0109 13:48:10.005310 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Jan 09 13:48:10 crc kubenswrapper[4919]: I0109 13:48:10.014761 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 09 13:48:10 crc kubenswrapper[4919]: I0109 13:48:10.070758 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a078e997-b08e-44a9-89a7-bf2fe9eaed11-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a078e997-b08e-44a9-89a7-bf2fe9eaed11\") " pod="openstack/openstack-cell1-galera-0"
Jan 09 13:48:10 crc kubenswrapper[4919]: I0109 13:48:10.070823 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a078e997-b08e-44a9-89a7-bf2fe9eaed11-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a078e997-b08e-44a9-89a7-bf2fe9eaed11\") " pod="openstack/openstack-cell1-galera-0"
Jan 09 13:48:10 crc kubenswrapper[4919]: I0109 13:48:10.070868 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a078e997-b08e-44a9-89a7-bf2fe9eaed11-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a078e997-b08e-44a9-89a7-bf2fe9eaed11\") " pod="openstack/openstack-cell1-galera-0"
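
Two patterns in this stretch are easy to misread. The "SyncLoop (PLEG): event for pod ... ContainerStarted" entries carry the new sandbox's container ID in the event's Data field, and the W-level "Failed to process watch event ... Status 404" warnings are cAdvisor reacting to a new crio-<id> cgroup before CRI-O is ready to answer for that container. The warning is benign whenever the same 64-hex ID later shows up in a ContainerStarted event, as happens here for e20a15d6..., c4fad242... and 503dd218.... Below is a small sketch (a hypothetical checker, not a kubelet tool) that makes that cross-check mechanical for an excerpt read from stdin.

    // watchrace.go: match cAdvisor "can't find the container" 404 warnings
    // against later PLEG ContainerStarted events carrying the same ID.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    var (
        warnRe = regexp.MustCompile(`can't find the container with id ([0-9a-f]{64})`)
        plegRe = regexp.MustCompile(`"Type":"ContainerStarted","Data":"([0-9a-f]{64})"`)
    )

    func main() {
        warned := map[string]bool{} // IDs cAdvisor 404'd on
        seen := map[string]bool{}   // IDs PLEG reported as started
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
        for sc.Scan() {
            line := sc.Text()
            if m := warnRe.FindStringSubmatch(line); m != nil {
                warned[m[1]] = true
            }
            if m := plegRe.FindStringSubmatch(line); m != nil {
                seen[m[1]] = true
            }
        }
        for id := range warned {
            if seen[id] {
                fmt.Printf("%s: 404 was transient, container started\n", id[:12])
            } else {
                fmt.Printf("%s: 404 never followed by ContainerStarted\n", id[:12])
            }
        }
    }

Any ID reported as never followed by ContainerStarted deserves a closer look; in this excerpt all three warnings resolve.
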
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a078e997-b08e-44a9-89a7-bf2fe9eaed11-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a078e997-b08e-44a9-89a7-bf2fe9eaed11\") " pod="openstack/openstack-cell1-galera-0" Jan 09 13:48:10 crc kubenswrapper[4919]: I0109 13:48:10.070988 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c846\" (UniqueName: \"kubernetes.io/projected/a078e997-b08e-44a9-89a7-bf2fe9eaed11-kube-api-access-7c846\") pod \"openstack-cell1-galera-0\" (UID: \"a078e997-b08e-44a9-89a7-bf2fe9eaed11\") " pod="openstack/openstack-cell1-galera-0" Jan 09 13:48:10 crc kubenswrapper[4919]: I0109 13:48:10.071038 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a078e997-b08e-44a9-89a7-bf2fe9eaed11-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a078e997-b08e-44a9-89a7-bf2fe9eaed11\") " pod="openstack/openstack-cell1-galera-0" Jan 09 13:48:10 crc kubenswrapper[4919]: I0109 13:48:10.071102 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"a078e997-b08e-44a9-89a7-bf2fe9eaed11\") " pod="openstack/openstack-cell1-galera-0" Jan 09 13:48:10 crc kubenswrapper[4919]: I0109 13:48:10.071183 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a078e997-b08e-44a9-89a7-bf2fe9eaed11-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a078e997-b08e-44a9-89a7-bf2fe9eaed11\") " pod="openstack/openstack-cell1-galera-0" Jan 09 13:48:10 crc kubenswrapper[4919]: I0109 13:48:10.071254 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a078e997-b08e-44a9-89a7-bf2fe9eaed11-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a078e997-b08e-44a9-89a7-bf2fe9eaed11\") " pod="openstack/openstack-cell1-galera-0" Jan 09 13:48:10 crc kubenswrapper[4919]: I0109 13:48:10.191817 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a078e997-b08e-44a9-89a7-bf2fe9eaed11-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a078e997-b08e-44a9-89a7-bf2fe9eaed11\") " pod="openstack/openstack-cell1-galera-0" Jan 09 13:48:10 crc kubenswrapper[4919]: I0109 13:48:10.191885 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7c846\" (UniqueName: \"kubernetes.io/projected/a078e997-b08e-44a9-89a7-bf2fe9eaed11-kube-api-access-7c846\") pod \"openstack-cell1-galera-0\" (UID: \"a078e997-b08e-44a9-89a7-bf2fe9eaed11\") " pod="openstack/openstack-cell1-galera-0" Jan 09 13:48:10 crc kubenswrapper[4919]: I0109 13:48:10.191910 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a078e997-b08e-44a9-89a7-bf2fe9eaed11-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a078e997-b08e-44a9-89a7-bf2fe9eaed11\") " pod="openstack/openstack-cell1-galera-0" Jan 09 13:48:10 crc kubenswrapper[4919]: I0109 
13:48:10.191961 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"a078e997-b08e-44a9-89a7-bf2fe9eaed11\") " pod="openstack/openstack-cell1-galera-0" Jan 09 13:48:10 crc kubenswrapper[4919]: I0109 13:48:10.191995 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a078e997-b08e-44a9-89a7-bf2fe9eaed11-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a078e997-b08e-44a9-89a7-bf2fe9eaed11\") " pod="openstack/openstack-cell1-galera-0" Jan 09 13:48:10 crc kubenswrapper[4919]: I0109 13:48:10.192042 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a078e997-b08e-44a9-89a7-bf2fe9eaed11-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a078e997-b08e-44a9-89a7-bf2fe9eaed11\") " pod="openstack/openstack-cell1-galera-0" Jan 09 13:48:10 crc kubenswrapper[4919]: I0109 13:48:10.192113 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a078e997-b08e-44a9-89a7-bf2fe9eaed11-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a078e997-b08e-44a9-89a7-bf2fe9eaed11\") " pod="openstack/openstack-cell1-galera-0" Jan 09 13:48:10 crc kubenswrapper[4919]: I0109 13:48:10.192196 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a078e997-b08e-44a9-89a7-bf2fe9eaed11-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a078e997-b08e-44a9-89a7-bf2fe9eaed11\") " pod="openstack/openstack-cell1-galera-0" Jan 09 13:48:10 crc kubenswrapper[4919]: I0109 13:48:10.193245 4919 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"a078e997-b08e-44a9-89a7-bf2fe9eaed11\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/openstack-cell1-galera-0" Jan 09 13:48:10 crc kubenswrapper[4919]: I0109 13:48:10.194900 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a078e997-b08e-44a9-89a7-bf2fe9eaed11-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a078e997-b08e-44a9-89a7-bf2fe9eaed11\") " pod="openstack/openstack-cell1-galera-0" Jan 09 13:48:10 crc kubenswrapper[4919]: I0109 13:48:10.195163 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a078e997-b08e-44a9-89a7-bf2fe9eaed11-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a078e997-b08e-44a9-89a7-bf2fe9eaed11\") " pod="openstack/openstack-cell1-galera-0" Jan 09 13:48:10 crc kubenswrapper[4919]: I0109 13:48:10.196246 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a078e997-b08e-44a9-89a7-bf2fe9eaed11-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a078e997-b08e-44a9-89a7-bf2fe9eaed11\") " pod="openstack/openstack-cell1-galera-0" Jan 09 13:48:10 crc kubenswrapper[4919]: I0109 13:48:10.195239 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/a078e997-b08e-44a9-89a7-bf2fe9eaed11-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a078e997-b08e-44a9-89a7-bf2fe9eaed11\") " pod="openstack/openstack-cell1-galera-0" Jan 09 13:48:10 crc kubenswrapper[4919]: I0109 13:48:10.232990 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a078e997-b08e-44a9-89a7-bf2fe9eaed11-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a078e997-b08e-44a9-89a7-bf2fe9eaed11\") " pod="openstack/openstack-cell1-galera-0" Jan 09 13:48:10 crc kubenswrapper[4919]: I0109 13:48:10.237005 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 09 13:48:10 crc kubenswrapper[4919]: I0109 13:48:10.238249 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 09 13:48:10 crc kubenswrapper[4919]: I0109 13:48:10.244927 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 09 13:48:10 crc kubenswrapper[4919]: I0109 13:48:10.245203 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 09 13:48:10 crc kubenswrapper[4919]: I0109 13:48:10.248489 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a078e997-b08e-44a9-89a7-bf2fe9eaed11-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a078e997-b08e-44a9-89a7-bf2fe9eaed11\") " pod="openstack/openstack-cell1-galera-0" Jan 09 13:48:10 crc kubenswrapper[4919]: I0109 13:48:10.255476 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 09 13:48:10 crc kubenswrapper[4919]: I0109 13:48:10.256614 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-bwjw8" Jan 09 13:48:10 crc kubenswrapper[4919]: I0109 13:48:10.262145 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7c846\" (UniqueName: \"kubernetes.io/projected/a078e997-b08e-44a9-89a7-bf2fe9eaed11-kube-api-access-7c846\") pod \"openstack-cell1-galera-0\" (UID: \"a078e997-b08e-44a9-89a7-bf2fe9eaed11\") " pod="openstack/openstack-cell1-galera-0" Jan 09 13:48:10 crc kubenswrapper[4919]: I0109 13:48:10.293469 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rncg\" (UniqueName: \"kubernetes.io/projected/5d815ce9-0ae0-4cd4-a6e0-80dd85a1fbe8-kube-api-access-9rncg\") pod \"memcached-0\" (UID: \"5d815ce9-0ae0-4cd4-a6e0-80dd85a1fbe8\") " pod="openstack/memcached-0" Jan 09 13:48:10 crc kubenswrapper[4919]: I0109 13:48:10.293551 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5d815ce9-0ae0-4cd4-a6e0-80dd85a1fbe8-config-data\") pod \"memcached-0\" (UID: \"5d815ce9-0ae0-4cd4-a6e0-80dd85a1fbe8\") " pod="openstack/memcached-0" Jan 09 13:48:10 crc kubenswrapper[4919]: I0109 13:48:10.293574 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d815ce9-0ae0-4cd4-a6e0-80dd85a1fbe8-combined-ca-bundle\") pod \"memcached-0\" (UID: \"5d815ce9-0ae0-4cd4-a6e0-80dd85a1fbe8\") " pod="openstack/memcached-0" Jan 09 13:48:10 crc kubenswrapper[4919]: I0109 13:48:10.293633 4919 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d815ce9-0ae0-4cd4-a6e0-80dd85a1fbe8-memcached-tls-certs\") pod \"memcached-0\" (UID: \"5d815ce9-0ae0-4cd4-a6e0-80dd85a1fbe8\") " pod="openstack/memcached-0" Jan 09 13:48:10 crc kubenswrapper[4919]: I0109 13:48:10.293672 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5d815ce9-0ae0-4cd4-a6e0-80dd85a1fbe8-kolla-config\") pod \"memcached-0\" (UID: \"5d815ce9-0ae0-4cd4-a6e0-80dd85a1fbe8\") " pod="openstack/memcached-0" Jan 09 13:48:10 crc kubenswrapper[4919]: I0109 13:48:10.298426 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"a078e997-b08e-44a9-89a7-bf2fe9eaed11\") " pod="openstack/openstack-cell1-galera-0" Jan 09 13:48:10 crc kubenswrapper[4919]: I0109 13:48:10.337746 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 09 13:48:10 crc kubenswrapper[4919]: I0109 13:48:10.395477 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d815ce9-0ae0-4cd4-a6e0-80dd85a1fbe8-memcached-tls-certs\") pod \"memcached-0\" (UID: \"5d815ce9-0ae0-4cd4-a6e0-80dd85a1fbe8\") " pod="openstack/memcached-0" Jan 09 13:48:10 crc kubenswrapper[4919]: I0109 13:48:10.395539 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5d815ce9-0ae0-4cd4-a6e0-80dd85a1fbe8-kolla-config\") pod \"memcached-0\" (UID: \"5d815ce9-0ae0-4cd4-a6e0-80dd85a1fbe8\") " pod="openstack/memcached-0" Jan 09 13:48:10 crc kubenswrapper[4919]: I0109 13:48:10.395584 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rncg\" (UniqueName: \"kubernetes.io/projected/5d815ce9-0ae0-4cd4-a6e0-80dd85a1fbe8-kube-api-access-9rncg\") pod \"memcached-0\" (UID: \"5d815ce9-0ae0-4cd4-a6e0-80dd85a1fbe8\") " pod="openstack/memcached-0" Jan 09 13:48:10 crc kubenswrapper[4919]: I0109 13:48:10.395633 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5d815ce9-0ae0-4cd4-a6e0-80dd85a1fbe8-config-data\") pod \"memcached-0\" (UID: \"5d815ce9-0ae0-4cd4-a6e0-80dd85a1fbe8\") " pod="openstack/memcached-0" Jan 09 13:48:10 crc kubenswrapper[4919]: I0109 13:48:10.395667 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d815ce9-0ae0-4cd4-a6e0-80dd85a1fbe8-combined-ca-bundle\") pod \"memcached-0\" (UID: \"5d815ce9-0ae0-4cd4-a6e0-80dd85a1fbe8\") " pod="openstack/memcached-0" Jan 09 13:48:10 crc kubenswrapper[4919]: I0109 13:48:10.397868 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5d815ce9-0ae0-4cd4-a6e0-80dd85a1fbe8-kolla-config\") pod \"memcached-0\" (UID: \"5d815ce9-0ae0-4cd4-a6e0-80dd85a1fbe8\") " pod="openstack/memcached-0" Jan 09 13:48:10 crc kubenswrapper[4919]: I0109 13:48:10.399267 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/5d815ce9-0ae0-4cd4-a6e0-80dd85a1fbe8-config-data\") pod \"memcached-0\" (UID: \"5d815ce9-0ae0-4cd4-a6e0-80dd85a1fbe8\") " pod="openstack/memcached-0" Jan 09 13:48:10 crc kubenswrapper[4919]: I0109 13:48:10.423992 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rncg\" (UniqueName: \"kubernetes.io/projected/5d815ce9-0ae0-4cd4-a6e0-80dd85a1fbe8-kube-api-access-9rncg\") pod \"memcached-0\" (UID: \"5d815ce9-0ae0-4cd4-a6e0-80dd85a1fbe8\") " pod="openstack/memcached-0" Jan 09 13:48:10 crc kubenswrapper[4919]: I0109 13:48:10.431802 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d815ce9-0ae0-4cd4-a6e0-80dd85a1fbe8-memcached-tls-certs\") pod \"memcached-0\" (UID: \"5d815ce9-0ae0-4cd4-a6e0-80dd85a1fbe8\") " pod="openstack/memcached-0" Jan 09 13:48:10 crc kubenswrapper[4919]: I0109 13:48:10.440940 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d815ce9-0ae0-4cd4-a6e0-80dd85a1fbe8-combined-ca-bundle\") pod \"memcached-0\" (UID: \"5d815ce9-0ae0-4cd4-a6e0-80dd85a1fbe8\") " pod="openstack/memcached-0" Jan 09 13:48:10 crc kubenswrapper[4919]: I0109 13:48:10.667221 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 09 13:48:11 crc kubenswrapper[4919]: I0109 13:48:11.005152 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"3d0c2080-b1ea-4ff9-ad51-d970cce81d56","Type":"ContainerStarted","Data":"503dd21809827770f405e574a445d296e55dc31edda002096104ca2fb555e0ec"} Jan 09 13:48:11 crc kubenswrapper[4919]: I0109 13:48:11.303277 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 09 13:48:11 crc kubenswrapper[4919]: I0109 13:48:11.885873 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 09 13:48:12 crc kubenswrapper[4919]: I0109 13:48:12.080643 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"5d815ce9-0ae0-4cd4-a6e0-80dd85a1fbe8","Type":"ContainerStarted","Data":"2a53c86e01952fcaebd91a29120d8f721b76671ceef1aadbe555b9d4090f140f"} Jan 09 13:48:12 crc kubenswrapper[4919]: I0109 13:48:12.087769 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a078e997-b08e-44a9-89a7-bf2fe9eaed11","Type":"ContainerStarted","Data":"a02a5c3a101f0c4dc469d89a40b47c5388f6bda23897f576e14fadb39eb11511"} Jan 09 13:48:12 crc kubenswrapper[4919]: I0109 13:48:12.237326 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 09 13:48:12 crc kubenswrapper[4919]: I0109 13:48:12.238575 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 09 13:48:12 crc kubenswrapper[4919]: I0109 13:48:12.243332 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-wt7l5" Jan 09 13:48:12 crc kubenswrapper[4919]: I0109 13:48:12.245924 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 09 13:48:12 crc kubenswrapper[4919]: I0109 13:48:12.340987 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzqgw\" (UniqueName: \"kubernetes.io/projected/6bf2dcbc-c28e-4fd3-81d7-f766225e964d-kube-api-access-jzqgw\") pod \"kube-state-metrics-0\" (UID: \"6bf2dcbc-c28e-4fd3-81d7-f766225e964d\") " pod="openstack/kube-state-metrics-0" Jan 09 13:48:12 crc kubenswrapper[4919]: I0109 13:48:12.441970 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzqgw\" (UniqueName: \"kubernetes.io/projected/6bf2dcbc-c28e-4fd3-81d7-f766225e964d-kube-api-access-jzqgw\") pod \"kube-state-metrics-0\" (UID: \"6bf2dcbc-c28e-4fd3-81d7-f766225e964d\") " pod="openstack/kube-state-metrics-0" Jan 09 13:48:12 crc kubenswrapper[4919]: I0109 13:48:12.481122 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzqgw\" (UniqueName: \"kubernetes.io/projected/6bf2dcbc-c28e-4fd3-81d7-f766225e964d-kube-api-access-jzqgw\") pod \"kube-state-metrics-0\" (UID: \"6bf2dcbc-c28e-4fd3-81d7-f766225e964d\") " pod="openstack/kube-state-metrics-0" Jan 09 13:48:12 crc kubenswrapper[4919]: I0109 13:48:12.566881 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 09 13:48:15 crc kubenswrapper[4919]: I0109 13:48:15.762066 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-n9g6d"] Jan 09 13:48:15 crc kubenswrapper[4919]: I0109 13:48:15.765404 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-n9g6d" Jan 09 13:48:15 crc kubenswrapper[4919]: I0109 13:48:15.768507 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-pmf24" Jan 09 13:48:15 crc kubenswrapper[4919]: I0109 13:48:15.768686 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 09 13:48:15 crc kubenswrapper[4919]: I0109 13:48:15.772490 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 09 13:48:15 crc kubenswrapper[4919]: I0109 13:48:15.784255 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-rrsng"] Jan 09 13:48:15 crc kubenswrapper[4919]: I0109 13:48:15.786624 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-rrsng" Jan 09 13:48:15 crc kubenswrapper[4919]: I0109 13:48:15.800534 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-n9g6d"] Jan 09 13:48:15 crc kubenswrapper[4919]: I0109 13:48:15.812671 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-rrsng"] Jan 09 13:48:15 crc kubenswrapper[4919]: I0109 13:48:15.841986 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/088a3f18-0aab-4042-b674-752c23ed3ac3-combined-ca-bundle\") pod \"ovn-controller-n9g6d\" (UID: \"088a3f18-0aab-4042-b674-752c23ed3ac3\") " pod="openstack/ovn-controller-n9g6d" Jan 09 13:48:15 crc kubenswrapper[4919]: I0109 13:48:15.842090 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/088a3f18-0aab-4042-b674-752c23ed3ac3-var-log-ovn\") pod \"ovn-controller-n9g6d\" (UID: \"088a3f18-0aab-4042-b674-752c23ed3ac3\") " pod="openstack/ovn-controller-n9g6d" Jan 09 13:48:15 crc kubenswrapper[4919]: I0109 13:48:15.842132 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/088a3f18-0aab-4042-b674-752c23ed3ac3-scripts\") pod \"ovn-controller-n9g6d\" (UID: \"088a3f18-0aab-4042-b674-752c23ed3ac3\") " pod="openstack/ovn-controller-n9g6d" Jan 09 13:48:15 crc kubenswrapper[4919]: I0109 13:48:15.842147 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbm6x\" (UniqueName: \"kubernetes.io/projected/088a3f18-0aab-4042-b674-752c23ed3ac3-kube-api-access-pbm6x\") pod \"ovn-controller-n9g6d\" (UID: \"088a3f18-0aab-4042-b674-752c23ed3ac3\") " pod="openstack/ovn-controller-n9g6d" Jan 09 13:48:15 crc kubenswrapper[4919]: I0109 13:48:15.842180 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/91789be0-3c6f-46d6-a222-d75d49e63662-etc-ovs\") pod \"ovn-controller-ovs-rrsng\" (UID: \"91789be0-3c6f-46d6-a222-d75d49e63662\") " pod="openstack/ovn-controller-ovs-rrsng" Jan 09 13:48:15 crc kubenswrapper[4919]: I0109 13:48:15.842200 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/91789be0-3c6f-46d6-a222-d75d49e63662-scripts\") pod \"ovn-controller-ovs-rrsng\" (UID: \"91789be0-3c6f-46d6-a222-d75d49e63662\") " pod="openstack/ovn-controller-ovs-rrsng" Jan 09 13:48:15 crc kubenswrapper[4919]: I0109 13:48:15.842241 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6tbc\" (UniqueName: \"kubernetes.io/projected/91789be0-3c6f-46d6-a222-d75d49e63662-kube-api-access-j6tbc\") pod \"ovn-controller-ovs-rrsng\" (UID: \"91789be0-3c6f-46d6-a222-d75d49e63662\") " pod="openstack/ovn-controller-ovs-rrsng" Jan 09 13:48:15 crc kubenswrapper[4919]: I0109 13:48:15.842260 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/91789be0-3c6f-46d6-a222-d75d49e63662-var-run\") pod \"ovn-controller-ovs-rrsng\" (UID: \"91789be0-3c6f-46d6-a222-d75d49e63662\") " pod="openstack/ovn-controller-ovs-rrsng" Jan 09 13:48:15 crc 
kubenswrapper[4919]: I0109 13:48:15.842281 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/91789be0-3c6f-46d6-a222-d75d49e63662-var-lib\") pod \"ovn-controller-ovs-rrsng\" (UID: \"91789be0-3c6f-46d6-a222-d75d49e63662\") " pod="openstack/ovn-controller-ovs-rrsng" Jan 09 13:48:15 crc kubenswrapper[4919]: I0109 13:48:15.842296 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/088a3f18-0aab-4042-b674-752c23ed3ac3-var-run-ovn\") pod \"ovn-controller-n9g6d\" (UID: \"088a3f18-0aab-4042-b674-752c23ed3ac3\") " pod="openstack/ovn-controller-n9g6d" Jan 09 13:48:15 crc kubenswrapper[4919]: I0109 13:48:15.842323 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/91789be0-3c6f-46d6-a222-d75d49e63662-var-log\") pod \"ovn-controller-ovs-rrsng\" (UID: \"91789be0-3c6f-46d6-a222-d75d49e63662\") " pod="openstack/ovn-controller-ovs-rrsng" Jan 09 13:48:15 crc kubenswrapper[4919]: I0109 13:48:15.842351 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/088a3f18-0aab-4042-b674-752c23ed3ac3-ovn-controller-tls-certs\") pod \"ovn-controller-n9g6d\" (UID: \"088a3f18-0aab-4042-b674-752c23ed3ac3\") " pod="openstack/ovn-controller-n9g6d" Jan 09 13:48:15 crc kubenswrapper[4919]: I0109 13:48:15.842389 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/088a3f18-0aab-4042-b674-752c23ed3ac3-var-run\") pod \"ovn-controller-n9g6d\" (UID: \"088a3f18-0aab-4042-b674-752c23ed3ac3\") " pod="openstack/ovn-controller-n9g6d" Jan 09 13:48:15 crc kubenswrapper[4919]: I0109 13:48:15.943275 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/91789be0-3c6f-46d6-a222-d75d49e63662-var-run\") pod \"ovn-controller-ovs-rrsng\" (UID: \"91789be0-3c6f-46d6-a222-d75d49e63662\") " pod="openstack/ovn-controller-ovs-rrsng" Jan 09 13:48:15 crc kubenswrapper[4919]: I0109 13:48:15.943316 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/91789be0-3c6f-46d6-a222-d75d49e63662-var-lib\") pod \"ovn-controller-ovs-rrsng\" (UID: \"91789be0-3c6f-46d6-a222-d75d49e63662\") " pod="openstack/ovn-controller-ovs-rrsng" Jan 09 13:48:15 crc kubenswrapper[4919]: I0109 13:48:15.943337 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/088a3f18-0aab-4042-b674-752c23ed3ac3-var-run-ovn\") pod \"ovn-controller-n9g6d\" (UID: \"088a3f18-0aab-4042-b674-752c23ed3ac3\") " pod="openstack/ovn-controller-n9g6d" Jan 09 13:48:15 crc kubenswrapper[4919]: I0109 13:48:15.943362 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/91789be0-3c6f-46d6-a222-d75d49e63662-var-log\") pod \"ovn-controller-ovs-rrsng\" (UID: \"91789be0-3c6f-46d6-a222-d75d49e63662\") " pod="openstack/ovn-controller-ovs-rrsng" Jan 09 13:48:15 crc kubenswrapper[4919]: I0109 13:48:15.943402 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/088a3f18-0aab-4042-b674-752c23ed3ac3-ovn-controller-tls-certs\") pod \"ovn-controller-n9g6d\" (UID: \"088a3f18-0aab-4042-b674-752c23ed3ac3\") " pod="openstack/ovn-controller-n9g6d" Jan 09 13:48:15 crc kubenswrapper[4919]: I0109 13:48:15.943426 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/088a3f18-0aab-4042-b674-752c23ed3ac3-var-run\") pod \"ovn-controller-n9g6d\" (UID: \"088a3f18-0aab-4042-b674-752c23ed3ac3\") " pod="openstack/ovn-controller-n9g6d" Jan 09 13:48:15 crc kubenswrapper[4919]: I0109 13:48:15.943446 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/088a3f18-0aab-4042-b674-752c23ed3ac3-combined-ca-bundle\") pod \"ovn-controller-n9g6d\" (UID: \"088a3f18-0aab-4042-b674-752c23ed3ac3\") " pod="openstack/ovn-controller-n9g6d" Jan 09 13:48:15 crc kubenswrapper[4919]: I0109 13:48:15.943488 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/088a3f18-0aab-4042-b674-752c23ed3ac3-var-log-ovn\") pod \"ovn-controller-n9g6d\" (UID: \"088a3f18-0aab-4042-b674-752c23ed3ac3\") " pod="openstack/ovn-controller-n9g6d" Jan 09 13:48:15 crc kubenswrapper[4919]: I0109 13:48:15.943519 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/088a3f18-0aab-4042-b674-752c23ed3ac3-scripts\") pod \"ovn-controller-n9g6d\" (UID: \"088a3f18-0aab-4042-b674-752c23ed3ac3\") " pod="openstack/ovn-controller-n9g6d" Jan 09 13:48:15 crc kubenswrapper[4919]: I0109 13:48:15.943533 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbm6x\" (UniqueName: \"kubernetes.io/projected/088a3f18-0aab-4042-b674-752c23ed3ac3-kube-api-access-pbm6x\") pod \"ovn-controller-n9g6d\" (UID: \"088a3f18-0aab-4042-b674-752c23ed3ac3\") " pod="openstack/ovn-controller-n9g6d" Jan 09 13:48:15 crc kubenswrapper[4919]: I0109 13:48:15.943549 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/91789be0-3c6f-46d6-a222-d75d49e63662-etc-ovs\") pod \"ovn-controller-ovs-rrsng\" (UID: \"91789be0-3c6f-46d6-a222-d75d49e63662\") " pod="openstack/ovn-controller-ovs-rrsng" Jan 09 13:48:15 crc kubenswrapper[4919]: I0109 13:48:15.943565 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/91789be0-3c6f-46d6-a222-d75d49e63662-scripts\") pod \"ovn-controller-ovs-rrsng\" (UID: \"91789be0-3c6f-46d6-a222-d75d49e63662\") " pod="openstack/ovn-controller-ovs-rrsng" Jan 09 13:48:15 crc kubenswrapper[4919]: I0109 13:48:15.943595 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6tbc\" (UniqueName: \"kubernetes.io/projected/91789be0-3c6f-46d6-a222-d75d49e63662-kube-api-access-j6tbc\") pod \"ovn-controller-ovs-rrsng\" (UID: \"91789be0-3c6f-46d6-a222-d75d49e63662\") " pod="openstack/ovn-controller-ovs-rrsng" Jan 09 13:48:15 crc kubenswrapper[4919]: I0109 13:48:15.943854 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/91789be0-3c6f-46d6-a222-d75d49e63662-var-lib\") pod \"ovn-controller-ovs-rrsng\" (UID: \"91789be0-3c6f-46d6-a222-d75d49e63662\") " 
pod="openstack/ovn-controller-ovs-rrsng" Jan 09 13:48:15 crc kubenswrapper[4919]: I0109 13:48:15.943941 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/91789be0-3c6f-46d6-a222-d75d49e63662-var-run\") pod \"ovn-controller-ovs-rrsng\" (UID: \"91789be0-3c6f-46d6-a222-d75d49e63662\") " pod="openstack/ovn-controller-ovs-rrsng" Jan 09 13:48:15 crc kubenswrapper[4919]: I0109 13:48:15.944017 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/088a3f18-0aab-4042-b674-752c23ed3ac3-var-run-ovn\") pod \"ovn-controller-n9g6d\" (UID: \"088a3f18-0aab-4042-b674-752c23ed3ac3\") " pod="openstack/ovn-controller-n9g6d" Jan 09 13:48:15 crc kubenswrapper[4919]: I0109 13:48:15.944107 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/91789be0-3c6f-46d6-a222-d75d49e63662-var-log\") pod \"ovn-controller-ovs-rrsng\" (UID: \"91789be0-3c6f-46d6-a222-d75d49e63662\") " pod="openstack/ovn-controller-ovs-rrsng" Jan 09 13:48:15 crc kubenswrapper[4919]: I0109 13:48:15.944245 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/91789be0-3c6f-46d6-a222-d75d49e63662-etc-ovs\") pod \"ovn-controller-ovs-rrsng\" (UID: \"91789be0-3c6f-46d6-a222-d75d49e63662\") " pod="openstack/ovn-controller-ovs-rrsng" Jan 09 13:48:15 crc kubenswrapper[4919]: I0109 13:48:15.944020 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/088a3f18-0aab-4042-b674-752c23ed3ac3-var-log-ovn\") pod \"ovn-controller-n9g6d\" (UID: \"088a3f18-0aab-4042-b674-752c23ed3ac3\") " pod="openstack/ovn-controller-n9g6d" Jan 09 13:48:15 crc kubenswrapper[4919]: I0109 13:48:15.944561 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/088a3f18-0aab-4042-b674-752c23ed3ac3-var-run\") pod \"ovn-controller-n9g6d\" (UID: \"088a3f18-0aab-4042-b674-752c23ed3ac3\") " pod="openstack/ovn-controller-n9g6d" Jan 09 13:48:15 crc kubenswrapper[4919]: I0109 13:48:15.946410 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/088a3f18-0aab-4042-b674-752c23ed3ac3-scripts\") pod \"ovn-controller-n9g6d\" (UID: \"088a3f18-0aab-4042-b674-752c23ed3ac3\") " pod="openstack/ovn-controller-n9g6d" Jan 09 13:48:15 crc kubenswrapper[4919]: I0109 13:48:15.946568 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/91789be0-3c6f-46d6-a222-d75d49e63662-scripts\") pod \"ovn-controller-ovs-rrsng\" (UID: \"91789be0-3c6f-46d6-a222-d75d49e63662\") " pod="openstack/ovn-controller-ovs-rrsng" Jan 09 13:48:15 crc kubenswrapper[4919]: I0109 13:48:15.951891 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/088a3f18-0aab-4042-b674-752c23ed3ac3-ovn-controller-tls-certs\") pod \"ovn-controller-n9g6d\" (UID: \"088a3f18-0aab-4042-b674-752c23ed3ac3\") " pod="openstack/ovn-controller-n9g6d" Jan 09 13:48:15 crc kubenswrapper[4919]: I0109 13:48:15.955166 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/088a3f18-0aab-4042-b674-752c23ed3ac3-combined-ca-bundle\") pod 
\"ovn-controller-n9g6d\" (UID: \"088a3f18-0aab-4042-b674-752c23ed3ac3\") " pod="openstack/ovn-controller-n9g6d" Jan 09 13:48:15 crc kubenswrapper[4919]: I0109 13:48:15.966298 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbm6x\" (UniqueName: \"kubernetes.io/projected/088a3f18-0aab-4042-b674-752c23ed3ac3-kube-api-access-pbm6x\") pod \"ovn-controller-n9g6d\" (UID: \"088a3f18-0aab-4042-b674-752c23ed3ac3\") " pod="openstack/ovn-controller-n9g6d" Jan 09 13:48:15 crc kubenswrapper[4919]: I0109 13:48:15.970343 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6tbc\" (UniqueName: \"kubernetes.io/projected/91789be0-3c6f-46d6-a222-d75d49e63662-kube-api-access-j6tbc\") pod \"ovn-controller-ovs-rrsng\" (UID: \"91789be0-3c6f-46d6-a222-d75d49e63662\") " pod="openstack/ovn-controller-ovs-rrsng" Jan 09 13:48:16 crc kubenswrapper[4919]: I0109 13:48:16.099093 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-n9g6d" Jan 09 13:48:16 crc kubenswrapper[4919]: I0109 13:48:16.106518 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-rrsng" Jan 09 13:48:16 crc kubenswrapper[4919]: I0109 13:48:16.624115 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 09 13:48:16 crc kubenswrapper[4919]: I0109 13:48:16.625499 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 09 13:48:16 crc kubenswrapper[4919]: I0109 13:48:16.636503 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 09 13:48:16 crc kubenswrapper[4919]: I0109 13:48:16.636673 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 09 13:48:16 crc kubenswrapper[4919]: I0109 13:48:16.636749 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 09 13:48:16 crc kubenswrapper[4919]: I0109 13:48:16.638360 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 09 13:48:16 crc kubenswrapper[4919]: I0109 13:48:16.644965 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 09 13:48:16 crc kubenswrapper[4919]: I0109 13:48:16.645500 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-lqxcq" Jan 09 13:48:16 crc kubenswrapper[4919]: I0109 13:48:16.773794 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/80e0f01c-3e7c-456d-ae74-276ef085ff36-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"80e0f01c-3e7c-456d-ae74-276ef085ff36\") " pod="openstack/ovsdbserver-nb-0" Jan 09 13:48:16 crc kubenswrapper[4919]: I0109 13:48:16.773883 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"80e0f01c-3e7c-456d-ae74-276ef085ff36\") " pod="openstack/ovsdbserver-nb-0" Jan 09 13:48:16 crc kubenswrapper[4919]: I0109 13:48:16.773918 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/80e0f01c-3e7c-456d-ae74-276ef085ff36-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"80e0f01c-3e7c-456d-ae74-276ef085ff36\") " pod="openstack/ovsdbserver-nb-0" Jan 09 13:48:16 crc kubenswrapper[4919]: I0109 13:48:16.773939 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80e0f01c-3e7c-456d-ae74-276ef085ff36-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"80e0f01c-3e7c-456d-ae74-276ef085ff36\") " pod="openstack/ovsdbserver-nb-0" Jan 09 13:48:16 crc kubenswrapper[4919]: I0109 13:48:16.773962 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/80e0f01c-3e7c-456d-ae74-276ef085ff36-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"80e0f01c-3e7c-456d-ae74-276ef085ff36\") " pod="openstack/ovsdbserver-nb-0" Jan 09 13:48:16 crc kubenswrapper[4919]: I0109 13:48:16.773984 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80e0f01c-3e7c-456d-ae74-276ef085ff36-config\") pod \"ovsdbserver-nb-0\" (UID: \"80e0f01c-3e7c-456d-ae74-276ef085ff36\") " pod="openstack/ovsdbserver-nb-0" Jan 09 13:48:16 crc kubenswrapper[4919]: I0109 13:48:16.774013 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp264\" (UniqueName: \"kubernetes.io/projected/80e0f01c-3e7c-456d-ae74-276ef085ff36-kube-api-access-mp264\") pod \"ovsdbserver-nb-0\" (UID: \"80e0f01c-3e7c-456d-ae74-276ef085ff36\") " pod="openstack/ovsdbserver-nb-0" Jan 09 13:48:16 crc kubenswrapper[4919]: I0109 13:48:16.774047 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/80e0f01c-3e7c-456d-ae74-276ef085ff36-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"80e0f01c-3e7c-456d-ae74-276ef085ff36\") " pod="openstack/ovsdbserver-nb-0" Jan 09 13:48:16 crc kubenswrapper[4919]: I0109 13:48:16.875666 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"80e0f01c-3e7c-456d-ae74-276ef085ff36\") " pod="openstack/ovsdbserver-nb-0" Jan 09 13:48:16 crc kubenswrapper[4919]: I0109 13:48:16.875715 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/80e0f01c-3e7c-456d-ae74-276ef085ff36-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"80e0f01c-3e7c-456d-ae74-276ef085ff36\") " pod="openstack/ovsdbserver-nb-0" Jan 09 13:48:16 crc kubenswrapper[4919]: I0109 13:48:16.875746 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80e0f01c-3e7c-456d-ae74-276ef085ff36-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"80e0f01c-3e7c-456d-ae74-276ef085ff36\") " pod="openstack/ovsdbserver-nb-0" Jan 09 13:48:16 crc kubenswrapper[4919]: I0109 13:48:16.875780 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/80e0f01c-3e7c-456d-ae74-276ef085ff36-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"80e0f01c-3e7c-456d-ae74-276ef085ff36\") " 
pod="openstack/ovsdbserver-nb-0" Jan 09 13:48:16 crc kubenswrapper[4919]: I0109 13:48:16.875875 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80e0f01c-3e7c-456d-ae74-276ef085ff36-config\") pod \"ovsdbserver-nb-0\" (UID: \"80e0f01c-3e7c-456d-ae74-276ef085ff36\") " pod="openstack/ovsdbserver-nb-0" Jan 09 13:48:16 crc kubenswrapper[4919]: I0109 13:48:16.875926 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mp264\" (UniqueName: \"kubernetes.io/projected/80e0f01c-3e7c-456d-ae74-276ef085ff36-kube-api-access-mp264\") pod \"ovsdbserver-nb-0\" (UID: \"80e0f01c-3e7c-456d-ae74-276ef085ff36\") " pod="openstack/ovsdbserver-nb-0" Jan 09 13:48:16 crc kubenswrapper[4919]: I0109 13:48:16.876002 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/80e0f01c-3e7c-456d-ae74-276ef085ff36-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"80e0f01c-3e7c-456d-ae74-276ef085ff36\") " pod="openstack/ovsdbserver-nb-0" Jan 09 13:48:16 crc kubenswrapper[4919]: I0109 13:48:16.876029 4919 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"80e0f01c-3e7c-456d-ae74-276ef085ff36\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-nb-0" Jan 09 13:48:16 crc kubenswrapper[4919]: I0109 13:48:16.876602 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/80e0f01c-3e7c-456d-ae74-276ef085ff36-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"80e0f01c-3e7c-456d-ae74-276ef085ff36\") " pod="openstack/ovsdbserver-nb-0" Jan 09 13:48:16 crc kubenswrapper[4919]: I0109 13:48:16.876843 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80e0f01c-3e7c-456d-ae74-276ef085ff36-config\") pod \"ovsdbserver-nb-0\" (UID: \"80e0f01c-3e7c-456d-ae74-276ef085ff36\") " pod="openstack/ovsdbserver-nb-0" Jan 09 13:48:16 crc kubenswrapper[4919]: I0109 13:48:16.877023 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/80e0f01c-3e7c-456d-ae74-276ef085ff36-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"80e0f01c-3e7c-456d-ae74-276ef085ff36\") " pod="openstack/ovsdbserver-nb-0" Jan 09 13:48:16 crc kubenswrapper[4919]: I0109 13:48:16.878075 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/80e0f01c-3e7c-456d-ae74-276ef085ff36-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"80e0f01c-3e7c-456d-ae74-276ef085ff36\") " pod="openstack/ovsdbserver-nb-0" Jan 09 13:48:16 crc kubenswrapper[4919]: I0109 13:48:16.886977 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/80e0f01c-3e7c-456d-ae74-276ef085ff36-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"80e0f01c-3e7c-456d-ae74-276ef085ff36\") " pod="openstack/ovsdbserver-nb-0" Jan 09 13:48:16 crc kubenswrapper[4919]: I0109 13:48:16.887836 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/80e0f01c-3e7c-456d-ae74-276ef085ff36-ovsdbserver-nb-tls-certs\") pod 
\"ovsdbserver-nb-0\" (UID: \"80e0f01c-3e7c-456d-ae74-276ef085ff36\") " pod="openstack/ovsdbserver-nb-0" Jan 09 13:48:16 crc kubenswrapper[4919]: I0109 13:48:16.893974 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80e0f01c-3e7c-456d-ae74-276ef085ff36-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"80e0f01c-3e7c-456d-ae74-276ef085ff36\") " pod="openstack/ovsdbserver-nb-0" Jan 09 13:48:16 crc kubenswrapper[4919]: I0109 13:48:16.897129 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mp264\" (UniqueName: \"kubernetes.io/projected/80e0f01c-3e7c-456d-ae74-276ef085ff36-kube-api-access-mp264\") pod \"ovsdbserver-nb-0\" (UID: \"80e0f01c-3e7c-456d-ae74-276ef085ff36\") " pod="openstack/ovsdbserver-nb-0" Jan 09 13:48:16 crc kubenswrapper[4919]: I0109 13:48:16.917496 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"80e0f01c-3e7c-456d-ae74-276ef085ff36\") " pod="openstack/ovsdbserver-nb-0" Jan 09 13:48:16 crc kubenswrapper[4919]: I0109 13:48:16.991239 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 09 13:48:19 crc kubenswrapper[4919]: I0109 13:48:19.063522 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 09 13:48:19 crc kubenswrapper[4919]: I0109 13:48:19.065409 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 09 13:48:19 crc kubenswrapper[4919]: I0109 13:48:19.068080 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 09 13:48:19 crc kubenswrapper[4919]: I0109 13:48:19.069133 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 09 13:48:19 crc kubenswrapper[4919]: I0109 13:48:19.069472 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-22mxj" Jan 09 13:48:19 crc kubenswrapper[4919]: I0109 13:48:19.069752 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 09 13:48:19 crc kubenswrapper[4919]: I0109 13:48:19.072221 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 09 13:48:19 crc kubenswrapper[4919]: I0109 13:48:19.229585 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/62681dab-a75d-4270-bb2f-c8f963838172-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"62681dab-a75d-4270-bb2f-c8f963838172\") " pod="openstack/ovsdbserver-sb-0" Jan 09 13:48:19 crc kubenswrapper[4919]: I0109 13:48:19.229681 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62681dab-a75d-4270-bb2f-c8f963838172-config\") pod \"ovsdbserver-sb-0\" (UID: \"62681dab-a75d-4270-bb2f-c8f963838172\") " pod="openstack/ovsdbserver-sb-0" Jan 09 13:48:19 crc kubenswrapper[4919]: I0109 13:48:19.229730 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnsqj\" (UniqueName: \"kubernetes.io/projected/62681dab-a75d-4270-bb2f-c8f963838172-kube-api-access-pnsqj\") 
pod \"ovsdbserver-sb-0\" (UID: \"62681dab-a75d-4270-bb2f-c8f963838172\") " pod="openstack/ovsdbserver-sb-0" Jan 09 13:48:19 crc kubenswrapper[4919]: I0109 13:48:19.229766 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/62681dab-a75d-4270-bb2f-c8f963838172-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"62681dab-a75d-4270-bb2f-c8f963838172\") " pod="openstack/ovsdbserver-sb-0" Jan 09 13:48:19 crc kubenswrapper[4919]: I0109 13:48:19.229814 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/62681dab-a75d-4270-bb2f-c8f963838172-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"62681dab-a75d-4270-bb2f-c8f963838172\") " pod="openstack/ovsdbserver-sb-0" Jan 09 13:48:19 crc kubenswrapper[4919]: I0109 13:48:19.229913 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"62681dab-a75d-4270-bb2f-c8f963838172\") " pod="openstack/ovsdbserver-sb-0" Jan 09 13:48:19 crc kubenswrapper[4919]: I0109 13:48:19.229983 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62681dab-a75d-4270-bb2f-c8f963838172-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"62681dab-a75d-4270-bb2f-c8f963838172\") " pod="openstack/ovsdbserver-sb-0" Jan 09 13:48:19 crc kubenswrapper[4919]: I0109 13:48:19.230035 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/62681dab-a75d-4270-bb2f-c8f963838172-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"62681dab-a75d-4270-bb2f-c8f963838172\") " pod="openstack/ovsdbserver-sb-0" Jan 09 13:48:19 crc kubenswrapper[4919]: I0109 13:48:19.331274 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/62681dab-a75d-4270-bb2f-c8f963838172-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"62681dab-a75d-4270-bb2f-c8f963838172\") " pod="openstack/ovsdbserver-sb-0" Jan 09 13:48:19 crc kubenswrapper[4919]: I0109 13:48:19.331359 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62681dab-a75d-4270-bb2f-c8f963838172-config\") pod \"ovsdbserver-sb-0\" (UID: \"62681dab-a75d-4270-bb2f-c8f963838172\") " pod="openstack/ovsdbserver-sb-0" Jan 09 13:48:19 crc kubenswrapper[4919]: I0109 13:48:19.331386 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnsqj\" (UniqueName: \"kubernetes.io/projected/62681dab-a75d-4270-bb2f-c8f963838172-kube-api-access-pnsqj\") pod \"ovsdbserver-sb-0\" (UID: \"62681dab-a75d-4270-bb2f-c8f963838172\") " pod="openstack/ovsdbserver-sb-0" Jan 09 13:48:19 crc kubenswrapper[4919]: I0109 13:48:19.331419 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/62681dab-a75d-4270-bb2f-c8f963838172-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"62681dab-a75d-4270-bb2f-c8f963838172\") " pod="openstack/ovsdbserver-sb-0" Jan 09 13:48:19 crc kubenswrapper[4919]: I0109 13:48:19.331445 4919 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/62681dab-a75d-4270-bb2f-c8f963838172-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"62681dab-a75d-4270-bb2f-c8f963838172\") " pod="openstack/ovsdbserver-sb-0" Jan 09 13:48:19 crc kubenswrapper[4919]: I0109 13:48:19.331474 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"62681dab-a75d-4270-bb2f-c8f963838172\") " pod="openstack/ovsdbserver-sb-0" Jan 09 13:48:19 crc kubenswrapper[4919]: I0109 13:48:19.331513 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62681dab-a75d-4270-bb2f-c8f963838172-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"62681dab-a75d-4270-bb2f-c8f963838172\") " pod="openstack/ovsdbserver-sb-0" Jan 09 13:48:19 crc kubenswrapper[4919]: I0109 13:48:19.331550 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/62681dab-a75d-4270-bb2f-c8f963838172-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"62681dab-a75d-4270-bb2f-c8f963838172\") " pod="openstack/ovsdbserver-sb-0" Jan 09 13:48:19 crc kubenswrapper[4919]: I0109 13:48:19.332252 4919 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"62681dab-a75d-4270-bb2f-c8f963838172\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/ovsdbserver-sb-0" Jan 09 13:48:19 crc kubenswrapper[4919]: I0109 13:48:19.332273 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/62681dab-a75d-4270-bb2f-c8f963838172-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"62681dab-a75d-4270-bb2f-c8f963838172\") " pod="openstack/ovsdbserver-sb-0" Jan 09 13:48:19 crc kubenswrapper[4919]: I0109 13:48:19.334763 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/62681dab-a75d-4270-bb2f-c8f963838172-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"62681dab-a75d-4270-bb2f-c8f963838172\") " pod="openstack/ovsdbserver-sb-0" Jan 09 13:48:19 crc kubenswrapper[4919]: I0109 13:48:19.336755 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62681dab-a75d-4270-bb2f-c8f963838172-config\") pod \"ovsdbserver-sb-0\" (UID: \"62681dab-a75d-4270-bb2f-c8f963838172\") " pod="openstack/ovsdbserver-sb-0" Jan 09 13:48:19 crc kubenswrapper[4919]: I0109 13:48:19.341572 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/62681dab-a75d-4270-bb2f-c8f963838172-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"62681dab-a75d-4270-bb2f-c8f963838172\") " pod="openstack/ovsdbserver-sb-0" Jan 09 13:48:19 crc kubenswrapper[4919]: I0109 13:48:19.352171 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/62681dab-a75d-4270-bb2f-c8f963838172-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"62681dab-a75d-4270-bb2f-c8f963838172\") " 
pod="openstack/ovsdbserver-sb-0" Jan 09 13:48:19 crc kubenswrapper[4919]: I0109 13:48:19.354540 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnsqj\" (UniqueName: \"kubernetes.io/projected/62681dab-a75d-4270-bb2f-c8f963838172-kube-api-access-pnsqj\") pod \"ovsdbserver-sb-0\" (UID: \"62681dab-a75d-4270-bb2f-c8f963838172\") " pod="openstack/ovsdbserver-sb-0" Jan 09 13:48:19 crc kubenswrapper[4919]: I0109 13:48:19.359255 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"62681dab-a75d-4270-bb2f-c8f963838172\") " pod="openstack/ovsdbserver-sb-0" Jan 09 13:48:19 crc kubenswrapper[4919]: I0109 13:48:19.359462 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62681dab-a75d-4270-bb2f-c8f963838172-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"62681dab-a75d-4270-bb2f-c8f963838172\") " pod="openstack/ovsdbserver-sb-0" Jan 09 13:48:19 crc kubenswrapper[4919]: I0109 13:48:19.393328 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 09 13:48:29 crc kubenswrapper[4919]: E0109 13:48:29.540875 4919 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d" Jan 09 13:48:29 crc kubenswrapper[4919]: E0109 13:48:29.542501 4919 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xh5gt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(ba39e0c2-1804-45a7-9dd1-2c20f229b648): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 09 13:48:29 crc kubenswrapper[4919]: E0109 13:48:29.543726 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="ba39e0c2-1804-45a7-9dd1-2c20f229b648" Jan 09 13:48:29 crc kubenswrapper[4919]: E0109 13:48:29.969797 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d\\\"\"" pod="openstack/rabbitmq-server-0" podUID="ba39e0c2-1804-45a7-9dd1-2c20f229b648" Jan 09 13:48:40 crc kubenswrapper[4919]: E0109 13:48:40.348947 4919 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 09 13:48:40 crc kubenswrapper[4919]: E0109 13:48:40.349708 4919 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces 
--listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hkl8r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5f854695bc-lrsz7_openstack(6cc1a82c-965c-4dce-b771-f77983e88d20): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 09 13:48:40 crc kubenswrapper[4919]: E0109 13:48:40.351094 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5f854695bc-lrsz7" podUID="6cc1a82c-965c-4dce-b771-f77983e88d20" Jan 09 13:48:40 crc kubenswrapper[4919]: E0109 13:48:40.388132 4919 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 09 13:48:40 crc kubenswrapper[4919]: E0109 13:48:40.388304 4919 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x7rqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-744ffd65bc-vf9kv_openstack(3e5fc04e-7f35-4d71-a257-e6d492c2d399): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 09 13:48:40 crc kubenswrapper[4919]: E0109 13:48:40.389848 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-744ffd65bc-vf9kv" podUID="3e5fc04e-7f35-4d71-a257-e6d492c2d399" Jan 09 13:48:40 crc kubenswrapper[4919]: E0109 13:48:40.394330 4919 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 09 13:48:40 crc kubenswrapper[4919]: E0109 13:48:40.394547 4919 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pk9kg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-95f5f6995-kdzww_openstack(b76bf527-b2ca-4359-90f2-b9fdf5767d66): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 09 13:48:40 crc kubenswrapper[4919]: E0109 13:48:40.396273 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-95f5f6995-kdzww" podUID="b76bf527-b2ca-4359-90f2-b9fdf5767d66" Jan 09 13:48:40 crc kubenswrapper[4919]: E0109 13:48:40.408277 4919 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 09 13:48:40 crc kubenswrapper[4919]: E0109 13:48:40.408398 4919 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pjhz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-84bb9d8bd9-2chl6_openstack(ca1399ee-0254-4313-83d7-bb6cdaef3dd7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 09 13:48:40 crc kubenswrapper[4919]: E0109 13:48:40.409776 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-84bb9d8bd9-2chl6" podUID="ca1399ee-0254-4313-83d7-bb6cdaef3dd7" Jan 09 13:48:40 crc kubenswrapper[4919]: I0109 13:48:40.901825 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 09 13:48:40 crc kubenswrapper[4919]: W0109 13:48:40.906423 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6bf2dcbc_c28e_4fd3_81d7_f766225e964d.slice/crio-d21db00e71c00bffbd35d0e7c99cfcd481054d94e6c8ca4525a8543d191563c0 WatchSource:0}: Error finding container d21db00e71c00bffbd35d0e7c99cfcd481054d94e6c8ca4525a8543d191563c0: Status 404 returned error can't find the container with id d21db00e71c00bffbd35d0e7c99cfcd481054d94e6c8ca4525a8543d191563c0 Jan 09 13:48:40 crc kubenswrapper[4919]: I0109 13:48:40.968730 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 09 13:48:41 crc kubenswrapper[4919]: I0109 13:48:41.047272 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"80e0f01c-3e7c-456d-ae74-276ef085ff36","Type":"ContainerStarted","Data":"6a6d23d389fbb7d2067a14f32b902a1efc7a8649a0b37e47ce404036070cfd03"} Jan 09 13:48:41 crc kubenswrapper[4919]: I0109 13:48:41.048814 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a078e997-b08e-44a9-89a7-bf2fe9eaed11","Type":"ContainerStarted","Data":"03871b7a074eeebfaaeca01cc8f35515351f7b54197e107c822cd34bc19ab80e"} 
Jan 09 13:48:41 crc kubenswrapper[4919]: I0109 13:48:41.050765 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6bf2dcbc-c28e-4fd3-81d7-f766225e964d","Type":"ContainerStarted","Data":"d21db00e71c00bffbd35d0e7c99cfcd481054d94e6c8ca4525a8543d191563c0"} Jan 09 13:48:41 crc kubenswrapper[4919]: I0109 13:48:41.058298 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"3d0c2080-b1ea-4ff9-ad51-d970cce81d56","Type":"ContainerStarted","Data":"7afb9ee7c565f91d68285725812ee6fb698291ad97b6ed276121ea72babc0aff"} Jan 09 13:48:41 crc kubenswrapper[4919]: I0109 13:48:41.060862 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"5d815ce9-0ae0-4cd4-a6e0-80dd85a1fbe8","Type":"ContainerStarted","Data":"797265e79d7483a3e2ea40aa8e4ddad3eae217476082f999b099712951dc5770"} Jan 09 13:48:41 crc kubenswrapper[4919]: I0109 13:48:41.061032 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 09 13:48:41 crc kubenswrapper[4919]: E0109 13:48:41.061468 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33\\\"\"" pod="openstack/dnsmasq-dns-95f5f6995-kdzww" podUID="b76bf527-b2ca-4359-90f2-b9fdf5767d66" Jan 09 13:48:41 crc kubenswrapper[4919]: E0109 13:48:41.061797 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33\\\"\"" pod="openstack/dnsmasq-dns-744ffd65bc-vf9kv" podUID="3e5fc04e-7f35-4d71-a257-e6d492c2d399" Jan 09 13:48:41 crc kubenswrapper[4919]: I0109 13:48:41.066128 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 09 13:48:41 crc kubenswrapper[4919]: I0109 13:48:41.077955 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-n9g6d"] Jan 09 13:48:41 crc kubenswrapper[4919]: I0109 13:48:41.162451 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=3.578038356 podStartE2EDuration="31.162433008s" podCreationTimestamp="2026-01-09 13:48:10 +0000 UTC" firstStartedPulling="2026-01-09 13:48:11.959117747 +0000 UTC m=+1071.506957197" lastFinishedPulling="2026-01-09 13:48:39.543512399 +0000 UTC m=+1099.091351849" observedRunningTime="2026-01-09 13:48:41.14586811 +0000 UTC m=+1100.693707560" watchObservedRunningTime="2026-01-09 13:48:41.162433008 +0000 UTC m=+1100.710272458" Jan 09 13:48:41 crc kubenswrapper[4919]: I0109 13:48:41.227191 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-rrsng"] Jan 09 13:48:41 crc kubenswrapper[4919]: W0109 13:48:41.236169 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91789be0_3c6f_46d6_a222_d75d49e63662.slice/crio-33f60638695cfc0f1b6b086ce16d07e52360ad89489f9a87a29548e95e5e45ef WatchSource:0}: Error finding container 33f60638695cfc0f1b6b086ce16d07e52360ad89489f9a87a29548e95e5e45ef: Status 404 returned error can't find the container with id 
33f60638695cfc0f1b6b086ce16d07e52360ad89489f9a87a29548e95e5e45ef Jan 09 13:48:41 crc kubenswrapper[4919]: I0109 13:48:41.503779 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-2chl6" Jan 09 13:48:41 crc kubenswrapper[4919]: I0109 13:48:41.662049 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjhz5\" (UniqueName: \"kubernetes.io/projected/ca1399ee-0254-4313-83d7-bb6cdaef3dd7-kube-api-access-pjhz5\") pod \"ca1399ee-0254-4313-83d7-bb6cdaef3dd7\" (UID: \"ca1399ee-0254-4313-83d7-bb6cdaef3dd7\") " Jan 09 13:48:41 crc kubenswrapper[4919]: I0109 13:48:41.662190 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca1399ee-0254-4313-83d7-bb6cdaef3dd7-config\") pod \"ca1399ee-0254-4313-83d7-bb6cdaef3dd7\" (UID: \"ca1399ee-0254-4313-83d7-bb6cdaef3dd7\") " Jan 09 13:48:41 crc kubenswrapper[4919]: I0109 13:48:41.662980 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca1399ee-0254-4313-83d7-bb6cdaef3dd7-config" (OuterVolumeSpecName: "config") pod "ca1399ee-0254-4313-83d7-bb6cdaef3dd7" (UID: "ca1399ee-0254-4313-83d7-bb6cdaef3dd7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:48:41 crc kubenswrapper[4919]: I0109 13:48:41.668996 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca1399ee-0254-4313-83d7-bb6cdaef3dd7-kube-api-access-pjhz5" (OuterVolumeSpecName: "kube-api-access-pjhz5") pod "ca1399ee-0254-4313-83d7-bb6cdaef3dd7" (UID: "ca1399ee-0254-4313-83d7-bb6cdaef3dd7"). InnerVolumeSpecName "kube-api-access-pjhz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:48:41 crc kubenswrapper[4919]: I0109 13:48:41.669626 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-lrsz7" Jan 09 13:48:41 crc kubenswrapper[4919]: I0109 13:48:41.764055 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkl8r\" (UniqueName: \"kubernetes.io/projected/6cc1a82c-965c-4dce-b771-f77983e88d20-kube-api-access-hkl8r\") pod \"6cc1a82c-965c-4dce-b771-f77983e88d20\" (UID: \"6cc1a82c-965c-4dce-b771-f77983e88d20\") " Jan 09 13:48:41 crc kubenswrapper[4919]: I0109 13:48:41.764160 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cc1a82c-965c-4dce-b771-f77983e88d20-config\") pod \"6cc1a82c-965c-4dce-b771-f77983e88d20\" (UID: \"6cc1a82c-965c-4dce-b771-f77983e88d20\") " Jan 09 13:48:41 crc kubenswrapper[4919]: I0109 13:48:41.764387 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cc1a82c-965c-4dce-b771-f77983e88d20-dns-svc\") pod \"6cc1a82c-965c-4dce-b771-f77983e88d20\" (UID: \"6cc1a82c-965c-4dce-b771-f77983e88d20\") " Jan 09 13:48:41 crc kubenswrapper[4919]: I0109 13:48:41.764563 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cc1a82c-965c-4dce-b771-f77983e88d20-config" (OuterVolumeSpecName: "config") pod "6cc1a82c-965c-4dce-b771-f77983e88d20" (UID: "6cc1a82c-965c-4dce-b771-f77983e88d20"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:48:41 crc kubenswrapper[4919]: I0109 13:48:41.764882 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjhz5\" (UniqueName: \"kubernetes.io/projected/ca1399ee-0254-4313-83d7-bb6cdaef3dd7-kube-api-access-pjhz5\") on node \"crc\" DevicePath \"\"" Jan 09 13:48:41 crc kubenswrapper[4919]: I0109 13:48:41.764905 4919 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cc1a82c-965c-4dce-b771-f77983e88d20-config\") on node \"crc\" DevicePath \"\"" Jan 09 13:48:41 crc kubenswrapper[4919]: I0109 13:48:41.764918 4919 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca1399ee-0254-4313-83d7-bb6cdaef3dd7-config\") on node \"crc\" DevicePath \"\"" Jan 09 13:48:41 crc kubenswrapper[4919]: I0109 13:48:41.764957 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cc1a82c-965c-4dce-b771-f77983e88d20-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6cc1a82c-965c-4dce-b771-f77983e88d20" (UID: "6cc1a82c-965c-4dce-b771-f77983e88d20"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:48:41 crc kubenswrapper[4919]: I0109 13:48:41.767062 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cc1a82c-965c-4dce-b771-f77983e88d20-kube-api-access-hkl8r" (OuterVolumeSpecName: "kube-api-access-hkl8r") pod "6cc1a82c-965c-4dce-b771-f77983e88d20" (UID: "6cc1a82c-965c-4dce-b771-f77983e88d20"). InnerVolumeSpecName "kube-api-access-hkl8r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:48:41 crc kubenswrapper[4919]: I0109 13:48:41.867066 4919 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cc1a82c-965c-4dce-b771-f77983e88d20-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 09 13:48:41 crc kubenswrapper[4919]: I0109 13:48:41.867379 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkl8r\" (UniqueName: \"kubernetes.io/projected/6cc1a82c-965c-4dce-b771-f77983e88d20-kube-api-access-hkl8r\") on node \"crc\" DevicePath \"\"" Jan 09 13:48:42 crc kubenswrapper[4919]: I0109 13:48:42.069659 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9b80a84d-c869-407b-b3d2-3be828183ae5","Type":"ContainerStarted","Data":"94957647709fe2c44cd5a70c7a2b949171bebfd17eaf58facd52a3975416fc50"} Jan 09 13:48:42 crc kubenswrapper[4919]: I0109 13:48:42.072528 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-2chl6" Jan 09 13:48:42 crc kubenswrapper[4919]: I0109 13:48:42.072560 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bb9d8bd9-2chl6" event={"ID":"ca1399ee-0254-4313-83d7-bb6cdaef3dd7","Type":"ContainerDied","Data":"3e45b7b0bdb7fd190a92af37ecfc149db975dbaf047da1541f9fff4ce5bd8ccf"} Jan 09 13:48:42 crc kubenswrapper[4919]: I0109 13:48:42.075070 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f854695bc-lrsz7" event={"ID":"6cc1a82c-965c-4dce-b771-f77983e88d20","Type":"ContainerDied","Data":"91f11f4cf168e85864357a13800e1326b42d468057df79deef7bde41d98f79a3"} Jan 09 13:48:42 crc kubenswrapper[4919]: I0109 13:48:42.075167 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-lrsz7" Jan 09 13:48:42 crc kubenswrapper[4919]: I0109 13:48:42.090329 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-n9g6d" event={"ID":"088a3f18-0aab-4042-b674-752c23ed3ac3","Type":"ContainerStarted","Data":"6c6a0fd026f125491714d634258379bcd5f76d2e854d0f0e265348bc2403c0b1"} Jan 09 13:48:42 crc kubenswrapper[4919]: I0109 13:48:42.092065 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rrsng" event={"ID":"91789be0-3c6f-46d6-a222-d75d49e63662","Type":"ContainerStarted","Data":"33f60638695cfc0f1b6b086ce16d07e52360ad89489f9a87a29548e95e5e45ef"} Jan 09 13:48:42 crc kubenswrapper[4919]: I0109 13:48:42.102935 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"62681dab-a75d-4270-bb2f-c8f963838172","Type":"ContainerStarted","Data":"97f7f859a5fb1ad27b22e46514a9844701c41cc34b77b66b904761e435dbeb44"} Jan 09 13:48:42 crc kubenswrapper[4919]: I0109 13:48:42.135600 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-2chl6"] Jan 09 13:48:42 crc kubenswrapper[4919]: I0109 13:48:42.178710 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-2chl6"] Jan 09 13:48:42 crc kubenswrapper[4919]: I0109 13:48:42.195037 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-lrsz7"] Jan 09 13:48:42 crc kubenswrapper[4919]: I0109 13:48:42.204586 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-lrsz7"] Jan 09 13:48:42 crc kubenswrapper[4919]: I0109 13:48:42.760386 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cc1a82c-965c-4dce-b771-f77983e88d20" path="/var/lib/kubelet/pods/6cc1a82c-965c-4dce-b771-f77983e88d20/volumes" Jan 09 13:48:42 crc kubenswrapper[4919]: I0109 13:48:42.760775 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca1399ee-0254-4313-83d7-bb6cdaef3dd7" path="/var/lib/kubelet/pods/ca1399ee-0254-4313-83d7-bb6cdaef3dd7/volumes" Jan 09 13:48:45 crc kubenswrapper[4919]: I0109 13:48:45.128122 4919 generic.go:334] "Generic (PLEG): container finished" podID="3d0c2080-b1ea-4ff9-ad51-d970cce81d56" containerID="7afb9ee7c565f91d68285725812ee6fb698291ad97b6ed276121ea72babc0aff" exitCode=0 Jan 09 13:48:45 crc kubenswrapper[4919]: I0109 13:48:45.128223 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"3d0c2080-b1ea-4ff9-ad51-d970cce81d56","Type":"ContainerDied","Data":"7afb9ee7c565f91d68285725812ee6fb698291ad97b6ed276121ea72babc0aff"} Jan 09 13:48:45 crc kubenswrapper[4919]: I0109 13:48:45.134036 4919 generic.go:334] "Generic (PLEG): container finished" podID="a078e997-b08e-44a9-89a7-bf2fe9eaed11" containerID="03871b7a074eeebfaaeca01cc8f35515351f7b54197e107c822cd34bc19ab80e" exitCode=0 Jan 09 13:48:45 crc kubenswrapper[4919]: I0109 13:48:45.134087 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a078e997-b08e-44a9-89a7-bf2fe9eaed11","Type":"ContainerDied","Data":"03871b7a074eeebfaaeca01cc8f35515351f7b54197e107c822cd34bc19ab80e"} Jan 09 13:48:45 crc kubenswrapper[4919]: I0109 13:48:45.669154 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 09 13:48:46 crc kubenswrapper[4919]: I0109 13:48:46.141395 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-n9g6d" event={"ID":"088a3f18-0aab-4042-b674-752c23ed3ac3","Type":"ContainerStarted","Data":"5081944abfa16b687fe810ef9c191cc4b67053a21e101a73c9775d44c15057d3"} Jan 09 13:48:46 crc kubenswrapper[4919]: I0109 13:48:46.141979 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-n9g6d" Jan 09 13:48:46 crc kubenswrapper[4919]: I0109 13:48:46.144657 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rrsng" event={"ID":"91789be0-3c6f-46d6-a222-d75d49e63662","Type":"ContainerStarted","Data":"76f122a65aaae834e0c6a89da429bd12a79557fa22fc167f2a6746ff1d983822"} Jan 09 13:48:46 crc kubenswrapper[4919]: I0109 13:48:46.147085 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"3d0c2080-b1ea-4ff9-ad51-d970cce81d56","Type":"ContainerStarted","Data":"b536d729f419bd9b62519f7f1f07b0c59de5ee9355e621a8b798f88d5681ea71"} Jan 09 13:48:46 crc kubenswrapper[4919]: I0109 13:48:46.149525 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"62681dab-a75d-4270-bb2f-c8f963838172","Type":"ContainerStarted","Data":"af8651536061f40fff692d9c4fd7ec3da8a5b433fed426a854c87be21d5eac3c"} Jan 09 13:48:46 crc kubenswrapper[4919]: I0109 13:48:46.151087 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"80e0f01c-3e7c-456d-ae74-276ef085ff36","Type":"ContainerStarted","Data":"566417066a916a0257fb8e0b25259955bd3045df23be96de89d5741a52839714"} Jan 09 13:48:46 crc kubenswrapper[4919]: I0109 13:48:46.153137 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a078e997-b08e-44a9-89a7-bf2fe9eaed11","Type":"ContainerStarted","Data":"3124aebe46892b2f769a2e28698b9bb4c4c83471b565beb879658c25841e1c4d"} Jan 09 13:48:46 crc kubenswrapper[4919]: I0109 13:48:46.166777 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-n9g6d" podStartSLOduration=26.98971056 podStartE2EDuration="31.166754829s" podCreationTimestamp="2026-01-09 13:48:15 +0000 UTC" firstStartedPulling="2026-01-09 13:48:41.195238407 +0000 UTC m=+1100.743077857" lastFinishedPulling="2026-01-09 13:48:45.372282686 +0000 UTC m=+1104.920122126" observedRunningTime="2026-01-09 13:48:46.161577331 +0000 UTC m=+1105.709416781" watchObservedRunningTime="2026-01-09 13:48:46.166754829 +0000 UTC m=+1105.714594279" Jan 09 13:48:46 crc kubenswrapper[4919]: I0109 13:48:46.187012 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=9.355385127 podStartE2EDuration="38.186991722s" podCreationTimestamp="2026-01-09 13:48:08 +0000 UTC" firstStartedPulling="2026-01-09 13:48:11.465806714 +0000 UTC m=+1071.013646164" lastFinishedPulling="2026-01-09 13:48:40.297413309 +0000 UTC m=+1099.845252759" observedRunningTime="2026-01-09 13:48:46.184813988 +0000 UTC m=+1105.732653428" watchObservedRunningTime="2026-01-09 13:48:46.186991722 +0000 UTC m=+1105.734831172" Jan 09 13:48:46 crc kubenswrapper[4919]: I0109 13:48:46.230636 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=8.835658951 podStartE2EDuration="39.230621707s" podCreationTimestamp="2026-01-09 13:48:07 +0000 UTC" firstStartedPulling="2026-01-09 13:48:10.000376298 +0000 UTC m=+1069.548215758" lastFinishedPulling="2026-01-09 
13:48:40.395339064 +0000 UTC m=+1099.943178514" observedRunningTime="2026-01-09 13:48:46.222035164 +0000 UTC m=+1105.769874614" watchObservedRunningTime="2026-01-09 13:48:46.230621707 +0000 UTC m=+1105.778461157" Jan 09 13:48:47 crc kubenswrapper[4919]: I0109 13:48:47.160764 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6bf2dcbc-c28e-4fd3-81d7-f766225e964d","Type":"ContainerStarted","Data":"4d950ab99da10547fc7ebc3ce465f153a0328a4db84918dec53a7f4c50456878"} Jan 09 13:48:47 crc kubenswrapper[4919]: I0109 13:48:47.161131 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 09 13:48:47 crc kubenswrapper[4919]: I0109 13:48:47.162662 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ba39e0c2-1804-45a7-9dd1-2c20f229b648","Type":"ContainerStarted","Data":"589d5a36f7cf41ba69a03c03f167fb5b087bd8d2e6a305c6bf38d6413aeba7b7"} Jan 09 13:48:47 crc kubenswrapper[4919]: I0109 13:48:47.164894 4919 generic.go:334] "Generic (PLEG): container finished" podID="91789be0-3c6f-46d6-a222-d75d49e63662" containerID="76f122a65aaae834e0c6a89da429bd12a79557fa22fc167f2a6746ff1d983822" exitCode=0 Jan 09 13:48:47 crc kubenswrapper[4919]: I0109 13:48:47.165754 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rrsng" event={"ID":"91789be0-3c6f-46d6-a222-d75d49e63662","Type":"ContainerDied","Data":"76f122a65aaae834e0c6a89da429bd12a79557fa22fc167f2a6746ff1d983822"} Jan 09 13:48:47 crc kubenswrapper[4919]: I0109 13:48:47.188819 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=29.278973876 podStartE2EDuration="35.188798106s" podCreationTimestamp="2026-01-09 13:48:12 +0000 UTC" firstStartedPulling="2026-01-09 13:48:40.908360503 +0000 UTC m=+1100.456199953" lastFinishedPulling="2026-01-09 13:48:46.818184733 +0000 UTC m=+1106.366024183" observedRunningTime="2026-01-09 13:48:47.176190813 +0000 UTC m=+1106.724030283" watchObservedRunningTime="2026-01-09 13:48:47.188798106 +0000 UTC m=+1106.736637556" Jan 09 13:48:48 crc kubenswrapper[4919]: I0109 13:48:48.175306 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rrsng" event={"ID":"91789be0-3c6f-46d6-a222-d75d49e63662","Type":"ContainerStarted","Data":"c27f9627d5ed3421f95aeab4dec4fb7920a52cf1399a45270c6c87bb34408aa1"} Jan 09 13:48:48 crc kubenswrapper[4919]: I0109 13:48:48.176432 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-rrsng" Jan 09 13:48:48 crc kubenswrapper[4919]: I0109 13:48:48.176557 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rrsng" event={"ID":"91789be0-3c6f-46d6-a222-d75d49e63662","Type":"ContainerStarted","Data":"93a53479f0f33a8b5eac760e98c305841e5905f97d417c72757127c8f5cb77c5"} Jan 09 13:48:48 crc kubenswrapper[4919]: I0109 13:48:48.176632 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-rrsng" Jan 09 13:48:48 crc kubenswrapper[4919]: I0109 13:48:48.202093 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-rrsng" podStartSLOduration=29.071094812 podStartE2EDuration="33.202070994s" podCreationTimestamp="2026-01-09 13:48:15 +0000 UTC" firstStartedPulling="2026-01-09 13:48:41.239459098 +0000 UTC m=+1100.787298548" 
lastFinishedPulling="2026-01-09 13:48:45.37043528 +0000 UTC m=+1104.918274730" observedRunningTime="2026-01-09 13:48:48.193817009 +0000 UTC m=+1107.741656469" watchObservedRunningTime="2026-01-09 13:48:48.202070994 +0000 UTC m=+1107.749910444" Jan 09 13:48:49 crc kubenswrapper[4919]: I0109 13:48:49.035356 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 09 13:48:49 crc kubenswrapper[4919]: I0109 13:48:49.035782 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 09 13:48:50 crc kubenswrapper[4919]: I0109 13:48:50.190265 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"80e0f01c-3e7c-456d-ae74-276ef085ff36","Type":"ContainerStarted","Data":"7b7abfefa0875c71b54c5a6591f2ed0e717c02a1b4ced55aa414e629153939a6"} Jan 09 13:48:50 crc kubenswrapper[4919]: I0109 13:48:50.192047 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"62681dab-a75d-4270-bb2f-c8f963838172","Type":"ContainerStarted","Data":"a44df373e9ef7a641ea29b4bc03089a5bbd1e10a187228e4f98382762778dd79"} Jan 09 13:48:50 crc kubenswrapper[4919]: I0109 13:48:50.212061 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=26.550860392 podStartE2EDuration="35.212043039s" podCreationTimestamp="2026-01-09 13:48:15 +0000 UTC" firstStartedPulling="2026-01-09 13:48:40.973599852 +0000 UTC m=+1100.521439302" lastFinishedPulling="2026-01-09 13:48:49.634782499 +0000 UTC m=+1109.182621949" observedRunningTime="2026-01-09 13:48:50.205108337 +0000 UTC m=+1109.752947807" watchObservedRunningTime="2026-01-09 13:48:50.212043039 +0000 UTC m=+1109.759882479" Jan 09 13:48:50 crc kubenswrapper[4919]: I0109 13:48:50.226654 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=23.775035871 podStartE2EDuration="32.226631902s" podCreationTimestamp="2026-01-09 13:48:18 +0000 UTC" firstStartedPulling="2026-01-09 13:48:41.194454858 +0000 UTC m=+1100.742294308" lastFinishedPulling="2026-01-09 13:48:49.646050899 +0000 UTC m=+1109.193890339" observedRunningTime="2026-01-09 13:48:50.22211676 +0000 UTC m=+1109.769956220" watchObservedRunningTime="2026-01-09 13:48:50.226631902 +0000 UTC m=+1109.774471362" Jan 09 13:48:50 crc kubenswrapper[4919]: I0109 13:48:50.338557 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 09 13:48:50 crc kubenswrapper[4919]: I0109 13:48:50.338612 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 09 13:48:51 crc kubenswrapper[4919]: I0109 13:48:51.330709 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 09 13:48:51 crc kubenswrapper[4919]: I0109 13:48:51.420660 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 09 13:48:51 crc kubenswrapper[4919]: I0109 13:48:51.992136 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 09 13:48:52 crc kubenswrapper[4919]: I0109 13:48:52.394073 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 09 13:48:52 crc kubenswrapper[4919]: I0109 13:48:52.449198 4919 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 09 13:48:52 crc kubenswrapper[4919]: I0109 13:48:52.544303 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-vf9kv"] Jan 09 13:48:52 crc kubenswrapper[4919]: I0109 13:48:52.588926 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 09 13:48:52 crc kubenswrapper[4919]: I0109 13:48:52.593243 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f9f9f545f-xxdrb"] Jan 09 13:48:52 crc kubenswrapper[4919]: I0109 13:48:52.595033 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f9f9f545f-xxdrb" Jan 09 13:48:52 crc kubenswrapper[4919]: I0109 13:48:52.631632 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f9f9f545f-xxdrb"] Jan 09 13:48:52 crc kubenswrapper[4919]: I0109 13:48:52.729433 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48rdm\" (UniqueName: \"kubernetes.io/projected/3dab9193-1d11-452c-87e8-371ffd717dde-kube-api-access-48rdm\") pod \"dnsmasq-dns-7f9f9f545f-xxdrb\" (UID: \"3dab9193-1d11-452c-87e8-371ffd717dde\") " pod="openstack/dnsmasq-dns-7f9f9f545f-xxdrb" Jan 09 13:48:52 crc kubenswrapper[4919]: I0109 13:48:52.729502 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3dab9193-1d11-452c-87e8-371ffd717dde-dns-svc\") pod \"dnsmasq-dns-7f9f9f545f-xxdrb\" (UID: \"3dab9193-1d11-452c-87e8-371ffd717dde\") " pod="openstack/dnsmasq-dns-7f9f9f545f-xxdrb" Jan 09 13:48:52 crc kubenswrapper[4919]: I0109 13:48:52.729576 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3dab9193-1d11-452c-87e8-371ffd717dde-config\") pod \"dnsmasq-dns-7f9f9f545f-xxdrb\" (UID: \"3dab9193-1d11-452c-87e8-371ffd717dde\") " pod="openstack/dnsmasq-dns-7f9f9f545f-xxdrb" Jan 09 13:48:52 crc kubenswrapper[4919]: I0109 13:48:52.831075 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48rdm\" (UniqueName: \"kubernetes.io/projected/3dab9193-1d11-452c-87e8-371ffd717dde-kube-api-access-48rdm\") pod \"dnsmasq-dns-7f9f9f545f-xxdrb\" (UID: \"3dab9193-1d11-452c-87e8-371ffd717dde\") " pod="openstack/dnsmasq-dns-7f9f9f545f-xxdrb" Jan 09 13:48:52 crc kubenswrapper[4919]: I0109 13:48:52.831142 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3dab9193-1d11-452c-87e8-371ffd717dde-dns-svc\") pod \"dnsmasq-dns-7f9f9f545f-xxdrb\" (UID: \"3dab9193-1d11-452c-87e8-371ffd717dde\") " pod="openstack/dnsmasq-dns-7f9f9f545f-xxdrb" Jan 09 13:48:52 crc kubenswrapper[4919]: I0109 13:48:52.831196 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3dab9193-1d11-452c-87e8-371ffd717dde-config\") pod \"dnsmasq-dns-7f9f9f545f-xxdrb\" (UID: \"3dab9193-1d11-452c-87e8-371ffd717dde\") " pod="openstack/dnsmasq-dns-7f9f9f545f-xxdrb" Jan 09 13:48:52 crc kubenswrapper[4919]: I0109 13:48:52.832133 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3dab9193-1d11-452c-87e8-371ffd717dde-config\") pod \"dnsmasq-dns-7f9f9f545f-xxdrb\" 
(UID: \"3dab9193-1d11-452c-87e8-371ffd717dde\") " pod="openstack/dnsmasq-dns-7f9f9f545f-xxdrb" Jan 09 13:48:52 crc kubenswrapper[4919]: I0109 13:48:52.833191 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3dab9193-1d11-452c-87e8-371ffd717dde-dns-svc\") pod \"dnsmasq-dns-7f9f9f545f-xxdrb\" (UID: \"3dab9193-1d11-452c-87e8-371ffd717dde\") " pod="openstack/dnsmasq-dns-7f9f9f545f-xxdrb" Jan 09 13:48:52 crc kubenswrapper[4919]: I0109 13:48:52.967627 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48rdm\" (UniqueName: \"kubernetes.io/projected/3dab9193-1d11-452c-87e8-371ffd717dde-kube-api-access-48rdm\") pod \"dnsmasq-dns-7f9f9f545f-xxdrb\" (UID: \"3dab9193-1d11-452c-87e8-371ffd717dde\") " pod="openstack/dnsmasq-dns-7f9f9f545f-xxdrb" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.086098 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.132268 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.215097 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-vf9kv" event={"ID":"3e5fc04e-7f35-4d71-a257-e6d492c2d399","Type":"ContainerDied","Data":"919c00de7ea42ff01f87c004c5182c050ad2a545de81087491149169dbbb83c0"} Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.215156 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="919c00de7ea42ff01f87c004c5182c050ad2a545de81087491149169dbbb83c0" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.215657 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.247534 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-vf9kv" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.256313 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f9f9f545f-xxdrb" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.263485 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.263856 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.388640 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e5fc04e-7f35-4d71-a257-e6d492c2d399-dns-svc\") pod \"3e5fc04e-7f35-4d71-a257-e6d492c2d399\" (UID: \"3e5fc04e-7f35-4d71-a257-e6d492c2d399\") " Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.388995 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e5fc04e-7f35-4d71-a257-e6d492c2d399-config\") pod \"3e5fc04e-7f35-4d71-a257-e6d492c2d399\" (UID: \"3e5fc04e-7f35-4d71-a257-e6d492c2d399\") " Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.389027 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7rqb\" (UniqueName: \"kubernetes.io/projected/3e5fc04e-7f35-4d71-a257-e6d492c2d399-kube-api-access-x7rqb\") pod \"3e5fc04e-7f35-4d71-a257-e6d492c2d399\" (UID: \"3e5fc04e-7f35-4d71-a257-e6d492c2d399\") " Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.389743 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e5fc04e-7f35-4d71-a257-e6d492c2d399-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3e5fc04e-7f35-4d71-a257-e6d492c2d399" (UID: "3e5fc04e-7f35-4d71-a257-e6d492c2d399"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.390589 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e5fc04e-7f35-4d71-a257-e6d492c2d399-config" (OuterVolumeSpecName: "config") pod "3e5fc04e-7f35-4d71-a257-e6d492c2d399" (UID: "3e5fc04e-7f35-4d71-a257-e6d492c2d399"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.437252 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e5fc04e-7f35-4d71-a257-e6d492c2d399-kube-api-access-x7rqb" (OuterVolumeSpecName: "kube-api-access-x7rqb") pod "3e5fc04e-7f35-4d71-a257-e6d492c2d399" (UID: "3e5fc04e-7f35-4d71-a257-e6d492c2d399"). InnerVolumeSpecName "kube-api-access-x7rqb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.491232 4919 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e5fc04e-7f35-4d71-a257-e6d492c2d399-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.491274 4919 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e5fc04e-7f35-4d71-a257-e6d492c2d399-config\") on node \"crc\" DevicePath \"\"" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.491287 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7rqb\" (UniqueName: \"kubernetes.io/projected/3e5fc04e-7f35-4d71-a257-e6d492c2d399-kube-api-access-x7rqb\") on node \"crc\" DevicePath \"\"" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.527773 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.550228 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-kdzww"] Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.599723 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c554cfdf-lhw5c"] Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.601046 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c554cfdf-lhw5c" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.605661 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.624265 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-fdp27"] Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.625336 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-fdp27" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.631305 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.636172 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c554cfdf-lhw5c"] Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.650333 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-fdp27"] Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.691407 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.694701 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5055335-3873-4f7a-87d3-bab319a9839c-dns-svc\") pod \"dnsmasq-dns-7c554cfdf-lhw5c\" (UID: \"c5055335-3873-4f7a-87d3-bab319a9839c\") " pod="openstack/dnsmasq-dns-7c554cfdf-lhw5c" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.694734 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9415e6b9-c9a5-4ed6-ab9e-ef42cfa1bbe6-config\") pod \"ovn-controller-metrics-fdp27\" (UID: \"9415e6b9-c9a5-4ed6-ab9e-ef42cfa1bbe6\") " pod="openstack/ovn-controller-metrics-fdp27" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.694752 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/9415e6b9-c9a5-4ed6-ab9e-ef42cfa1bbe6-ovs-rundir\") pod \"ovn-controller-metrics-fdp27\" (UID: \"9415e6b9-c9a5-4ed6-ab9e-ef42cfa1bbe6\") " pod="openstack/ovn-controller-metrics-fdp27" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.694825 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf7lw\" (UniqueName: \"kubernetes.io/projected/c5055335-3873-4f7a-87d3-bab319a9839c-kube-api-access-sf7lw\") pod \"dnsmasq-dns-7c554cfdf-lhw5c\" (UID: \"c5055335-3873-4f7a-87d3-bab319a9839c\") " pod="openstack/dnsmasq-dns-7c554cfdf-lhw5c" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.695284 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9415e6b9-c9a5-4ed6-ab9e-ef42cfa1bbe6-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-fdp27\" (UID: \"9415e6b9-c9a5-4ed6-ab9e-ef42cfa1bbe6\") " pod="openstack/ovn-controller-metrics-fdp27" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.695323 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb76k\" (UniqueName: \"kubernetes.io/projected/9415e6b9-c9a5-4ed6-ab9e-ef42cfa1bbe6-kube-api-access-vb76k\") pod \"ovn-controller-metrics-fdp27\" (UID: \"9415e6b9-c9a5-4ed6-ab9e-ef42cfa1bbe6\") " pod="openstack/ovn-controller-metrics-fdp27" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.695373 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5055335-3873-4f7a-87d3-bab319a9839c-config\") pod \"dnsmasq-dns-7c554cfdf-lhw5c\" (UID: \"c5055335-3873-4f7a-87d3-bab319a9839c\") " 
pod="openstack/dnsmasq-dns-7c554cfdf-lhw5c" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.695490 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9415e6b9-c9a5-4ed6-ab9e-ef42cfa1bbe6-combined-ca-bundle\") pod \"ovn-controller-metrics-fdp27\" (UID: \"9415e6b9-c9a5-4ed6-ab9e-ef42cfa1bbe6\") " pod="openstack/ovn-controller-metrics-fdp27" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.695547 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/9415e6b9-c9a5-4ed6-ab9e-ef42cfa1bbe6-ovn-rundir\") pod \"ovn-controller-metrics-fdp27\" (UID: \"9415e6b9-c9a5-4ed6-ab9e-ef42cfa1bbe6\") " pod="openstack/ovn-controller-metrics-fdp27" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.695583 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5055335-3873-4f7a-87d3-bab319a9839c-ovsdbserver-nb\") pod \"dnsmasq-dns-7c554cfdf-lhw5c\" (UID: \"c5055335-3873-4f7a-87d3-bab319a9839c\") " pod="openstack/dnsmasq-dns-7c554cfdf-lhw5c" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.703018 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.706062 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.706448 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.706571 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.707737 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-xdcq8" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.717976 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.746364 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.786265 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f9f9f545f-xxdrb"] Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.801079 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5055335-3873-4f7a-87d3-bab319a9839c-config\") pod \"dnsmasq-dns-7c554cfdf-lhw5c\" (UID: \"c5055335-3873-4f7a-87d3-bab319a9839c\") " pod="openstack/dnsmasq-dns-7c554cfdf-lhw5c" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.801140 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9415e6b9-c9a5-4ed6-ab9e-ef42cfa1bbe6-combined-ca-bundle\") pod \"ovn-controller-metrics-fdp27\" (UID: \"9415e6b9-c9a5-4ed6-ab9e-ef42cfa1bbe6\") " pod="openstack/ovn-controller-metrics-fdp27" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.801170 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: 
\"kubernetes.io/host-path/9415e6b9-c9a5-4ed6-ab9e-ef42cfa1bbe6-ovn-rundir\") pod \"ovn-controller-metrics-fdp27\" (UID: \"9415e6b9-c9a5-4ed6-ab9e-ef42cfa1bbe6\") " pod="openstack/ovn-controller-metrics-fdp27" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.801194 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f55583f6-0518-4977-89a9-e4f12b0eae89-cache\") pod \"swift-storage-0\" (UID: \"f55583f6-0518-4977-89a9-e4f12b0eae89\") " pod="openstack/swift-storage-0" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.801226 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5055335-3873-4f7a-87d3-bab319a9839c-ovsdbserver-nb\") pod \"dnsmasq-dns-7c554cfdf-lhw5c\" (UID: \"c5055335-3873-4f7a-87d3-bab319a9839c\") " pod="openstack/dnsmasq-dns-7c554cfdf-lhw5c" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.801266 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ds8k\" (UniqueName: \"kubernetes.io/projected/f55583f6-0518-4977-89a9-e4f12b0eae89-kube-api-access-8ds8k\") pod \"swift-storage-0\" (UID: \"f55583f6-0518-4977-89a9-e4f12b0eae89\") " pod="openstack/swift-storage-0" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.801286 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5055335-3873-4f7a-87d3-bab319a9839c-dns-svc\") pod \"dnsmasq-dns-7c554cfdf-lhw5c\" (UID: \"c5055335-3873-4f7a-87d3-bab319a9839c\") " pod="openstack/dnsmasq-dns-7c554cfdf-lhw5c" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.801311 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9415e6b9-c9a5-4ed6-ab9e-ef42cfa1bbe6-config\") pod \"ovn-controller-metrics-fdp27\" (UID: \"9415e6b9-c9a5-4ed6-ab9e-ef42cfa1bbe6\") " pod="openstack/ovn-controller-metrics-fdp27" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.801325 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/9415e6b9-c9a5-4ed6-ab9e-ef42cfa1bbe6-ovs-rundir\") pod \"ovn-controller-metrics-fdp27\" (UID: \"9415e6b9-c9a5-4ed6-ab9e-ef42cfa1bbe6\") " pod="openstack/ovn-controller-metrics-fdp27" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.801371 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sf7lw\" (UniqueName: \"kubernetes.io/projected/c5055335-3873-4f7a-87d3-bab319a9839c-kube-api-access-sf7lw\") pod \"dnsmasq-dns-7c554cfdf-lhw5c\" (UID: \"c5055335-3873-4f7a-87d3-bab319a9839c\") " pod="openstack/dnsmasq-dns-7c554cfdf-lhw5c" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.801397 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9415e6b9-c9a5-4ed6-ab9e-ef42cfa1bbe6-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-fdp27\" (UID: \"9415e6b9-c9a5-4ed6-ab9e-ef42cfa1bbe6\") " pod="openstack/ovn-controller-metrics-fdp27" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.801413 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod 
\"swift-storage-0\" (UID: \"f55583f6-0518-4977-89a9-e4f12b0eae89\") " pod="openstack/swift-storage-0" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.801428 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f55583f6-0518-4977-89a9-e4f12b0eae89-etc-swift\") pod \"swift-storage-0\" (UID: \"f55583f6-0518-4977-89a9-e4f12b0eae89\") " pod="openstack/swift-storage-0" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.801457 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb76k\" (UniqueName: \"kubernetes.io/projected/9415e6b9-c9a5-4ed6-ab9e-ef42cfa1bbe6-kube-api-access-vb76k\") pod \"ovn-controller-metrics-fdp27\" (UID: \"9415e6b9-c9a5-4ed6-ab9e-ef42cfa1bbe6\") " pod="openstack/ovn-controller-metrics-fdp27" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.801472 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f55583f6-0518-4977-89a9-e4f12b0eae89-lock\") pod \"swift-storage-0\" (UID: \"f55583f6-0518-4977-89a9-e4f12b0eae89\") " pod="openstack/swift-storage-0" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.801945 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5055335-3873-4f7a-87d3-bab319a9839c-config\") pod \"dnsmasq-dns-7c554cfdf-lhw5c\" (UID: \"c5055335-3873-4f7a-87d3-bab319a9839c\") " pod="openstack/dnsmasq-dns-7c554cfdf-lhw5c" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.802544 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9415e6b9-c9a5-4ed6-ab9e-ef42cfa1bbe6-config\") pod \"ovn-controller-metrics-fdp27\" (UID: \"9415e6b9-c9a5-4ed6-ab9e-ef42cfa1bbe6\") " pod="openstack/ovn-controller-metrics-fdp27" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.802845 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/9415e6b9-c9a5-4ed6-ab9e-ef42cfa1bbe6-ovn-rundir\") pod \"ovn-controller-metrics-fdp27\" (UID: \"9415e6b9-c9a5-4ed6-ab9e-ef42cfa1bbe6\") " pod="openstack/ovn-controller-metrics-fdp27" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.803248 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/9415e6b9-c9a5-4ed6-ab9e-ef42cfa1bbe6-ovs-rundir\") pod \"ovn-controller-metrics-fdp27\" (UID: \"9415e6b9-c9a5-4ed6-ab9e-ef42cfa1bbe6\") " pod="openstack/ovn-controller-metrics-fdp27" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.803525 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5055335-3873-4f7a-87d3-bab319a9839c-ovsdbserver-nb\") pod \"dnsmasq-dns-7c554cfdf-lhw5c\" (UID: \"c5055335-3873-4f7a-87d3-bab319a9839c\") " pod="openstack/dnsmasq-dns-7c554cfdf-lhw5c" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.803830 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5055335-3873-4f7a-87d3-bab319a9839c-dns-svc\") pod \"dnsmasq-dns-7c554cfdf-lhw5c\" (UID: \"c5055335-3873-4f7a-87d3-bab319a9839c\") " pod="openstack/dnsmasq-dns-7c554cfdf-lhw5c" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.809732 4919 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9415e6b9-c9a5-4ed6-ab9e-ef42cfa1bbe6-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-fdp27\" (UID: \"9415e6b9-c9a5-4ed6-ab9e-ef42cfa1bbe6\") " pod="openstack/ovn-controller-metrics-fdp27" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.811803 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9415e6b9-c9a5-4ed6-ab9e-ef42cfa1bbe6-combined-ca-bundle\") pod \"ovn-controller-metrics-fdp27\" (UID: \"9415e6b9-c9a5-4ed6-ab9e-ef42cfa1bbe6\") " pod="openstack/ovn-controller-metrics-fdp27" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.819442 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.825404 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sf7lw\" (UniqueName: \"kubernetes.io/projected/c5055335-3873-4f7a-87d3-bab319a9839c-kube-api-access-sf7lw\") pod \"dnsmasq-dns-7c554cfdf-lhw5c\" (UID: \"c5055335-3873-4f7a-87d3-bab319a9839c\") " pod="openstack/dnsmasq-dns-7c554cfdf-lhw5c" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.827060 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vb76k\" (UniqueName: \"kubernetes.io/projected/9415e6b9-c9a5-4ed6-ab9e-ef42cfa1bbe6-kube-api-access-vb76k\") pod \"ovn-controller-metrics-fdp27\" (UID: \"9415e6b9-c9a5-4ed6-ab9e-ef42cfa1bbe6\") " pod="openstack/ovn-controller-metrics-fdp27" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.867582 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-ptnj7"] Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.885291 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.885981 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.887259 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67fdf7998c-ptnj7" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.892843 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.893129 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.893340 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.893555 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.894549 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-hv7k5" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.924012 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f55583f6-0518-4977-89a9-e4f12b0eae89-lock\") pod \"swift-storage-0\" (UID: \"f55583f6-0518-4977-89a9-e4f12b0eae89\") " pod="openstack/swift-storage-0" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.924059 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68449649-bcc2-41c2-9a6a-a91452a48282-config\") pod \"ovn-northd-0\" (UID: \"68449649-bcc2-41c2-9a6a-a91452a48282\") " pod="openstack/ovn-northd-0" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.924091 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68449649-bcc2-41c2-9a6a-a91452a48282-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"68449649-bcc2-41c2-9a6a-a91452a48282\") " pod="openstack/ovn-northd-0" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.924127 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/68449649-bcc2-41c2-9a6a-a91452a48282-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"68449649-bcc2-41c2-9a6a-a91452a48282\") " pod="openstack/ovn-northd-0" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.924188 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/68449649-bcc2-41c2-9a6a-a91452a48282-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"68449649-bcc2-41c2-9a6a-a91452a48282\") " pod="openstack/ovn-northd-0" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.924248 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f55583f6-0518-4977-89a9-e4f12b0eae89-cache\") pod \"swift-storage-0\" (UID: \"f55583f6-0518-4977-89a9-e4f12b0eae89\") " pod="openstack/swift-storage-0" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.924295 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69v7g\" (UniqueName: \"kubernetes.io/projected/68449649-bcc2-41c2-9a6a-a91452a48282-kube-api-access-69v7g\") pod \"ovn-northd-0\" (UID: \"68449649-bcc2-41c2-9a6a-a91452a48282\") " pod="openstack/ovn-northd-0" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 
13:48:53.924319 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ds8k\" (UniqueName: \"kubernetes.io/projected/f55583f6-0518-4977-89a9-e4f12b0eae89-kube-api-access-8ds8k\") pod \"swift-storage-0\" (UID: \"f55583f6-0518-4977-89a9-e4f12b0eae89\") " pod="openstack/swift-storage-0" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.924377 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/68449649-bcc2-41c2-9a6a-a91452a48282-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"68449649-bcc2-41c2-9a6a-a91452a48282\") " pod="openstack/ovn-northd-0" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.924419 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/68449649-bcc2-41c2-9a6a-a91452a48282-scripts\") pod \"ovn-northd-0\" (UID: \"68449649-bcc2-41c2-9a6a-a91452a48282\") " pod="openstack/ovn-northd-0" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.924454 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"f55583f6-0518-4977-89a9-e4f12b0eae89\") " pod="openstack/swift-storage-0" Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.924478 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f55583f6-0518-4977-89a9-e4f12b0eae89-etc-swift\") pod \"swift-storage-0\" (UID: \"f55583f6-0518-4977-89a9-e4f12b0eae89\") " pod="openstack/swift-storage-0" Jan 09 13:48:53 crc kubenswrapper[4919]: E0109 13:48:53.924608 4919 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 09 13:48:53 crc kubenswrapper[4919]: E0109 13:48:53.924626 4919 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 09 13:48:53 crc kubenswrapper[4919]: E0109 13:48:53.924669 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f55583f6-0518-4977-89a9-e4f12b0eae89-etc-swift podName:f55583f6-0518-4977-89a9-e4f12b0eae89 nodeName:}" failed. No retries permitted until 2026-01-09 13:48:54.424652189 +0000 UTC m=+1113.972491639 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f55583f6-0518-4977-89a9-e4f12b0eae89-etc-swift") pod "swift-storage-0" (UID: "f55583f6-0518-4977-89a9-e4f12b0eae89") : configmap "swift-ring-files" not found
Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.924750 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f55583f6-0518-4977-89a9-e4f12b0eae89-lock\") pod \"swift-storage-0\" (UID: \"f55583f6-0518-4977-89a9-e4f12b0eae89\") " pod="openstack/swift-storage-0"
Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.925177 4919 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"f55583f6-0518-4977-89a9-e4f12b0eae89\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/swift-storage-0"
Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.925193 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f55583f6-0518-4977-89a9-e4f12b0eae89-cache\") pod \"swift-storage-0\" (UID: \"f55583f6-0518-4977-89a9-e4f12b0eae89\") " pod="openstack/swift-storage-0"
Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.945140 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-ptnj7"]
Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.945897 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c554cfdf-lhw5c"
Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.960081 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ds8k\" (UniqueName: \"kubernetes.io/projected/f55583f6-0518-4977-89a9-e4f12b0eae89-kube-api-access-8ds8k\") pod \"swift-storage-0\" (UID: \"f55583f6-0518-4977-89a9-e4f12b0eae89\") " pod="openstack/swift-storage-0"
Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.961800 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-fdp27"
Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.967509 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"f55583f6-0518-4977-89a9-e4f12b0eae89\") " pod="openstack/swift-storage-0"
Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.984524 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f9f9f545f-xxdrb"]
Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.988796 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-wb448"]
Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.990055 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-wb448"
Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.992199 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Jan 09 13:48:53 crc kubenswrapper[4919]: I0109 13:48:53.993009 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.005637 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-wb448"]
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.018566 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.029129 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69v7g\" (UniqueName: \"kubernetes.io/projected/68449649-bcc2-41c2-9a6a-a91452a48282-kube-api-access-69v7g\") pod \"ovn-northd-0\" (UID: \"68449649-bcc2-41c2-9a6a-a91452a48282\") " pod="openstack/ovn-northd-0"
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.029183 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/36275c2d-b4fd-42de-ba91-b067ec9299c7-etc-swift\") pod \"swift-ring-rebalance-wb448\" (UID: \"36275c2d-b4fd-42de-ba91-b067ec9299c7\") " pod="openstack/swift-ring-rebalance-wb448"
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.029223 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/34a3604c-a8d7-4927-af88-a99eef3393fd-ovsdbserver-nb\") pod \"dnsmasq-dns-67fdf7998c-ptnj7\" (UID: \"34a3604c-a8d7-4927-af88-a99eef3393fd\") " pod="openstack/dnsmasq-dns-67fdf7998c-ptnj7"
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.029243 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34a3604c-a8d7-4927-af88-a99eef3393fd-config\") pod \"dnsmasq-dns-67fdf7998c-ptnj7\" (UID: \"34a3604c-a8d7-4927-af88-a99eef3393fd\") " pod="openstack/dnsmasq-dns-67fdf7998c-ptnj7"
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.029279 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/34a3604c-a8d7-4927-af88-a99eef3393fd-ovsdbserver-sb\") pod \"dnsmasq-dns-67fdf7998c-ptnj7\" (UID: \"34a3604c-a8d7-4927-af88-a99eef3393fd\") " pod="openstack/dnsmasq-dns-67fdf7998c-ptnj7"
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.029294 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/36275c2d-b4fd-42de-ba91-b067ec9299c7-dispersionconf\") pod \"swift-ring-rebalance-wb448\" (UID: \"36275c2d-b4fd-42de-ba91-b067ec9299c7\") " pod="openstack/swift-ring-rebalance-wb448"
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.029309 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/36275c2d-b4fd-42de-ba91-b067ec9299c7-ring-data-devices\") pod \"swift-ring-rebalance-wb448\" (UID: \"36275c2d-b4fd-42de-ba91-b067ec9299c7\") " pod="openstack/swift-ring-rebalance-wb448"
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.029327 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g4tp\" (UniqueName: \"kubernetes.io/projected/34a3604c-a8d7-4927-af88-a99eef3393fd-kube-api-access-5g4tp\") pod \"dnsmasq-dns-67fdf7998c-ptnj7\" (UID: \"34a3604c-a8d7-4927-af88-a99eef3393fd\") " pod="openstack/dnsmasq-dns-67fdf7998c-ptnj7"
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.029352 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/68449649-bcc2-41c2-9a6a-a91452a48282-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"68449649-bcc2-41c2-9a6a-a91452a48282\") " pod="openstack/ovn-northd-0"
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.029381 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/68449649-bcc2-41c2-9a6a-a91452a48282-scripts\") pod \"ovn-northd-0\" (UID: \"68449649-bcc2-41c2-9a6a-a91452a48282\") " pod="openstack/ovn-northd-0"
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.029430 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68449649-bcc2-41c2-9a6a-a91452a48282-config\") pod \"ovn-northd-0\" (UID: \"68449649-bcc2-41c2-9a6a-a91452a48282\") " pod="openstack/ovn-northd-0"
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.029445 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68449649-bcc2-41c2-9a6a-a91452a48282-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"68449649-bcc2-41c2-9a6a-a91452a48282\") " pod="openstack/ovn-northd-0"
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.029466 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/34a3604c-a8d7-4927-af88-a99eef3393fd-dns-svc\") pod \"dnsmasq-dns-67fdf7998c-ptnj7\" (UID: \"34a3604c-a8d7-4927-af88-a99eef3393fd\") " pod="openstack/dnsmasq-dns-67fdf7998c-ptnj7"
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.029486 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/68449649-bcc2-41c2-9a6a-a91452a48282-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"68449649-bcc2-41c2-9a6a-a91452a48282\") " pod="openstack/ovn-northd-0"
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.029511 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/68449649-bcc2-41c2-9a6a-a91452a48282-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"68449649-bcc2-41c2-9a6a-a91452a48282\") " pod="openstack/ovn-northd-0"
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.029527 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36275c2d-b4fd-42de-ba91-b067ec9299c7-combined-ca-bundle\") pod \"swift-ring-rebalance-wb448\" (UID: \"36275c2d-b4fd-42de-ba91-b067ec9299c7\") " pod="openstack/swift-ring-rebalance-wb448"
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.029542 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/36275c2d-b4fd-42de-ba91-b067ec9299c7-scripts\") pod \"swift-ring-rebalance-wb448\" (UID: \"36275c2d-b4fd-42de-ba91-b067ec9299c7\") " pod="openstack/swift-ring-rebalance-wb448"
\"kubernetes.io/configmap/36275c2d-b4fd-42de-ba91-b067ec9299c7-scripts\") pod \"swift-ring-rebalance-wb448\" (UID: \"36275c2d-b4fd-42de-ba91-b067ec9299c7\") " pod="openstack/swift-ring-rebalance-wb448" Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.029556 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/36275c2d-b4fd-42de-ba91-b067ec9299c7-swiftconf\") pod \"swift-ring-rebalance-wb448\" (UID: \"36275c2d-b4fd-42de-ba91-b067ec9299c7\") " pod="openstack/swift-ring-rebalance-wb448" Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.029577 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g5n4\" (UniqueName: \"kubernetes.io/projected/36275c2d-b4fd-42de-ba91-b067ec9299c7-kube-api-access-4g5n4\") pod \"swift-ring-rebalance-wb448\" (UID: \"36275c2d-b4fd-42de-ba91-b067ec9299c7\") " pod="openstack/swift-ring-rebalance-wb448" Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.030556 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/68449649-bcc2-41c2-9a6a-a91452a48282-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"68449649-bcc2-41c2-9a6a-a91452a48282\") " pod="openstack/ovn-northd-0" Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.031151 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/68449649-bcc2-41c2-9a6a-a91452a48282-scripts\") pod \"ovn-northd-0\" (UID: \"68449649-bcc2-41c2-9a6a-a91452a48282\") " pod="openstack/ovn-northd-0" Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.031696 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68449649-bcc2-41c2-9a6a-a91452a48282-config\") pod \"ovn-northd-0\" (UID: \"68449649-bcc2-41c2-9a6a-a91452a48282\") " pod="openstack/ovn-northd-0" Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.039491 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68449649-bcc2-41c2-9a6a-a91452a48282-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"68449649-bcc2-41c2-9a6a-a91452a48282\") " pod="openstack/ovn-northd-0" Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.042882 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/68449649-bcc2-41c2-9a6a-a91452a48282-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"68449649-bcc2-41c2-9a6a-a91452a48282\") " pod="openstack/ovn-northd-0" Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.042965 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/68449649-bcc2-41c2-9a6a-a91452a48282-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"68449649-bcc2-41c2-9a6a-a91452a48282\") " pod="openstack/ovn-northd-0" Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.059038 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69v7g\" (UniqueName: \"kubernetes.io/projected/68449649-bcc2-41c2-9a6a-a91452a48282-kube-api-access-69v7g\") pod \"ovn-northd-0\" (UID: \"68449649-bcc2-41c2-9a6a-a91452a48282\") " pod="openstack/ovn-northd-0" Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.060550 4919 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.073433 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-7lmg7"] Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.074855 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-7lmg7" Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.103265 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-wb448"] Jan 09 13:48:54 crc kubenswrapper[4919]: E0109 13:48:54.105319 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-4g5n4 ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/swift-ring-rebalance-wb448" podUID="36275c2d-b4fd-42de-ba91-b067ec9299c7" Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.132252 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5g4tp\" (UniqueName: \"kubernetes.io/projected/34a3604c-a8d7-4927-af88-a99eef3393fd-kube-api-access-5g4tp\") pod \"dnsmasq-dns-67fdf7998c-ptnj7\" (UID: \"34a3604c-a8d7-4927-af88-a99eef3393fd\") " pod="openstack/dnsmasq-dns-67fdf7998c-ptnj7" Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.132409 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7-combined-ca-bundle\") pod \"swift-ring-rebalance-7lmg7\" (UID: \"b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7\") " pod="openstack/swift-ring-rebalance-7lmg7" Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.132474 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/34a3604c-a8d7-4927-af88-a99eef3393fd-dns-svc\") pod \"dnsmasq-dns-67fdf7998c-ptnj7\" (UID: \"34a3604c-a8d7-4927-af88-a99eef3393fd\") " pod="openstack/dnsmasq-dns-67fdf7998c-ptnj7" Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.132492 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7-scripts\") pod \"swift-ring-rebalance-7lmg7\" (UID: \"b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7\") " pod="openstack/swift-ring-rebalance-7lmg7" Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.132529 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8rtr\" (UniqueName: \"kubernetes.io/projected/b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7-kube-api-access-x8rtr\") pod \"swift-ring-rebalance-7lmg7\" (UID: \"b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7\") " pod="openstack/swift-ring-rebalance-7lmg7" Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.132564 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7-etc-swift\") pod \"swift-ring-rebalance-7lmg7\" (UID: \"b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7\") " pod="openstack/swift-ring-rebalance-7lmg7" Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.132584 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/36275c2d-b4fd-42de-ba91-b067ec9299c7-combined-ca-bundle\") pod \"swift-ring-rebalance-wb448\" (UID: \"36275c2d-b4fd-42de-ba91-b067ec9299c7\") " pod="openstack/swift-ring-rebalance-wb448" Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.132600 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/36275c2d-b4fd-42de-ba91-b067ec9299c7-scripts\") pod \"swift-ring-rebalance-wb448\" (UID: \"36275c2d-b4fd-42de-ba91-b067ec9299c7\") " pod="openstack/swift-ring-rebalance-wb448" Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.132620 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/36275c2d-b4fd-42de-ba91-b067ec9299c7-swiftconf\") pod \"swift-ring-rebalance-wb448\" (UID: \"36275c2d-b4fd-42de-ba91-b067ec9299c7\") " pod="openstack/swift-ring-rebalance-wb448" Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.132641 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4g5n4\" (UniqueName: \"kubernetes.io/projected/36275c2d-b4fd-42de-ba91-b067ec9299c7-kube-api-access-4g5n4\") pod \"swift-ring-rebalance-wb448\" (UID: \"36275c2d-b4fd-42de-ba91-b067ec9299c7\") " pod="openstack/swift-ring-rebalance-wb448" Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.132678 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7-ring-data-devices\") pod \"swift-ring-rebalance-7lmg7\" (UID: \"b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7\") " pod="openstack/swift-ring-rebalance-7lmg7" Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.132705 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/36275c2d-b4fd-42de-ba91-b067ec9299c7-etc-swift\") pod \"swift-ring-rebalance-wb448\" (UID: \"36275c2d-b4fd-42de-ba91-b067ec9299c7\") " pod="openstack/swift-ring-rebalance-wb448" Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.132728 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/34a3604c-a8d7-4927-af88-a99eef3393fd-ovsdbserver-nb\") pod \"dnsmasq-dns-67fdf7998c-ptnj7\" (UID: \"34a3604c-a8d7-4927-af88-a99eef3393fd\") " pod="openstack/dnsmasq-dns-67fdf7998c-ptnj7" Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.132749 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34a3604c-a8d7-4927-af88-a99eef3393fd-config\") pod \"dnsmasq-dns-67fdf7998c-ptnj7\" (UID: \"34a3604c-a8d7-4927-af88-a99eef3393fd\") " pod="openstack/dnsmasq-dns-67fdf7998c-ptnj7" Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.132781 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7-dispersionconf\") pod \"swift-ring-rebalance-7lmg7\" (UID: \"b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7\") " pod="openstack/swift-ring-rebalance-7lmg7" Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.132799 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: 
\"kubernetes.io/secret/b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7-swiftconf\") pod \"swift-ring-rebalance-7lmg7\" (UID: \"b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7\") " pod="openstack/swift-ring-rebalance-7lmg7" Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.132820 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/34a3604c-a8d7-4927-af88-a99eef3393fd-ovsdbserver-sb\") pod \"dnsmasq-dns-67fdf7998c-ptnj7\" (UID: \"34a3604c-a8d7-4927-af88-a99eef3393fd\") " pod="openstack/dnsmasq-dns-67fdf7998c-ptnj7" Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.132837 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/36275c2d-b4fd-42de-ba91-b067ec9299c7-dispersionconf\") pod \"swift-ring-rebalance-wb448\" (UID: \"36275c2d-b4fd-42de-ba91-b067ec9299c7\") " pod="openstack/swift-ring-rebalance-wb448" Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.132855 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/36275c2d-b4fd-42de-ba91-b067ec9299c7-ring-data-devices\") pod \"swift-ring-rebalance-wb448\" (UID: \"36275c2d-b4fd-42de-ba91-b067ec9299c7\") " pod="openstack/swift-ring-rebalance-wb448" Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.133581 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/36275c2d-b4fd-42de-ba91-b067ec9299c7-ring-data-devices\") pod \"swift-ring-rebalance-wb448\" (UID: \"36275c2d-b4fd-42de-ba91-b067ec9299c7\") " pod="openstack/swift-ring-rebalance-wb448" Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.134188 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/34a3604c-a8d7-4927-af88-a99eef3393fd-dns-svc\") pod \"dnsmasq-dns-67fdf7998c-ptnj7\" (UID: \"34a3604c-a8d7-4927-af88-a99eef3393fd\") " pod="openstack/dnsmasq-dns-67fdf7998c-ptnj7" Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.142017 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36275c2d-b4fd-42de-ba91-b067ec9299c7-combined-ca-bundle\") pod \"swift-ring-rebalance-wb448\" (UID: \"36275c2d-b4fd-42de-ba91-b067ec9299c7\") " pod="openstack/swift-ring-rebalance-wb448" Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.154465 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5g4tp\" (UniqueName: \"kubernetes.io/projected/34a3604c-a8d7-4927-af88-a99eef3393fd-kube-api-access-5g4tp\") pod \"dnsmasq-dns-67fdf7998c-ptnj7\" (UID: \"34a3604c-a8d7-4927-af88-a99eef3393fd\") " pod="openstack/dnsmasq-dns-67fdf7998c-ptnj7" Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.169742 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4g5n4\" (UniqueName: \"kubernetes.io/projected/36275c2d-b4fd-42de-ba91-b067ec9299c7-kube-api-access-4g5n4\") pod \"swift-ring-rebalance-wb448\" (UID: \"36275c2d-b4fd-42de-ba91-b067ec9299c7\") " pod="openstack/swift-ring-rebalance-wb448" Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.174836 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/36275c2d-b4fd-42de-ba91-b067ec9299c7-scripts\") pod \"swift-ring-rebalance-wb448\" 
(UID: \"36275c2d-b4fd-42de-ba91-b067ec9299c7\") " pod="openstack/swift-ring-rebalance-wb448" Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.175120 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/36275c2d-b4fd-42de-ba91-b067ec9299c7-etc-swift\") pod \"swift-ring-rebalance-wb448\" (UID: \"36275c2d-b4fd-42de-ba91-b067ec9299c7\") " pod="openstack/swift-ring-rebalance-wb448" Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.175421 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/34a3604c-a8d7-4927-af88-a99eef3393fd-ovsdbserver-nb\") pod \"dnsmasq-dns-67fdf7998c-ptnj7\" (UID: \"34a3604c-a8d7-4927-af88-a99eef3393fd\") " pod="openstack/dnsmasq-dns-67fdf7998c-ptnj7" Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.175949 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/34a3604c-a8d7-4927-af88-a99eef3393fd-ovsdbserver-sb\") pod \"dnsmasq-dns-67fdf7998c-ptnj7\" (UID: \"34a3604c-a8d7-4927-af88-a99eef3393fd\") " pod="openstack/dnsmasq-dns-67fdf7998c-ptnj7" Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.176882 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34a3604c-a8d7-4927-af88-a99eef3393fd-config\") pod \"dnsmasq-dns-67fdf7998c-ptnj7\" (UID: \"34a3604c-a8d7-4927-af88-a99eef3393fd\") " pod="openstack/dnsmasq-dns-67fdf7998c-ptnj7" Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.182629 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/36275c2d-b4fd-42de-ba91-b067ec9299c7-dispersionconf\") pod \"swift-ring-rebalance-wb448\" (UID: \"36275c2d-b4fd-42de-ba91-b067ec9299c7\") " pod="openstack/swift-ring-rebalance-wb448" Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.188947 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/36275c2d-b4fd-42de-ba91-b067ec9299c7-swiftconf\") pod \"swift-ring-rebalance-wb448\" (UID: \"36275c2d-b4fd-42de-ba91-b067ec9299c7\") " pod="openstack/swift-ring-rebalance-wb448" Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.192632 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-7lmg7"] Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.237608 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7-combined-ca-bundle\") pod \"swift-ring-rebalance-7lmg7\" (UID: \"b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7\") " pod="openstack/swift-ring-rebalance-7lmg7" Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.237685 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7-scripts\") pod \"swift-ring-rebalance-7lmg7\" (UID: \"b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7\") " pod="openstack/swift-ring-rebalance-7lmg7" Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.237729 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8rtr\" (UniqueName: \"kubernetes.io/projected/b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7-kube-api-access-x8rtr\") pod \"swift-ring-rebalance-7lmg7\" 
(UID: \"b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7\") " pod="openstack/swift-ring-rebalance-7lmg7" Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.237754 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7-etc-swift\") pod \"swift-ring-rebalance-7lmg7\" (UID: \"b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7\") " pod="openstack/swift-ring-rebalance-7lmg7" Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.237846 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7-ring-data-devices\") pod \"swift-ring-rebalance-7lmg7\" (UID: \"b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7\") " pod="openstack/swift-ring-rebalance-7lmg7" Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.237889 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7-dispersionconf\") pod \"swift-ring-rebalance-7lmg7\" (UID: \"b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7\") " pod="openstack/swift-ring-rebalance-7lmg7" Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.237903 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7-swiftconf\") pod \"swift-ring-rebalance-7lmg7\" (UID: \"b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7\") " pod="openstack/swift-ring-rebalance-7lmg7" Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.244230 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7-etc-swift\") pod \"swift-ring-rebalance-7lmg7\" (UID: \"b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7\") " pod="openstack/swift-ring-rebalance-7lmg7" Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.264532 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7-combined-ca-bundle\") pod \"swift-ring-rebalance-7lmg7\" (UID: \"b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7\") " pod="openstack/swift-ring-rebalance-7lmg7" Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.265426 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7-scripts\") pod \"swift-ring-rebalance-7lmg7\" (UID: \"b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7\") " pod="openstack/swift-ring-rebalance-7lmg7" Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.266692 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7-ring-data-devices\") pod \"swift-ring-rebalance-7lmg7\" (UID: \"b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7\") " pod="openstack/swift-ring-rebalance-7lmg7" Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.267574 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7-swiftconf\") pod \"swift-ring-rebalance-7lmg7\" (UID: \"b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7\") " pod="openstack/swift-ring-rebalance-7lmg7" Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.281191 4919 
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.292037 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8rtr\" (UniqueName: \"kubernetes.io/projected/b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7-kube-api-access-x8rtr\") pod \"swift-ring-rebalance-7lmg7\" (UID: \"b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7\") " pod="openstack/swift-ring-rebalance-7lmg7"
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.297271 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-vf9kv"
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.300459 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-wb448"
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.300455 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f9f9f545f-xxdrb" event={"ID":"3dab9193-1d11-452c-87e8-371ffd717dde","Type":"ContainerStarted","Data":"1491a094d06d0cca6d77ae609df656878fd52905ddb1aab106119063c7c681b5"}
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.310806 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-wb448"
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.331749 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-kdzww"
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.373808 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67fdf7998c-ptnj7"
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.376754 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-vf9kv"]
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.403380 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-vf9kv"]
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.414282 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-7lmg7"
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.458795 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36275c2d-b4fd-42de-ba91-b067ec9299c7-combined-ca-bundle\") pod \"36275c2d-b4fd-42de-ba91-b067ec9299c7\" (UID: \"36275c2d-b4fd-42de-ba91-b067ec9299c7\") "
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.458901 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/36275c2d-b4fd-42de-ba91-b067ec9299c7-dispersionconf\") pod \"36275c2d-b4fd-42de-ba91-b067ec9299c7\" (UID: \"36275c2d-b4fd-42de-ba91-b067ec9299c7\") "
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.458932 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b76bf527-b2ca-4359-90f2-b9fdf5767d66-dns-svc\") pod \"b76bf527-b2ca-4359-90f2-b9fdf5767d66\" (UID: \"b76bf527-b2ca-4359-90f2-b9fdf5767d66\") "
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.458948 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b76bf527-b2ca-4359-90f2-b9fdf5767d66-config\") pod \"b76bf527-b2ca-4359-90f2-b9fdf5767d66\" (UID: \"b76bf527-b2ca-4359-90f2-b9fdf5767d66\") "
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.458992 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pk9kg\" (UniqueName: \"kubernetes.io/projected/b76bf527-b2ca-4359-90f2-b9fdf5767d66-kube-api-access-pk9kg\") pod \"b76bf527-b2ca-4359-90f2-b9fdf5767d66\" (UID: \"b76bf527-b2ca-4359-90f2-b9fdf5767d66\") "
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.459017 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/36275c2d-b4fd-42de-ba91-b067ec9299c7-swiftconf\") pod \"36275c2d-b4fd-42de-ba91-b067ec9299c7\" (UID: \"36275c2d-b4fd-42de-ba91-b067ec9299c7\") "
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.459105 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/36275c2d-b4fd-42de-ba91-b067ec9299c7-ring-data-devices\") pod \"36275c2d-b4fd-42de-ba91-b067ec9299c7\" (UID: \"36275c2d-b4fd-42de-ba91-b067ec9299c7\") "
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.459131 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g5n4\" (UniqueName: \"kubernetes.io/projected/36275c2d-b4fd-42de-ba91-b067ec9299c7-kube-api-access-4g5n4\") pod \"36275c2d-b4fd-42de-ba91-b067ec9299c7\" (UID: \"36275c2d-b4fd-42de-ba91-b067ec9299c7\") "
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.459171 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/36275c2d-b4fd-42de-ba91-b067ec9299c7-scripts\") pod \"36275c2d-b4fd-42de-ba91-b067ec9299c7\" (UID: \"36275c2d-b4fd-42de-ba91-b067ec9299c7\") "
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.459364 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/36275c2d-b4fd-42de-ba91-b067ec9299c7-etc-swift\") pod \"36275c2d-b4fd-42de-ba91-b067ec9299c7\" (UID: \"36275c2d-b4fd-42de-ba91-b067ec9299c7\") "
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.459709 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36275c2d-b4fd-42de-ba91-b067ec9299c7-scripts" (OuterVolumeSpecName: "scripts") pod "36275c2d-b4fd-42de-ba91-b067ec9299c7" (UID: "36275c2d-b4fd-42de-ba91-b067ec9299c7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.459722 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b76bf527-b2ca-4359-90f2-b9fdf5767d66-config" (OuterVolumeSpecName: "config") pod "b76bf527-b2ca-4359-90f2-b9fdf5767d66" (UID: "b76bf527-b2ca-4359-90f2-b9fdf5767d66"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.460334 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f55583f6-0518-4977-89a9-e4f12b0eae89-etc-swift\") pod \"swift-storage-0\" (UID: \"f55583f6-0518-4977-89a9-e4f12b0eae89\") " pod="openstack/swift-storage-0"
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.460370 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36275c2d-b4fd-42de-ba91-b067ec9299c7-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "36275c2d-b4fd-42de-ba91-b067ec9299c7" (UID: "36275c2d-b4fd-42de-ba91-b067ec9299c7"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 13:48:54 crc kubenswrapper[4919]: E0109 13:48:54.460489 4919 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 09 13:48:54 crc kubenswrapper[4919]: E0109 13:48:54.460507 4919 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 09 13:48:54 crc kubenswrapper[4919]: E0109 13:48:54.460558 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f55583f6-0518-4977-89a9-e4f12b0eae89-etc-swift podName:f55583f6-0518-4977-89a9-e4f12b0eae89 nodeName:}" failed. No retries permitted until 2026-01-09 13:48:55.46054097 +0000 UTC m=+1115.008380600 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f55583f6-0518-4977-89a9-e4f12b0eae89-etc-swift") pod "swift-storage-0" (UID: "f55583f6-0518-4977-89a9-e4f12b0eae89") : configmap "swift-ring-files" not found
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.460632 4919 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b76bf527-b2ca-4359-90f2-b9fdf5767d66-config\") on node \"crc\" DevicePath \"\""
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.460665 4919 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/36275c2d-b4fd-42de-ba91-b067ec9299c7-ring-data-devices\") on node \"crc\" DevicePath \"\""
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.460680 4919 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/36275c2d-b4fd-42de-ba91-b067ec9299c7-scripts\") on node \"crc\" DevicePath \"\""
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.461241 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b76bf527-b2ca-4359-90f2-b9fdf5767d66-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b76bf527-b2ca-4359-90f2-b9fdf5767d66" (UID: "b76bf527-b2ca-4359-90f2-b9fdf5767d66"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.461322 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36275c2d-b4fd-42de-ba91-b067ec9299c7-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "36275c2d-b4fd-42de-ba91-b067ec9299c7" (UID: "36275c2d-b4fd-42de-ba91-b067ec9299c7"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.499483 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36275c2d-b4fd-42de-ba91-b067ec9299c7-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "36275c2d-b4fd-42de-ba91-b067ec9299c7" (UID: "36275c2d-b4fd-42de-ba91-b067ec9299c7"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.500241 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36275c2d-b4fd-42de-ba91-b067ec9299c7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "36275c2d-b4fd-42de-ba91-b067ec9299c7" (UID: "36275c2d-b4fd-42de-ba91-b067ec9299c7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.500298 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b76bf527-b2ca-4359-90f2-b9fdf5767d66-kube-api-access-pk9kg" (OuterVolumeSpecName: "kube-api-access-pk9kg") pod "b76bf527-b2ca-4359-90f2-b9fdf5767d66" (UID: "b76bf527-b2ca-4359-90f2-b9fdf5767d66"). InnerVolumeSpecName "kube-api-access-pk9kg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.500189 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36275c2d-b4fd-42de-ba91-b067ec9299c7-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "36275c2d-b4fd-42de-ba91-b067ec9299c7" (UID: "36275c2d-b4fd-42de-ba91-b067ec9299c7"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.501205 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36275c2d-b4fd-42de-ba91-b067ec9299c7-kube-api-access-4g5n4" (OuterVolumeSpecName: "kube-api-access-4g5n4") pod "36275c2d-b4fd-42de-ba91-b067ec9299c7" (UID: "36275c2d-b4fd-42de-ba91-b067ec9299c7"). InnerVolumeSpecName "kube-api-access-4g5n4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.561995 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4g5n4\" (UniqueName: \"kubernetes.io/projected/36275c2d-b4fd-42de-ba91-b067ec9299c7-kube-api-access-4g5n4\") on node \"crc\" DevicePath \"\""
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.562029 4919 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/36275c2d-b4fd-42de-ba91-b067ec9299c7-etc-swift\") on node \"crc\" DevicePath \"\""
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.562044 4919 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36275c2d-b4fd-42de-ba91-b067ec9299c7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.562056 4919 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/36275c2d-b4fd-42de-ba91-b067ec9299c7-dispersionconf\") on node \"crc\" DevicePath \"\""
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.562067 4919 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b76bf527-b2ca-4359-90f2-b9fdf5767d66-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.562109 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pk9kg\" (UniqueName: \"kubernetes.io/projected/b76bf527-b2ca-4359-90f2-b9fdf5767d66-kube-api-access-pk9kg\") on node \"crc\" DevicePath \"\""
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.562127 4919 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/36275c2d-b4fd-42de-ba91-b067ec9299c7-swiftconf\") on node \"crc\" DevicePath \"\""
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.656600 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c554cfdf-lhw5c"]
Jan 09 13:48:54 crc kubenswrapper[4919]: W0109 13:48:54.657492 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5055335_3873_4f7a_87d3_bab319a9839c.slice/crio-7d9e4e62b373dad2dddb647bea79732a603273d97dab1b6298001998fe01614d WatchSource:0}: Error finding container 7d9e4e62b373dad2dddb647bea79732a603273d97dab1b6298001998fe01614d: Status 404 returned error can't find the container with id 7d9e4e62b373dad2dddb647bea79732a603273d97dab1b6298001998fe01614d
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.773880 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e5fc04e-7f35-4d71-a257-e6d492c2d399" path="/var/lib/kubelet/pods/3e5fc04e-7f35-4d71-a257-e6d492c2d399/volumes"
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.801308 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-fdp27"]
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.860705 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Jan 09 13:48:54 crc kubenswrapper[4919]: W0109 13:48:54.879804 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68449649_bcc2_41c2_9a6a_a91452a48282.slice/crio-c9c667d219844555d6bce9968ae58455c2f6f3595aaa118d5c2e40588beea25b WatchSource:0}: Error finding container c9c667d219844555d6bce9968ae58455c2f6f3595aaa118d5c2e40588beea25b: Status 404 returned error can't find the container with id c9c667d219844555d6bce9968ae58455c2f6f3595aaa118d5c2e40588beea25b
Jan 09 13:48:54 crc kubenswrapper[4919]: I0109 13:48:54.982415 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-ptnj7"]
Jan 09 13:48:54 crc kubenswrapper[4919]: W0109 13:48:54.984519 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34a3604c_a8d7_4927_af88_a99eef3393fd.slice/crio-7e0d520e1c06fb046e85e26addd85d4194290f857f06ccdcd0bff81d46f14ad4 WatchSource:0}: Error finding container 7e0d520e1c06fb046e85e26addd85d4194290f857f06ccdcd0bff81d46f14ad4: Status 404 returned error can't find the container with id 7e0d520e1c06fb046e85e26addd85d4194290f857f06ccdcd0bff81d46f14ad4
Jan 09 13:48:55 crc kubenswrapper[4919]: I0109 13:48:55.071831 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-7lmg7"]
Jan 09 13:48:55 crc kubenswrapper[4919]: W0109 13:48:55.118784 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb5cc6e72_8cde_4ad1_bab5_0e3c20c11cb7.slice/crio-0934ed7279ac2bd8a4fb81ec259cee963324cde4b909fd629666874a8747e39b WatchSource:0}: Error finding container 0934ed7279ac2bd8a4fb81ec259cee963324cde4b909fd629666874a8747e39b: Status 404 returned error can't find the container with id 0934ed7279ac2bd8a4fb81ec259cee963324cde4b909fd629666874a8747e39b
Jan 09 13:48:55 crc kubenswrapper[4919]: I0109 13:48:55.304158 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-7lmg7" event={"ID":"b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7","Type":"ContainerStarted","Data":"0934ed7279ac2bd8a4fb81ec259cee963324cde4b909fd629666874a8747e39b"}
Jan 09 13:48:55 crc kubenswrapper[4919]: I0109 13:48:55.306105 4919 generic.go:334] "Generic (PLEG): container finished" podID="34a3604c-a8d7-4927-af88-a99eef3393fd" containerID="7b68d770aeba345a977e00542b5c3048b272132a8375f0a567002a93a75a06bf" exitCode=0
Jan 09 13:48:55 crc kubenswrapper[4919]: I0109 13:48:55.306195 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-ptnj7" event={"ID":"34a3604c-a8d7-4927-af88-a99eef3393fd","Type":"ContainerDied","Data":"7b68d770aeba345a977e00542b5c3048b272132a8375f0a567002a93a75a06bf"}
Jan 09 13:48:55 crc kubenswrapper[4919]: I0109 13:48:55.306254 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-ptnj7" event={"ID":"34a3604c-a8d7-4927-af88-a99eef3393fd","Type":"ContainerStarted","Data":"7e0d520e1c06fb046e85e26addd85d4194290f857f06ccdcd0bff81d46f14ad4"}
event={"ID":"34a3604c-a8d7-4927-af88-a99eef3393fd","Type":"ContainerStarted","Data":"7e0d520e1c06fb046e85e26addd85d4194290f857f06ccdcd0bff81d46f14ad4"} Jan 09 13:48:55 crc kubenswrapper[4919]: I0109 13:48:55.307745 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-kdzww" event={"ID":"b76bf527-b2ca-4359-90f2-b9fdf5767d66","Type":"ContainerDied","Data":"e20a15d6255525614d6c8d20914fa2414d8b8e16b46c6ac3d712f4bfa8fc73d3"} Jan 09 13:48:55 crc kubenswrapper[4919]: I0109 13:48:55.307843 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-kdzww" Jan 09 13:48:55 crc kubenswrapper[4919]: I0109 13:48:55.310142 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"68449649-bcc2-41c2-9a6a-a91452a48282","Type":"ContainerStarted","Data":"c9c667d219844555d6bce9968ae58455c2f6f3595aaa118d5c2e40588beea25b"} Jan 09 13:48:55 crc kubenswrapper[4919]: I0109 13:48:55.313741 4919 generic.go:334] "Generic (PLEG): container finished" podID="3dab9193-1d11-452c-87e8-371ffd717dde" containerID="73ae277f668bede879420dacd80e1f6f68a8b7c0ea59bddd8372ebe348a2b448" exitCode=0 Jan 09 13:48:55 crc kubenswrapper[4919]: I0109 13:48:55.313849 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f9f9f545f-xxdrb" event={"ID":"3dab9193-1d11-452c-87e8-371ffd717dde","Type":"ContainerDied","Data":"73ae277f668bede879420dacd80e1f6f68a8b7c0ea59bddd8372ebe348a2b448"} Jan 09 13:48:55 crc kubenswrapper[4919]: I0109 13:48:55.316001 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-fdp27" event={"ID":"9415e6b9-c9a5-4ed6-ab9e-ef42cfa1bbe6","Type":"ContainerStarted","Data":"1792d89409d74ccabcf980461f9c1058d54c75a12a3c7f7b3a5df38546da4936"} Jan 09 13:48:55 crc kubenswrapper[4919]: I0109 13:48:55.316045 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-fdp27" event={"ID":"9415e6b9-c9a5-4ed6-ab9e-ef42cfa1bbe6","Type":"ContainerStarted","Data":"73de124ab0c404990b2de5e4354687c1016edfc06e4798dc6d5ceae6d694f344"} Jan 09 13:48:55 crc kubenswrapper[4919]: I0109 13:48:55.317221 4919 generic.go:334] "Generic (PLEG): container finished" podID="c5055335-3873-4f7a-87d3-bab319a9839c" containerID="528e884224d962d5c404dd72ee975e02cb7da0c07e6581a25d8a948cc092e8af" exitCode=0 Jan 09 13:48:55 crc kubenswrapper[4919]: I0109 13:48:55.317498 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-wb448" Jan 09 13:48:55 crc kubenswrapper[4919]: I0109 13:48:55.317462 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c554cfdf-lhw5c" event={"ID":"c5055335-3873-4f7a-87d3-bab319a9839c","Type":"ContainerDied","Data":"528e884224d962d5c404dd72ee975e02cb7da0c07e6581a25d8a948cc092e8af"} Jan 09 13:48:55 crc kubenswrapper[4919]: I0109 13:48:55.317745 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c554cfdf-lhw5c" event={"ID":"c5055335-3873-4f7a-87d3-bab319a9839c","Type":"ContainerStarted","Data":"7d9e4e62b373dad2dddb647bea79732a603273d97dab1b6298001998fe01614d"} Jan 09 13:48:55 crc kubenswrapper[4919]: I0109 13:48:55.376078 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-fdp27" podStartSLOduration=2.376063048 podStartE2EDuration="2.376063048s" podCreationTimestamp="2026-01-09 13:48:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:48:55.373899164 +0000 UTC m=+1114.921738624" watchObservedRunningTime="2026-01-09 13:48:55.376063048 +0000 UTC m=+1114.923902498" Jan 09 13:48:55 crc kubenswrapper[4919]: I0109 13:48:55.478268 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-wb448"] Jan 09 13:48:55 crc kubenswrapper[4919]: I0109 13:48:55.483256 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f55583f6-0518-4977-89a9-e4f12b0eae89-etc-swift\") pod \"swift-storage-0\" (UID: \"f55583f6-0518-4977-89a9-e4f12b0eae89\") " pod="openstack/swift-storage-0" Jan 09 13:48:55 crc kubenswrapper[4919]: E0109 13:48:55.487021 4919 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 09 13:48:55 crc kubenswrapper[4919]: E0109 13:48:55.487038 4919 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 09 13:48:55 crc kubenswrapper[4919]: E0109 13:48:55.487079 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f55583f6-0518-4977-89a9-e4f12b0eae89-etc-swift podName:f55583f6-0518-4977-89a9-e4f12b0eae89 nodeName:}" failed. No retries permitted until 2026-01-09 13:48:57.487062937 +0000 UTC m=+1117.034902387 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f55583f6-0518-4977-89a9-e4f12b0eae89-etc-swift") pod "swift-storage-0" (UID: "f55583f6-0518-4977-89a9-e4f12b0eae89") : configmap "swift-ring-files" not found Jan 09 13:48:55 crc kubenswrapper[4919]: I0109 13:48:55.496106 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-wb448"] Jan 09 13:48:55 crc kubenswrapper[4919]: I0109 13:48:55.512817 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-kdzww"] Jan 09 13:48:55 crc kubenswrapper[4919]: I0109 13:48:55.521882 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-kdzww"] Jan 09 13:48:55 crc kubenswrapper[4919]: I0109 13:48:55.715136 4919 util.go:48] "No ready sandbox for pod can be found. 
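Note the durationBeforeRetry progression in the two nestedpendingoperations errors: the failed SetUp at 13:48:54 was requeued after 1s, the one at 13:48:55 after 2s, so kubelet's per-volume operation queue is backing off exponentially rather than retrying at a fixed rate. A small sketch of that doubling-with-a-cap policy follows; the initial delay and cap are illustrative assumptions, not kubelet's exact constants:

```go
package main

import (
	"fmt"
	"time"
)

// nextBackoff doubles the wait after each consecutive failure and caps it,
// mirroring the durationBeforeRetry progression visible in the log (1s -> 2s).
func nextBackoff(current, max time.Duration) time.Duration {
	if current <= 0 {
		return time.Second // assumed starting delay
	}
	next := current * 2
	if next > max {
		return max
	}
	return next
}

func main() {
	var d time.Duration
	for i := 0; i < 5; i++ {
		d = nextBackoff(d, 2*time.Minute)
		fmt.Println(d) // 1s 2s 4s 8s 16s
	}
}
```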
Jan 09 13:48:55 crc kubenswrapper[4919]: I0109 13:48:55.787652 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48rdm\" (UniqueName: \"kubernetes.io/projected/3dab9193-1d11-452c-87e8-371ffd717dde-kube-api-access-48rdm\") pod \"3dab9193-1d11-452c-87e8-371ffd717dde\" (UID: \"3dab9193-1d11-452c-87e8-371ffd717dde\") "
Jan 09 13:48:55 crc kubenswrapper[4919]: I0109 13:48:55.787728 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3dab9193-1d11-452c-87e8-371ffd717dde-config\") pod \"3dab9193-1d11-452c-87e8-371ffd717dde\" (UID: \"3dab9193-1d11-452c-87e8-371ffd717dde\") "
Jan 09 13:48:55 crc kubenswrapper[4919]: I0109 13:48:55.787883 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3dab9193-1d11-452c-87e8-371ffd717dde-dns-svc\") pod \"3dab9193-1d11-452c-87e8-371ffd717dde\" (UID: \"3dab9193-1d11-452c-87e8-371ffd717dde\") "
Jan 09 13:48:55 crc kubenswrapper[4919]: I0109 13:48:55.793045 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3dab9193-1d11-452c-87e8-371ffd717dde-kube-api-access-48rdm" (OuterVolumeSpecName: "kube-api-access-48rdm") pod "3dab9193-1d11-452c-87e8-371ffd717dde" (UID: "3dab9193-1d11-452c-87e8-371ffd717dde"). InnerVolumeSpecName "kube-api-access-48rdm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 13:48:55 crc kubenswrapper[4919]: I0109 13:48:55.807504 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3dab9193-1d11-452c-87e8-371ffd717dde-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3dab9193-1d11-452c-87e8-371ffd717dde" (UID: "3dab9193-1d11-452c-87e8-371ffd717dde"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 13:48:55 crc kubenswrapper[4919]: I0109 13:48:55.809363 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3dab9193-1d11-452c-87e8-371ffd717dde-config" (OuterVolumeSpecName: "config") pod "3dab9193-1d11-452c-87e8-371ffd717dde" (UID: "3dab9193-1d11-452c-87e8-371ffd717dde"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 13:48:55 crc kubenswrapper[4919]: I0109 13:48:55.892652 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48rdm\" (UniqueName: \"kubernetes.io/projected/3dab9193-1d11-452c-87e8-371ffd717dde-kube-api-access-48rdm\") on node \"crc\" DevicePath \"\""
Jan 09 13:48:55 crc kubenswrapper[4919]: I0109 13:48:55.893060 4919 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3dab9193-1d11-452c-87e8-371ffd717dde-config\") on node \"crc\" DevicePath \"\""
Jan 09 13:48:55 crc kubenswrapper[4919]: I0109 13:48:55.893072 4919 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3dab9193-1d11-452c-87e8-371ffd717dde-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 09 13:48:55 crc kubenswrapper[4919]: I0109 13:48:55.922949 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-75d2-account-create-update-rpcwm"]
Jan 09 13:48:55 crc kubenswrapper[4919]: E0109 13:48:55.923495 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dab9193-1d11-452c-87e8-371ffd717dde" containerName="init"
Jan 09 13:48:55 crc kubenswrapper[4919]: I0109 13:48:55.923516 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dab9193-1d11-452c-87e8-371ffd717dde" containerName="init"
Jan 09 13:48:55 crc kubenswrapper[4919]: I0109 13:48:55.923753 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="3dab9193-1d11-452c-87e8-371ffd717dde" containerName="init"
Jan 09 13:48:55 crc kubenswrapper[4919]: I0109 13:48:55.924470 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-75d2-account-create-update-rpcwm"
Jan 09 13:48:55 crc kubenswrapper[4919]: I0109 13:48:55.933328 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Jan 09 13:48:55 crc kubenswrapper[4919]: I0109 13:48:55.934400 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-75d2-account-create-update-rpcwm"]
Jan 09 13:48:55 crc kubenswrapper[4919]: I0109 13:48:55.987432 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-69v6c"]
Jan 09 13:48:55 crc kubenswrapper[4919]: I0109 13:48:55.988741 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-69v6c"
Jan 09 13:48:55 crc kubenswrapper[4919]: I0109 13:48:55.994877 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76wn6\" (UniqueName: \"kubernetes.io/projected/4c5a8c9c-3770-4ed3-ac8a-919fc7bf82e0-kube-api-access-76wn6\") pod \"glance-75d2-account-create-update-rpcwm\" (UID: \"4c5a8c9c-3770-4ed3-ac8a-919fc7bf82e0\") " pod="openstack/glance-75d2-account-create-update-rpcwm"
Jan 09 13:48:55 crc kubenswrapper[4919]: I0109 13:48:55.994959 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c5a8c9c-3770-4ed3-ac8a-919fc7bf82e0-operator-scripts\") pod \"glance-75d2-account-create-update-rpcwm\" (UID: \"4c5a8c9c-3770-4ed3-ac8a-919fc7bf82e0\") " pod="openstack/glance-75d2-account-create-update-rpcwm"
Jan 09 13:48:55 crc kubenswrapper[4919]: I0109 13:48:55.997972 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-69v6c"]
Jan 09 13:48:56 crc kubenswrapper[4919]: I0109 13:48:56.096351 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76wn6\" (UniqueName: \"kubernetes.io/projected/4c5a8c9c-3770-4ed3-ac8a-919fc7bf82e0-kube-api-access-76wn6\") pod \"glance-75d2-account-create-update-rpcwm\" (UID: \"4c5a8c9c-3770-4ed3-ac8a-919fc7bf82e0\") " pod="openstack/glance-75d2-account-create-update-rpcwm"
Jan 09 13:48:56 crc kubenswrapper[4919]: I0109 13:48:56.096457 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb24524c-29a1-45e7-bbea-76c32b236d1d-operator-scripts\") pod \"glance-db-create-69v6c\" (UID: \"bb24524c-29a1-45e7-bbea-76c32b236d1d\") " pod="openstack/glance-db-create-69v6c"
Jan 09 13:48:56 crc kubenswrapper[4919]: I0109 13:48:56.096507 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmpcb\" (UniqueName: \"kubernetes.io/projected/bb24524c-29a1-45e7-bbea-76c32b236d1d-kube-api-access-rmpcb\") pod \"glance-db-create-69v6c\" (UID: \"bb24524c-29a1-45e7-bbea-76c32b236d1d\") " pod="openstack/glance-db-create-69v6c"
Jan 09 13:48:56 crc kubenswrapper[4919]: I0109 13:48:56.096532 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c5a8c9c-3770-4ed3-ac8a-919fc7bf82e0-operator-scripts\") pod \"glance-75d2-account-create-update-rpcwm\" (UID: \"4c5a8c9c-3770-4ed3-ac8a-919fc7bf82e0\") " pod="openstack/glance-75d2-account-create-update-rpcwm"
Jan 09 13:48:56 crc kubenswrapper[4919]: I0109 13:48:56.097498 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c5a8c9c-3770-4ed3-ac8a-919fc7bf82e0-operator-scripts\") pod \"glance-75d2-account-create-update-rpcwm\" (UID: \"4c5a8c9c-3770-4ed3-ac8a-919fc7bf82e0\") " pod="openstack/glance-75d2-account-create-update-rpcwm"
Jan 09 13:48:56 crc kubenswrapper[4919]: I0109 13:48:56.115281 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76wn6\" (UniqueName: \"kubernetes.io/projected/4c5a8c9c-3770-4ed3-ac8a-919fc7bf82e0-kube-api-access-76wn6\") pod \"glance-75d2-account-create-update-rpcwm\" (UID: \"4c5a8c9c-3770-4ed3-ac8a-919fc7bf82e0\") " pod="openstack/glance-75d2-account-create-update-rpcwm"
pod="openstack/glance-75d2-account-create-update-rpcwm" Jan 09 13:48:56 crc kubenswrapper[4919]: I0109 13:48:56.198586 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmpcb\" (UniqueName: \"kubernetes.io/projected/bb24524c-29a1-45e7-bbea-76c32b236d1d-kube-api-access-rmpcb\") pod \"glance-db-create-69v6c\" (UID: \"bb24524c-29a1-45e7-bbea-76c32b236d1d\") " pod="openstack/glance-db-create-69v6c" Jan 09 13:48:56 crc kubenswrapper[4919]: I0109 13:48:56.198778 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb24524c-29a1-45e7-bbea-76c32b236d1d-operator-scripts\") pod \"glance-db-create-69v6c\" (UID: \"bb24524c-29a1-45e7-bbea-76c32b236d1d\") " pod="openstack/glance-db-create-69v6c" Jan 09 13:48:56 crc kubenswrapper[4919]: I0109 13:48:56.199804 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb24524c-29a1-45e7-bbea-76c32b236d1d-operator-scripts\") pod \"glance-db-create-69v6c\" (UID: \"bb24524c-29a1-45e7-bbea-76c32b236d1d\") " pod="openstack/glance-db-create-69v6c" Jan 09 13:48:56 crc kubenswrapper[4919]: I0109 13:48:56.217018 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmpcb\" (UniqueName: \"kubernetes.io/projected/bb24524c-29a1-45e7-bbea-76c32b236d1d-kube-api-access-rmpcb\") pod \"glance-db-create-69v6c\" (UID: \"bb24524c-29a1-45e7-bbea-76c32b236d1d\") " pod="openstack/glance-db-create-69v6c" Jan 09 13:48:56 crc kubenswrapper[4919]: I0109 13:48:56.282331 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-75d2-account-create-update-rpcwm" Jan 09 13:48:56 crc kubenswrapper[4919]: I0109 13:48:56.327673 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-ptnj7" event={"ID":"34a3604c-a8d7-4927-af88-a99eef3393fd","Type":"ContainerStarted","Data":"f3efcce0647688716c6cd941ec148e4d290c12981d2a1505de61c5cfc33c840b"} Jan 09 13:48:56 crc kubenswrapper[4919]: I0109 13:48:56.328034 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67fdf7998c-ptnj7" Jan 09 13:48:56 crc kubenswrapper[4919]: I0109 13:48:56.327845 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-69v6c" Jan 09 13:48:56 crc kubenswrapper[4919]: I0109 13:48:56.329805 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f9f9f545f-xxdrb" event={"ID":"3dab9193-1d11-452c-87e8-371ffd717dde","Type":"ContainerDied","Data":"1491a094d06d0cca6d77ae609df656878fd52905ddb1aab106119063c7c681b5"} Jan 09 13:48:56 crc kubenswrapper[4919]: I0109 13:48:56.329882 4919 scope.go:117] "RemoveContainer" containerID="73ae277f668bede879420dacd80e1f6f68a8b7c0ea59bddd8372ebe348a2b448" Jan 09 13:48:56 crc kubenswrapper[4919]: I0109 13:48:56.330010 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f9f9f545f-xxdrb" Jan 09 13:48:56 crc kubenswrapper[4919]: I0109 13:48:56.385356 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c554cfdf-lhw5c" event={"ID":"c5055335-3873-4f7a-87d3-bab319a9839c","Type":"ContainerStarted","Data":"2e191cdc43721217f9b7f236c9a44e1dd98682e7e507c414263e9dd413901a58"} Jan 09 13:48:56 crc kubenswrapper[4919]: I0109 13:48:56.387178 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67fdf7998c-ptnj7" podStartSLOduration=3.387158542 podStartE2EDuration="3.387158542s" podCreationTimestamp="2026-01-09 13:48:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:48:56.347346163 +0000 UTC m=+1115.895185633" watchObservedRunningTime="2026-01-09 13:48:56.387158542 +0000 UTC m=+1115.934998162" Jan 09 13:48:56 crc kubenswrapper[4919]: I0109 13:48:56.417117 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7c554cfdf-lhw5c" podStartSLOduration=2.950340085 podStartE2EDuration="3.417101347s" podCreationTimestamp="2026-01-09 13:48:53 +0000 UTC" firstStartedPulling="2026-01-09 13:48:54.664833089 +0000 UTC m=+1114.212672539" lastFinishedPulling="2026-01-09 13:48:55.131594351 +0000 UTC m=+1114.679433801" observedRunningTime="2026-01-09 13:48:56.411142368 +0000 UTC m=+1115.958981818" watchObservedRunningTime="2026-01-09 13:48:56.417101347 +0000 UTC m=+1115.964940797" Jan 09 13:48:56 crc kubenswrapper[4919]: I0109 13:48:56.459843 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f9f9f545f-xxdrb"] Jan 09 13:48:56 crc kubenswrapper[4919]: I0109 13:48:56.468202 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f9f9f545f-xxdrb"] Jan 09 13:48:56 crc kubenswrapper[4919]: I0109 13:48:56.763382 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36275c2d-b4fd-42de-ba91-b067ec9299c7" path="/var/lib/kubelet/pods/36275c2d-b4fd-42de-ba91-b067ec9299c7/volumes" Jan 09 13:48:56 crc kubenswrapper[4919]: I0109 13:48:56.764299 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3dab9193-1d11-452c-87e8-371ffd717dde" path="/var/lib/kubelet/pods/3dab9193-1d11-452c-87e8-371ffd717dde/volumes" Jan 09 13:48:56 crc kubenswrapper[4919]: I0109 13:48:56.765119 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b76bf527-b2ca-4359-90f2-b9fdf5767d66" path="/var/lib/kubelet/pods/b76bf527-b2ca-4359-90f2-b9fdf5767d66/volumes" Jan 09 13:48:56 crc kubenswrapper[4919]: I0109 13:48:56.770359 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-75d2-account-create-update-rpcwm"] Jan 09 13:48:56 crc kubenswrapper[4919]: I0109 13:48:56.872182 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-69v6c"] Jan 09 13:48:57 crc kubenswrapper[4919]: I0109 13:48:57.335132 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-lqc74"] Jan 09 13:48:57 crc kubenswrapper[4919]: I0109 13:48:57.336665 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-lqc74" Jan 09 13:48:57 crc kubenswrapper[4919]: I0109 13:48:57.339083 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 09 13:48:57 crc kubenswrapper[4919]: I0109 13:48:57.344000 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-lqc74"] Jan 09 13:48:57 crc kubenswrapper[4919]: I0109 13:48:57.399902 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-69v6c" event={"ID":"bb24524c-29a1-45e7-bbea-76c32b236d1d","Type":"ContainerStarted","Data":"785535cb288350778e32b34086653bb3a8c95948864a1d79155aae0821a687fd"} Jan 09 13:48:57 crc kubenswrapper[4919]: I0109 13:48:57.399934 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-69v6c" event={"ID":"bb24524c-29a1-45e7-bbea-76c32b236d1d","Type":"ContainerStarted","Data":"1bddeb87f06ee6b431a9caef6809f5c29c85c8a50d4bd32f8aede534b38c7412"} Jan 09 13:48:57 crc kubenswrapper[4919]: I0109 13:48:57.404275 4919 generic.go:334] "Generic (PLEG): container finished" podID="4c5a8c9c-3770-4ed3-ac8a-919fc7bf82e0" containerID="3f57e4304f8a937d060148f586b1ee9049ce09f06925a79ce28be71fe674db31" exitCode=0 Jan 09 13:48:57 crc kubenswrapper[4919]: I0109 13:48:57.404354 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-75d2-account-create-update-rpcwm" event={"ID":"4c5a8c9c-3770-4ed3-ac8a-919fc7bf82e0","Type":"ContainerDied","Data":"3f57e4304f8a937d060148f586b1ee9049ce09f06925a79ce28be71fe674db31"} Jan 09 13:48:57 crc kubenswrapper[4919]: I0109 13:48:57.404382 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-75d2-account-create-update-rpcwm" event={"ID":"4c5a8c9c-3770-4ed3-ac8a-919fc7bf82e0","Type":"ContainerStarted","Data":"cd21f2c49a93fd7a584a818abaccc0b20e81ef25ae5b04deb883cd1b54adb796"} Jan 09 13:48:57 crc kubenswrapper[4919]: I0109 13:48:57.409539 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7c554cfdf-lhw5c" Jan 09 13:48:57 crc kubenswrapper[4919]: I0109 13:48:57.418157 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-69v6c" podStartSLOduration=2.418139191 podStartE2EDuration="2.418139191s" podCreationTimestamp="2026-01-09 13:48:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:48:57.414786968 +0000 UTC m=+1116.962626438" watchObservedRunningTime="2026-01-09 13:48:57.418139191 +0000 UTC m=+1116.965978641" Jan 09 13:48:57 crc kubenswrapper[4919]: I0109 13:48:57.432747 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e410b2e-f807-4330-a707-340e589cff69-operator-scripts\") pod \"root-account-create-update-lqc74\" (UID: \"0e410b2e-f807-4330-a707-340e589cff69\") " pod="openstack/root-account-create-update-lqc74" Jan 09 13:48:57 crc kubenswrapper[4919]: I0109 13:48:57.432812 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4f5p\" (UniqueName: \"kubernetes.io/projected/0e410b2e-f807-4330-a707-340e589cff69-kube-api-access-j4f5p\") pod \"root-account-create-update-lqc74\" (UID: \"0e410b2e-f807-4330-a707-340e589cff69\") " pod="openstack/root-account-create-update-lqc74" Jan 09 13:48:57 crc 
kubenswrapper[4919]: I0109 13:48:57.534040 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e410b2e-f807-4330-a707-340e589cff69-operator-scripts\") pod \"root-account-create-update-lqc74\" (UID: \"0e410b2e-f807-4330-a707-340e589cff69\") " pod="openstack/root-account-create-update-lqc74" Jan 09 13:48:57 crc kubenswrapper[4919]: I0109 13:48:57.534089 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f55583f6-0518-4977-89a9-e4f12b0eae89-etc-swift\") pod \"swift-storage-0\" (UID: \"f55583f6-0518-4977-89a9-e4f12b0eae89\") " pod="openstack/swift-storage-0" Jan 09 13:48:57 crc kubenswrapper[4919]: I0109 13:48:57.534114 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4f5p\" (UniqueName: \"kubernetes.io/projected/0e410b2e-f807-4330-a707-340e589cff69-kube-api-access-j4f5p\") pod \"root-account-create-update-lqc74\" (UID: \"0e410b2e-f807-4330-a707-340e589cff69\") " pod="openstack/root-account-create-update-lqc74" Jan 09 13:48:57 crc kubenswrapper[4919]: E0109 13:48:57.535935 4919 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 09 13:48:57 crc kubenswrapper[4919]: E0109 13:48:57.535954 4919 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 09 13:48:57 crc kubenswrapper[4919]: E0109 13:48:57.535996 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f55583f6-0518-4977-89a9-e4f12b0eae89-etc-swift podName:f55583f6-0518-4977-89a9-e4f12b0eae89 nodeName:}" failed. No retries permitted until 2026-01-09 13:49:01.535982431 +0000 UTC m=+1121.083821881 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f55583f6-0518-4977-89a9-e4f12b0eae89-etc-swift") pod "swift-storage-0" (UID: "f55583f6-0518-4977-89a9-e4f12b0eae89") : configmap "swift-ring-files" not found Jan 09 13:48:57 crc kubenswrapper[4919]: I0109 13:48:57.543097 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e410b2e-f807-4330-a707-340e589cff69-operator-scripts\") pod \"root-account-create-update-lqc74\" (UID: \"0e410b2e-f807-4330-a707-340e589cff69\") " pod="openstack/root-account-create-update-lqc74" Jan 09 13:48:57 crc kubenswrapper[4919]: I0109 13:48:57.560959 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4f5p\" (UniqueName: \"kubernetes.io/projected/0e410b2e-f807-4330-a707-340e589cff69-kube-api-access-j4f5p\") pod \"root-account-create-update-lqc74\" (UID: \"0e410b2e-f807-4330-a707-340e589cff69\") " pod="openstack/root-account-create-update-lqc74" Jan 09 13:48:57 crc kubenswrapper[4919]: I0109 13:48:57.661846 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-lqc74" Jan 09 13:48:58 crc kubenswrapper[4919]: I0109 13:48:58.419325 4919 generic.go:334] "Generic (PLEG): container finished" podID="bb24524c-29a1-45e7-bbea-76c32b236d1d" containerID="785535cb288350778e32b34086653bb3a8c95948864a1d79155aae0821a687fd" exitCode=0 Jan 09 13:48:58 crc kubenswrapper[4919]: I0109 13:48:58.420292 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-69v6c" event={"ID":"bb24524c-29a1-45e7-bbea-76c32b236d1d","Type":"ContainerDied","Data":"785535cb288350778e32b34086653bb3a8c95948864a1d79155aae0821a687fd"} Jan 09 13:48:59 crc kubenswrapper[4919]: I0109 13:48:59.979103 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-69v6c" Jan 09 13:48:59 crc kubenswrapper[4919]: I0109 13:48:59.986382 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-75d2-account-create-update-rpcwm" Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.084826 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmpcb\" (UniqueName: \"kubernetes.io/projected/bb24524c-29a1-45e7-bbea-76c32b236d1d-kube-api-access-rmpcb\") pod \"bb24524c-29a1-45e7-bbea-76c32b236d1d\" (UID: \"bb24524c-29a1-45e7-bbea-76c32b236d1d\") " Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.085043 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb24524c-29a1-45e7-bbea-76c32b236d1d-operator-scripts\") pod \"bb24524c-29a1-45e7-bbea-76c32b236d1d\" (UID: \"bb24524c-29a1-45e7-bbea-76c32b236d1d\") " Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.085100 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76wn6\" (UniqueName: \"kubernetes.io/projected/4c5a8c9c-3770-4ed3-ac8a-919fc7bf82e0-kube-api-access-76wn6\") pod \"4c5a8c9c-3770-4ed3-ac8a-919fc7bf82e0\" (UID: \"4c5a8c9c-3770-4ed3-ac8a-919fc7bf82e0\") " Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.085172 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c5a8c9c-3770-4ed3-ac8a-919fc7bf82e0-operator-scripts\") pod \"4c5a8c9c-3770-4ed3-ac8a-919fc7bf82e0\" (UID: \"4c5a8c9c-3770-4ed3-ac8a-919fc7bf82e0\") " Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.085618 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb24524c-29a1-45e7-bbea-76c32b236d1d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bb24524c-29a1-45e7-bbea-76c32b236d1d" (UID: "bb24524c-29a1-45e7-bbea-76c32b236d1d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.085946 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c5a8c9c-3770-4ed3-ac8a-919fc7bf82e0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4c5a8c9c-3770-4ed3-ac8a-919fc7bf82e0" (UID: "4c5a8c9c-3770-4ed3-ac8a-919fc7bf82e0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.092436 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb24524c-29a1-45e7-bbea-76c32b236d1d-kube-api-access-rmpcb" (OuterVolumeSpecName: "kube-api-access-rmpcb") pod "bb24524c-29a1-45e7-bbea-76c32b236d1d" (UID: "bb24524c-29a1-45e7-bbea-76c32b236d1d"). InnerVolumeSpecName "kube-api-access-rmpcb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.092483 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c5a8c9c-3770-4ed3-ac8a-919fc7bf82e0-kube-api-access-76wn6" (OuterVolumeSpecName: "kube-api-access-76wn6") pod "4c5a8c9c-3770-4ed3-ac8a-919fc7bf82e0" (UID: "4c5a8c9c-3770-4ed3-ac8a-919fc7bf82e0"). InnerVolumeSpecName "kube-api-access-76wn6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.167413 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-69wcc"] Jan 09 13:49:00 crc kubenswrapper[4919]: E0109 13:49:00.168044 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c5a8c9c-3770-4ed3-ac8a-919fc7bf82e0" containerName="mariadb-account-create-update" Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.168068 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c5a8c9c-3770-4ed3-ac8a-919fc7bf82e0" containerName="mariadb-account-create-update" Jan 09 13:49:00 crc kubenswrapper[4919]: E0109 13:49:00.168112 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb24524c-29a1-45e7-bbea-76c32b236d1d" containerName="mariadb-database-create" Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.168118 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb24524c-29a1-45e7-bbea-76c32b236d1d" containerName="mariadb-database-create" Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.168292 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb24524c-29a1-45e7-bbea-76c32b236d1d" containerName="mariadb-database-create" Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.168314 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c5a8c9c-3770-4ed3-ac8a-919fc7bf82e0" containerName="mariadb-account-create-update" Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.168925 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-69wcc" Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.176876 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-69wcc"] Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.192002 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76wn6\" (UniqueName: \"kubernetes.io/projected/4c5a8c9c-3770-4ed3-ac8a-919fc7bf82e0-kube-api-access-76wn6\") on node \"crc\" DevicePath \"\"" Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.192037 4919 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c5a8c9c-3770-4ed3-ac8a-919fc7bf82e0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.192048 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmpcb\" (UniqueName: \"kubernetes.io/projected/bb24524c-29a1-45e7-bbea-76c32b236d1d-kube-api-access-rmpcb\") on node \"crc\" DevicePath \"\"" Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.192057 4919 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb24524c-29a1-45e7-bbea-76c32b236d1d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.265263 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-dbd0-account-create-update-q42z2"] Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.266792 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-dbd0-account-create-update-q42z2" Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.269509 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.272121 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-dbd0-account-create-update-q42z2"] Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.293287 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5g45\" (UniqueName: \"kubernetes.io/projected/0d94fb93-5c21-4357-8efd-48b8285d4ad9-kube-api-access-r5g45\") pod \"keystone-db-create-69wcc\" (UID: \"0d94fb93-5c21-4357-8efd-48b8285d4ad9\") " pod="openstack/keystone-db-create-69wcc" Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.293336 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d94fb93-5c21-4357-8efd-48b8285d4ad9-operator-scripts\") pod \"keystone-db-create-69wcc\" (UID: \"0d94fb93-5c21-4357-8efd-48b8285d4ad9\") " pod="openstack/keystone-db-create-69wcc" Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.348086 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-lqc74"] Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.395574 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svcxr\" (UniqueName: \"kubernetes.io/projected/73a3d3cb-e4d2-4d33-8c46-27b6afa433fa-kube-api-access-svcxr\") pod \"keystone-dbd0-account-create-update-q42z2\" (UID: \"73a3d3cb-e4d2-4d33-8c46-27b6afa433fa\") " pod="openstack/keystone-dbd0-account-create-update-q42z2" Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.395627 4919 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5g45\" (UniqueName: \"kubernetes.io/projected/0d94fb93-5c21-4357-8efd-48b8285d4ad9-kube-api-access-r5g45\") pod \"keystone-db-create-69wcc\" (UID: \"0d94fb93-5c21-4357-8efd-48b8285d4ad9\") " pod="openstack/keystone-db-create-69wcc" Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.395648 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73a3d3cb-e4d2-4d33-8c46-27b6afa433fa-operator-scripts\") pod \"keystone-dbd0-account-create-update-q42z2\" (UID: \"73a3d3cb-e4d2-4d33-8c46-27b6afa433fa\") " pod="openstack/keystone-dbd0-account-create-update-q42z2" Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.395683 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d94fb93-5c21-4357-8efd-48b8285d4ad9-operator-scripts\") pod \"keystone-db-create-69wcc\" (UID: \"0d94fb93-5c21-4357-8efd-48b8285d4ad9\") " pod="openstack/keystone-db-create-69wcc" Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.396415 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d94fb93-5c21-4357-8efd-48b8285d4ad9-operator-scripts\") pod \"keystone-db-create-69wcc\" (UID: \"0d94fb93-5c21-4357-8efd-48b8285d4ad9\") " pod="openstack/keystone-db-create-69wcc" Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.414656 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5g45\" (UniqueName: \"kubernetes.io/projected/0d94fb93-5c21-4357-8efd-48b8285d4ad9-kube-api-access-r5g45\") pod \"keystone-db-create-69wcc\" (UID: \"0d94fb93-5c21-4357-8efd-48b8285d4ad9\") " pod="openstack/keystone-db-create-69wcc" Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.446065 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lqc74" event={"ID":"0e410b2e-f807-4330-a707-340e589cff69","Type":"ContainerStarted","Data":"91a9c292794ab1785447134ec33698b5f102997fe7ab2e88e6810c832adf41b1"} Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.448095 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-75d2-account-create-update-rpcwm" event={"ID":"4c5a8c9c-3770-4ed3-ac8a-919fc7bf82e0","Type":"ContainerDied","Data":"cd21f2c49a93fd7a584a818abaccc0b20e81ef25ae5b04deb883cd1b54adb796"} Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.448128 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-75d2-account-create-update-rpcwm" Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.448139 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd21f2c49a93fd7a584a818abaccc0b20e81ef25ae5b04deb883cd1b54adb796" Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.449932 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-7lmg7" event={"ID":"b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7","Type":"ContainerStarted","Data":"90f92e4baabcb14e067fe05047c835af160ea0ac38b0a9a0b2b580ff5596777e"} Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.453112 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"68449649-bcc2-41c2-9a6a-a91452a48282","Type":"ContainerStarted","Data":"a5adcb16387891957c1e80038290745b70859f009c8996a6525d9092f0e714e3"} Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.455727 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-69v6c" event={"ID":"bb24524c-29a1-45e7-bbea-76c32b236d1d","Type":"ContainerDied","Data":"1bddeb87f06ee6b431a9caef6809f5c29c85c8a50d4bd32f8aede534b38c7412"} Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.455774 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bddeb87f06ee6b431a9caef6809f5c29c85c8a50d4bd32f8aede534b38c7412" Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.455834 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-69v6c" Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.463310 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-wdlhd"] Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.466931 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-wdlhd" Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.470448 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-wdlhd"] Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.475355 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-7lmg7" podStartSLOduration=2.516692104 podStartE2EDuration="7.475339058s" podCreationTimestamp="2026-01-09 13:48:53 +0000 UTC" firstStartedPulling="2026-01-09 13:48:55.129661633 +0000 UTC m=+1114.677501083" lastFinishedPulling="2026-01-09 13:49:00.088308587 +0000 UTC m=+1119.636148037" observedRunningTime="2026-01-09 13:49:00.473015591 +0000 UTC m=+1120.020855041" watchObservedRunningTime="2026-01-09 13:49:00.475339058 +0000 UTC m=+1120.023178498" Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.498947 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svcxr\" (UniqueName: \"kubernetes.io/projected/73a3d3cb-e4d2-4d33-8c46-27b6afa433fa-kube-api-access-svcxr\") pod \"keystone-dbd0-account-create-update-q42z2\" (UID: \"73a3d3cb-e4d2-4d33-8c46-27b6afa433fa\") " pod="openstack/keystone-dbd0-account-create-update-q42z2" Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.499028 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73a3d3cb-e4d2-4d33-8c46-27b6afa433fa-operator-scripts\") pod \"keystone-dbd0-account-create-update-q42z2\" (UID: \"73a3d3cb-e4d2-4d33-8c46-27b6afa433fa\") " pod="openstack/keystone-dbd0-account-create-update-q42z2" Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.500075 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73a3d3cb-e4d2-4d33-8c46-27b6afa433fa-operator-scripts\") pod \"keystone-dbd0-account-create-update-q42z2\" (UID: \"73a3d3cb-e4d2-4d33-8c46-27b6afa433fa\") " pod="openstack/keystone-dbd0-account-create-update-q42z2" Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.516912 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svcxr\" (UniqueName: \"kubernetes.io/projected/73a3d3cb-e4d2-4d33-8c46-27b6afa433fa-kube-api-access-svcxr\") pod \"keystone-dbd0-account-create-update-q42z2\" (UID: \"73a3d3cb-e4d2-4d33-8c46-27b6afa433fa\") " pod="openstack/keystone-dbd0-account-create-update-q42z2" Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.538153 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-69wcc" Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.584522 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-f265-account-create-update-zz4xh"] Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.588246 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f265-account-create-update-zz4xh" Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.590585 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.591150 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-dbd0-account-create-update-q42z2" Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.601880 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x2w5\" (UniqueName: \"kubernetes.io/projected/f403837b-7672-496e-bdf1-9334074246bd-kube-api-access-8x2w5\") pod \"placement-db-create-wdlhd\" (UID: \"f403837b-7672-496e-bdf1-9334074246bd\") " pod="openstack/placement-db-create-wdlhd" Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.602165 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f403837b-7672-496e-bdf1-9334074246bd-operator-scripts\") pod \"placement-db-create-wdlhd\" (UID: \"f403837b-7672-496e-bdf1-9334074246bd\") " pod="openstack/placement-db-create-wdlhd" Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.604310 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-f265-account-create-update-zz4xh"] Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.705582 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fbc589b0-3f7a-45c0-9fcd-1f69573d79c9-operator-scripts\") pod \"placement-f265-account-create-update-zz4xh\" (UID: \"fbc589b0-3f7a-45c0-9fcd-1f69573d79c9\") " pod="openstack/placement-f265-account-create-update-zz4xh" Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.705994 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f403837b-7672-496e-bdf1-9334074246bd-operator-scripts\") pod \"placement-db-create-wdlhd\" (UID: \"f403837b-7672-496e-bdf1-9334074246bd\") " pod="openstack/placement-db-create-wdlhd" Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.706114 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8x2w5\" (UniqueName: \"kubernetes.io/projected/f403837b-7672-496e-bdf1-9334074246bd-kube-api-access-8x2w5\") pod \"placement-db-create-wdlhd\" (UID: \"f403837b-7672-496e-bdf1-9334074246bd\") " pod="openstack/placement-db-create-wdlhd" Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.706275 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9dds\" (UniqueName: \"kubernetes.io/projected/fbc589b0-3f7a-45c0-9fcd-1f69573d79c9-kube-api-access-l9dds\") pod \"placement-f265-account-create-update-zz4xh\" (UID: \"fbc589b0-3f7a-45c0-9fcd-1f69573d79c9\") " pod="openstack/placement-f265-account-create-update-zz4xh" Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.707103 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f403837b-7672-496e-bdf1-9334074246bd-operator-scripts\") pod \"placement-db-create-wdlhd\" (UID: \"f403837b-7672-496e-bdf1-9334074246bd\") " pod="openstack/placement-db-create-wdlhd" Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.731256 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8x2w5\" (UniqueName: \"kubernetes.io/projected/f403837b-7672-496e-bdf1-9334074246bd-kube-api-access-8x2w5\") pod \"placement-db-create-wdlhd\" (UID: \"f403837b-7672-496e-bdf1-9334074246bd\") " pod="openstack/placement-db-create-wdlhd" Jan 09 13:49:00 crc 
kubenswrapper[4919]: I0109 13:49:00.790081 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-wdlhd" Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.810416 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9dds\" (UniqueName: \"kubernetes.io/projected/fbc589b0-3f7a-45c0-9fcd-1f69573d79c9-kube-api-access-l9dds\") pod \"placement-f265-account-create-update-zz4xh\" (UID: \"fbc589b0-3f7a-45c0-9fcd-1f69573d79c9\") " pod="openstack/placement-f265-account-create-update-zz4xh" Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.810524 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fbc589b0-3f7a-45c0-9fcd-1f69573d79c9-operator-scripts\") pod \"placement-f265-account-create-update-zz4xh\" (UID: \"fbc589b0-3f7a-45c0-9fcd-1f69573d79c9\") " pod="openstack/placement-f265-account-create-update-zz4xh" Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.812016 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fbc589b0-3f7a-45c0-9fcd-1f69573d79c9-operator-scripts\") pod \"placement-f265-account-create-update-zz4xh\" (UID: \"fbc589b0-3f7a-45c0-9fcd-1f69573d79c9\") " pod="openstack/placement-f265-account-create-update-zz4xh" Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.832418 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9dds\" (UniqueName: \"kubernetes.io/projected/fbc589b0-3f7a-45c0-9fcd-1f69573d79c9-kube-api-access-l9dds\") pod \"placement-f265-account-create-update-zz4xh\" (UID: \"fbc589b0-3f7a-45c0-9fcd-1f69573d79c9\") " pod="openstack/placement-f265-account-create-update-zz4xh" Jan 09 13:49:00 crc kubenswrapper[4919]: I0109 13:49:00.923174 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-f265-account-create-update-zz4xh" Jan 09 13:49:01 crc kubenswrapper[4919]: I0109 13:49:01.015594 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-69wcc"] Jan 09 13:49:01 crc kubenswrapper[4919]: I0109 13:49:01.155264 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-dbd0-account-create-update-q42z2"] Jan 09 13:49:01 crc kubenswrapper[4919]: W0109 13:49:01.174861 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73a3d3cb_e4d2_4d33_8c46_27b6afa433fa.slice/crio-09bdd58d8829ee4bad94198ca7d092868e83404d6ef92ccfc97c1a548fecbb0d WatchSource:0}: Error finding container 09bdd58d8829ee4bad94198ca7d092868e83404d6ef92ccfc97c1a548fecbb0d: Status 404 returned error can't find the container with id 09bdd58d8829ee4bad94198ca7d092868e83404d6ef92ccfc97c1a548fecbb0d Jan 09 13:49:01 crc kubenswrapper[4919]: I0109 13:49:01.341881 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-wdlhd"] Jan 09 13:49:01 crc kubenswrapper[4919]: W0109 13:49:01.345560 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf403837b_7672_496e_bdf1_9334074246bd.slice/crio-131632be37cbd87c64c80b7f5b0e9beb187fb36815668279a314c1dc0ea73c7c WatchSource:0}: Error finding container 131632be37cbd87c64c80b7f5b0e9beb187fb36815668279a314c1dc0ea73c7c: Status 404 returned error can't find the container with id 131632be37cbd87c64c80b7f5b0e9beb187fb36815668279a314c1dc0ea73c7c Jan 09 13:49:01 crc kubenswrapper[4919]: I0109 13:49:01.452761 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-f265-account-create-update-zz4xh"] Jan 09 13:49:01 crc kubenswrapper[4919]: W0109 13:49:01.464204 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfbc589b0_3f7a_45c0_9fcd_1f69573d79c9.slice/crio-f3909d7d2b74ab39ebb95c787270db8406d8360b4b1e3776a80e5510e0cd239a WatchSource:0}: Error finding container f3909d7d2b74ab39ebb95c787270db8406d8360b4b1e3776a80e5510e0cd239a: Status 404 returned error can't find the container with id f3909d7d2b74ab39ebb95c787270db8406d8360b4b1e3776a80e5510e0cd239a Jan 09 13:49:01 crc kubenswrapper[4919]: I0109 13:49:01.467653 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-wdlhd" event={"ID":"f403837b-7672-496e-bdf1-9334074246bd","Type":"ContainerStarted","Data":"131632be37cbd87c64c80b7f5b0e9beb187fb36815668279a314c1dc0ea73c7c"} Jan 09 13:49:01 crc kubenswrapper[4919]: I0109 13:49:01.470769 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"68449649-bcc2-41c2-9a6a-a91452a48282","Type":"ContainerStarted","Data":"71d499f91dd50daf2fc0c4fc66d129a5678b5e5d0869205f6b18676bb6be8108"} Jan 09 13:49:01 crc kubenswrapper[4919]: I0109 13:49:01.470863 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 09 13:49:01 crc kubenswrapper[4919]: I0109 13:49:01.472733 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-dbd0-account-create-update-q42z2" event={"ID":"73a3d3cb-e4d2-4d33-8c46-27b6afa433fa","Type":"ContainerStarted","Data":"8f151431ca255d2c6528d9ba76af0fc54cfcf4ab5f8a1e66bff6682d28fb8fe1"} Jan 09 13:49:01 crc kubenswrapper[4919]: I0109 13:49:01.472772 4919 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-dbd0-account-create-update-q42z2" event={"ID":"73a3d3cb-e4d2-4d33-8c46-27b6afa433fa","Type":"ContainerStarted","Data":"09bdd58d8829ee4bad94198ca7d092868e83404d6ef92ccfc97c1a548fecbb0d"} Jan 09 13:49:01 crc kubenswrapper[4919]: I0109 13:49:01.476813 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-69wcc" event={"ID":"0d94fb93-5c21-4357-8efd-48b8285d4ad9","Type":"ContainerStarted","Data":"0d1162bd1f721137f911208bd44b21b01d8830967bcd1a09377bd62b195657d7"} Jan 09 13:49:01 crc kubenswrapper[4919]: I0109 13:49:01.476870 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-69wcc" event={"ID":"0d94fb93-5c21-4357-8efd-48b8285d4ad9","Type":"ContainerStarted","Data":"ead7a676fa70346db7b3d10f141632bc8da72d49b6ff1691bf0ef809bc4330a8"} Jan 09 13:49:01 crc kubenswrapper[4919]: I0109 13:49:01.479595 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lqc74" event={"ID":"0e410b2e-f807-4330-a707-340e589cff69","Type":"ContainerDied","Data":"aeb591faed3e8b6661b2747d68f4d7f79c02dfcd1a7759b2fe97932028f3c862"} Jan 09 13:49:01 crc kubenswrapper[4919]: I0109 13:49:01.479535 4919 generic.go:334] "Generic (PLEG): container finished" podID="0e410b2e-f807-4330-a707-340e589cff69" containerID="aeb591faed3e8b6661b2747d68f4d7f79c02dfcd1a7759b2fe97932028f3c862" exitCode=0 Jan 09 13:49:01 crc kubenswrapper[4919]: I0109 13:49:01.505405 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=5.832854799 podStartE2EDuration="8.505382404s" podCreationTimestamp="2026-01-09 13:48:53 +0000 UTC" firstStartedPulling="2026-01-09 13:48:54.887125734 +0000 UTC m=+1114.434965184" lastFinishedPulling="2026-01-09 13:48:57.559653339 +0000 UTC m=+1117.107492789" observedRunningTime="2026-01-09 13:49:01.498882582 +0000 UTC m=+1121.046722042" watchObservedRunningTime="2026-01-09 13:49:01.505382404 +0000 UTC m=+1121.053221854" Jan 09 13:49:01 crc kubenswrapper[4919]: I0109 13:49:01.518762 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-dbd0-account-create-update-q42z2" podStartSLOduration=1.5187421859999999 podStartE2EDuration="1.518742186s" podCreationTimestamp="2026-01-09 13:49:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:49:01.518033018 +0000 UTC m=+1121.065872478" watchObservedRunningTime="2026-01-09 13:49:01.518742186 +0000 UTC m=+1121.066581626" Jan 09 13:49:01 crc kubenswrapper[4919]: I0109 13:49:01.624483 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f55583f6-0518-4977-89a9-e4f12b0eae89-etc-swift\") pod \"swift-storage-0\" (UID: \"f55583f6-0518-4977-89a9-e4f12b0eae89\") " pod="openstack/swift-storage-0" Jan 09 13:49:01 crc kubenswrapper[4919]: E0109 13:49:01.625155 4919 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 09 13:49:01 crc kubenswrapper[4919]: E0109 13:49:01.625183 4919 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 09 13:49:01 crc kubenswrapper[4919]: E0109 13:49:01.625245 4919 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/f55583f6-0518-4977-89a9-e4f12b0eae89-etc-swift podName:f55583f6-0518-4977-89a9-e4f12b0eae89 nodeName:}" failed. No retries permitted until 2026-01-09 13:49:09.625225463 +0000 UTC m=+1129.173065123 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f55583f6-0518-4977-89a9-e4f12b0eae89-etc-swift") pod "swift-storage-0" (UID: "f55583f6-0518-4977-89a9-e4f12b0eae89") : configmap "swift-ring-files" not found Jan 09 13:49:02 crc kubenswrapper[4919]: I0109 13:49:02.490376 4919 generic.go:334] "Generic (PLEG): container finished" podID="0d94fb93-5c21-4357-8efd-48b8285d4ad9" containerID="0d1162bd1f721137f911208bd44b21b01d8830967bcd1a09377bd62b195657d7" exitCode=0 Jan 09 13:49:02 crc kubenswrapper[4919]: I0109 13:49:02.490470 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-69wcc" event={"ID":"0d94fb93-5c21-4357-8efd-48b8285d4ad9","Type":"ContainerDied","Data":"0d1162bd1f721137f911208bd44b21b01d8830967bcd1a09377bd62b195657d7"} Jan 09 13:49:02 crc kubenswrapper[4919]: I0109 13:49:02.492974 4919 generic.go:334] "Generic (PLEG): container finished" podID="f403837b-7672-496e-bdf1-9334074246bd" containerID="69dd38811a1173c5c197d07abe7e1bbff59c4c62832138201846f5d4382975c0" exitCode=0 Jan 09 13:49:02 crc kubenswrapper[4919]: I0109 13:49:02.493033 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-wdlhd" event={"ID":"f403837b-7672-496e-bdf1-9334074246bd","Type":"ContainerDied","Data":"69dd38811a1173c5c197d07abe7e1bbff59c4c62832138201846f5d4382975c0"} Jan 09 13:49:02 crc kubenswrapper[4919]: I0109 13:49:02.495723 4919 generic.go:334] "Generic (PLEG): container finished" podID="fbc589b0-3f7a-45c0-9fcd-1f69573d79c9" containerID="5d0022ac857bc93d078cd1306472d3c440f2053e8cbaee0bf8a8f6f0da1eee88" exitCode=0 Jan 09 13:49:02 crc kubenswrapper[4919]: I0109 13:49:02.495769 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f265-account-create-update-zz4xh" event={"ID":"fbc589b0-3f7a-45c0-9fcd-1f69573d79c9","Type":"ContainerDied","Data":"5d0022ac857bc93d078cd1306472d3c440f2053e8cbaee0bf8a8f6f0da1eee88"} Jan 09 13:49:02 crc kubenswrapper[4919]: I0109 13:49:02.495853 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f265-account-create-update-zz4xh" event={"ID":"fbc589b0-3f7a-45c0-9fcd-1f69573d79c9","Type":"ContainerStarted","Data":"f3909d7d2b74ab39ebb95c787270db8406d8360b4b1e3776a80e5510e0cd239a"} Jan 09 13:49:02 crc kubenswrapper[4919]: I0109 13:49:02.498901 4919 generic.go:334] "Generic (PLEG): container finished" podID="73a3d3cb-e4d2-4d33-8c46-27b6afa433fa" containerID="8f151431ca255d2c6528d9ba76af0fc54cfcf4ab5f8a1e66bff6682d28fb8fe1" exitCode=0 Jan 09 13:49:02 crc kubenswrapper[4919]: I0109 13:49:02.498980 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-dbd0-account-create-update-q42z2" event={"ID":"73a3d3cb-e4d2-4d33-8c46-27b6afa433fa","Type":"ContainerDied","Data":"8f151431ca255d2c6528d9ba76af0fc54cfcf4ab5f8a1e66bff6682d28fb8fe1"} Jan 09 13:49:02 crc kubenswrapper[4919]: I0109 13:49:02.919284 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-69wcc" Jan 09 13:49:02 crc kubenswrapper[4919]: I0109 13:49:02.925134 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-lqc74" Jan 09 13:49:03 crc kubenswrapper[4919]: I0109 13:49:03.055578 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d94fb93-5c21-4357-8efd-48b8285d4ad9-operator-scripts\") pod \"0d94fb93-5c21-4357-8efd-48b8285d4ad9\" (UID: \"0d94fb93-5c21-4357-8efd-48b8285d4ad9\") " Jan 09 13:49:03 crc kubenswrapper[4919]: I0109 13:49:03.055642 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e410b2e-f807-4330-a707-340e589cff69-operator-scripts\") pod \"0e410b2e-f807-4330-a707-340e589cff69\" (UID: \"0e410b2e-f807-4330-a707-340e589cff69\") " Jan 09 13:49:03 crc kubenswrapper[4919]: I0109 13:49:03.055860 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5g45\" (UniqueName: \"kubernetes.io/projected/0d94fb93-5c21-4357-8efd-48b8285d4ad9-kube-api-access-r5g45\") pod \"0d94fb93-5c21-4357-8efd-48b8285d4ad9\" (UID: \"0d94fb93-5c21-4357-8efd-48b8285d4ad9\") " Jan 09 13:49:03 crc kubenswrapper[4919]: I0109 13:49:03.055939 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4f5p\" (UniqueName: \"kubernetes.io/projected/0e410b2e-f807-4330-a707-340e589cff69-kube-api-access-j4f5p\") pod \"0e410b2e-f807-4330-a707-340e589cff69\" (UID: \"0e410b2e-f807-4330-a707-340e589cff69\") " Jan 09 13:49:03 crc kubenswrapper[4919]: I0109 13:49:03.058390 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d94fb93-5c21-4357-8efd-48b8285d4ad9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0d94fb93-5c21-4357-8efd-48b8285d4ad9" (UID: "0d94fb93-5c21-4357-8efd-48b8285d4ad9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:49:03 crc kubenswrapper[4919]: I0109 13:49:03.058785 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e410b2e-f807-4330-a707-340e589cff69-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0e410b2e-f807-4330-a707-340e589cff69" (UID: "0e410b2e-f807-4330-a707-340e589cff69"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:49:03 crc kubenswrapper[4919]: I0109 13:49:03.064151 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d94fb93-5c21-4357-8efd-48b8285d4ad9-kube-api-access-r5g45" (OuterVolumeSpecName: "kube-api-access-r5g45") pod "0d94fb93-5c21-4357-8efd-48b8285d4ad9" (UID: "0d94fb93-5c21-4357-8efd-48b8285d4ad9"). InnerVolumeSpecName "kube-api-access-r5g45". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:49:03 crc kubenswrapper[4919]: I0109 13:49:03.064613 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e410b2e-f807-4330-a707-340e589cff69-kube-api-access-j4f5p" (OuterVolumeSpecName: "kube-api-access-j4f5p") pod "0e410b2e-f807-4330-a707-340e589cff69" (UID: "0e410b2e-f807-4330-a707-340e589cff69"). InnerVolumeSpecName "kube-api-access-j4f5p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:49:03 crc kubenswrapper[4919]: I0109 13:49:03.157666 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4f5p\" (UniqueName: \"kubernetes.io/projected/0e410b2e-f807-4330-a707-340e589cff69-kube-api-access-j4f5p\") on node \"crc\" DevicePath \"\"" Jan 09 13:49:03 crc kubenswrapper[4919]: I0109 13:49:03.158121 4919 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d94fb93-5c21-4357-8efd-48b8285d4ad9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 13:49:03 crc kubenswrapper[4919]: I0109 13:49:03.158186 4919 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e410b2e-f807-4330-a707-340e589cff69-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 13:49:03 crc kubenswrapper[4919]: I0109 13:49:03.158325 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5g45\" (UniqueName: \"kubernetes.io/projected/0d94fb93-5c21-4357-8efd-48b8285d4ad9-kube-api-access-r5g45\") on node \"crc\" DevicePath \"\"" Jan 09 13:49:03 crc kubenswrapper[4919]: I0109 13:49:03.508084 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-69wcc" event={"ID":"0d94fb93-5c21-4357-8efd-48b8285d4ad9","Type":"ContainerDied","Data":"ead7a676fa70346db7b3d10f141632bc8da72d49b6ff1691bf0ef809bc4330a8"} Jan 09 13:49:03 crc kubenswrapper[4919]: I0109 13:49:03.508134 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ead7a676fa70346db7b3d10f141632bc8da72d49b6ff1691bf0ef809bc4330a8" Jan 09 13:49:03 crc kubenswrapper[4919]: I0109 13:49:03.508188 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-69wcc" Jan 09 13:49:03 crc kubenswrapper[4919]: I0109 13:49:03.511366 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lqc74" Jan 09 13:49:03 crc kubenswrapper[4919]: I0109 13:49:03.514164 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lqc74" event={"ID":"0e410b2e-f807-4330-a707-340e589cff69","Type":"ContainerDied","Data":"91a9c292794ab1785447134ec33698b5f102997fe7ab2e88e6810c832adf41b1"} Jan 09 13:49:03 crc kubenswrapper[4919]: I0109 13:49:03.514203 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91a9c292794ab1785447134ec33698b5f102997fe7ab2e88e6810c832adf41b1" Jan 09 13:49:03 crc kubenswrapper[4919]: I0109 13:49:03.875414 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-wdlhd" Jan 09 13:49:03 crc kubenswrapper[4919]: I0109 13:49:03.948348 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7c554cfdf-lhw5c" Jan 09 13:49:03 crc kubenswrapper[4919]: I0109 13:49:03.970860 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-dbd0-account-create-update-q42z2" Jan 09 13:49:03 crc kubenswrapper[4919]: I0109 13:49:03.982858 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8x2w5\" (UniqueName: \"kubernetes.io/projected/f403837b-7672-496e-bdf1-9334074246bd-kube-api-access-8x2w5\") pod \"f403837b-7672-496e-bdf1-9334074246bd\" (UID: \"f403837b-7672-496e-bdf1-9334074246bd\") " Jan 09 13:49:03 crc kubenswrapper[4919]: I0109 13:49:03.983097 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f403837b-7672-496e-bdf1-9334074246bd-operator-scripts\") pod \"f403837b-7672-496e-bdf1-9334074246bd\" (UID: \"f403837b-7672-496e-bdf1-9334074246bd\") " Jan 09 13:49:03 crc kubenswrapper[4919]: I0109 13:49:03.983863 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f403837b-7672-496e-bdf1-9334074246bd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f403837b-7672-496e-bdf1-9334074246bd" (UID: "f403837b-7672-496e-bdf1-9334074246bd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:49:03 crc kubenswrapper[4919]: I0109 13:49:03.985061 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f265-account-create-update-zz4xh" Jan 09 13:49:03 crc kubenswrapper[4919]: I0109 13:49:03.986573 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f403837b-7672-496e-bdf1-9334074246bd-kube-api-access-8x2w5" (OuterVolumeSpecName: "kube-api-access-8x2w5") pod "f403837b-7672-496e-bdf1-9334074246bd" (UID: "f403837b-7672-496e-bdf1-9334074246bd"). InnerVolumeSpecName "kube-api-access-8x2w5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:49:04 crc kubenswrapper[4919]: I0109 13:49:04.084254 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svcxr\" (UniqueName: \"kubernetes.io/projected/73a3d3cb-e4d2-4d33-8c46-27b6afa433fa-kube-api-access-svcxr\") pod \"73a3d3cb-e4d2-4d33-8c46-27b6afa433fa\" (UID: \"73a3d3cb-e4d2-4d33-8c46-27b6afa433fa\") " Jan 09 13:49:04 crc kubenswrapper[4919]: I0109 13:49:04.084367 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9dds\" (UniqueName: \"kubernetes.io/projected/fbc589b0-3f7a-45c0-9fcd-1f69573d79c9-kube-api-access-l9dds\") pod \"fbc589b0-3f7a-45c0-9fcd-1f69573d79c9\" (UID: \"fbc589b0-3f7a-45c0-9fcd-1f69573d79c9\") " Jan 09 13:49:04 crc kubenswrapper[4919]: I0109 13:49:04.084434 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73a3d3cb-e4d2-4d33-8c46-27b6afa433fa-operator-scripts\") pod \"73a3d3cb-e4d2-4d33-8c46-27b6afa433fa\" (UID: \"73a3d3cb-e4d2-4d33-8c46-27b6afa433fa\") " Jan 09 13:49:04 crc kubenswrapper[4919]: I0109 13:49:04.084534 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fbc589b0-3f7a-45c0-9fcd-1f69573d79c9-operator-scripts\") pod \"fbc589b0-3f7a-45c0-9fcd-1f69573d79c9\" (UID: \"fbc589b0-3f7a-45c0-9fcd-1f69573d79c9\") " Jan 09 13:49:04 crc kubenswrapper[4919]: I0109 13:49:04.085178 4919 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f403837b-7672-496e-bdf1-9334074246bd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 13:49:04 crc kubenswrapper[4919]: I0109 13:49:04.085203 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8x2w5\" (UniqueName: \"kubernetes.io/projected/f403837b-7672-496e-bdf1-9334074246bd-kube-api-access-8x2w5\") on node \"crc\" DevicePath \"\"" Jan 09 13:49:04 crc kubenswrapper[4919]: I0109 13:49:04.085968 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73a3d3cb-e4d2-4d33-8c46-27b6afa433fa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "73a3d3cb-e4d2-4d33-8c46-27b6afa433fa" (UID: "73a3d3cb-e4d2-4d33-8c46-27b6afa433fa"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:49:04 crc kubenswrapper[4919]: I0109 13:49:04.086950 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbc589b0-3f7a-45c0-9fcd-1f69573d79c9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fbc589b0-3f7a-45c0-9fcd-1f69573d79c9" (UID: "fbc589b0-3f7a-45c0-9fcd-1f69573d79c9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:49:04 crc kubenswrapper[4919]: I0109 13:49:04.086966 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73a3d3cb-e4d2-4d33-8c46-27b6afa433fa-kube-api-access-svcxr" (OuterVolumeSpecName: "kube-api-access-svcxr") pod "73a3d3cb-e4d2-4d33-8c46-27b6afa433fa" (UID: "73a3d3cb-e4d2-4d33-8c46-27b6afa433fa"). InnerVolumeSpecName "kube-api-access-svcxr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:49:04 crc kubenswrapper[4919]: I0109 13:49:04.087696 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbc589b0-3f7a-45c0-9fcd-1f69573d79c9-kube-api-access-l9dds" (OuterVolumeSpecName: "kube-api-access-l9dds") pod "fbc589b0-3f7a-45c0-9fcd-1f69573d79c9" (UID: "fbc589b0-3f7a-45c0-9fcd-1f69573d79c9"). InnerVolumeSpecName "kube-api-access-l9dds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:49:04 crc kubenswrapper[4919]: I0109 13:49:04.187158 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svcxr\" (UniqueName: \"kubernetes.io/projected/73a3d3cb-e4d2-4d33-8c46-27b6afa433fa-kube-api-access-svcxr\") on node \"crc\" DevicePath \"\"" Jan 09 13:49:04 crc kubenswrapper[4919]: I0109 13:49:04.187201 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9dds\" (UniqueName: \"kubernetes.io/projected/fbc589b0-3f7a-45c0-9fcd-1f69573d79c9-kube-api-access-l9dds\") on node \"crc\" DevicePath \"\"" Jan 09 13:49:04 crc kubenswrapper[4919]: I0109 13:49:04.187234 4919 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73a3d3cb-e4d2-4d33-8c46-27b6afa433fa-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 13:49:04 crc kubenswrapper[4919]: I0109 13:49:04.187246 4919 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fbc589b0-3f7a-45c0-9fcd-1f69573d79c9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 13:49:04 crc kubenswrapper[4919]: I0109 13:49:04.375423 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67fdf7998c-ptnj7" Jan 09 13:49:04 crc kubenswrapper[4919]: I0109 13:49:04.461066 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c554cfdf-lhw5c"] Jan 09 13:49:04 crc kubenswrapper[4919]: I0109 13:49:04.523150 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-wdlhd" event={"ID":"f403837b-7672-496e-bdf1-9334074246bd","Type":"ContainerDied","Data":"131632be37cbd87c64c80b7f5b0e9beb187fb36815668279a314c1dc0ea73c7c"} Jan 09 13:49:04 crc kubenswrapper[4919]: I0109 13:49:04.523230 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="131632be37cbd87c64c80b7f5b0e9beb187fb36815668279a314c1dc0ea73c7c" Jan 09 13:49:04 crc kubenswrapper[4919]: I0109 13:49:04.523181 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-wdlhd" Jan 09 13:49:04 crc kubenswrapper[4919]: I0109 13:49:04.524525 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f265-account-create-update-zz4xh"
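The block above is the kubelet's volume reconciler tearing down the volumes of the finished db-create/account-create pods: each volume gets an "UnmountVolume started" entry, the plugin-specific TearDown removes the files materialized under /var/lib/kubelet/pods/<uid>/volumes/, and the volume is finally reported "detached" with an empty DevicePath, since configmap and projected volumes are file-based rather than block devices. A minimal conceptual sketch of that ordering, with hypothetical names (this is not kubelet source):

package main

import "fmt"

// volume is an illustrative stand-in for the reconciler's view of a mounted
// volume; the type and function names here are hypothetical, not kubelet's.
type volume struct{ name, plugin string }

// reconcile mirrors the ordering visible in the log: "UnmountVolume started",
// then the plugin-specific TearDown, then "Volume detached" with an empty
// DevicePath (configmap/projected volumes are files, not block devices).
func reconcile(mounted []volume, desired map[string]bool) {
	for _, v := range mounted {
		if desired[v.name] {
			continue // still referenced by a pod; leave it mounted
		}
		fmt.Printf("UnmountVolume started for volume %q\n", v.name)
		fmt.Printf("UnmountVolume.TearDown succeeded for volume %q (%s)\n", v.name, v.plugin)
		fmt.Printf("Volume detached for volume %q DevicePath %q\n", v.name, "")
	}
}

func main() {
	mounted := []volume{{"operator-scripts", "kubernetes.io/configmap"}}
	reconcile(mounted, map[string]bool{}) // empty desired state: the pod is gone
}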
Jan 09 13:49:04 crc kubenswrapper[4919]: I0109 13:49:04.524540 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f265-account-create-update-zz4xh" event={"ID":"fbc589b0-3f7a-45c0-9fcd-1f69573d79c9","Type":"ContainerDied","Data":"f3909d7d2b74ab39ebb95c787270db8406d8360b4b1e3776a80e5510e0cd239a"} Jan 09 13:49:04 crc kubenswrapper[4919]: I0109 13:49:04.524659 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3909d7d2b74ab39ebb95c787270db8406d8360b4b1e3776a80e5510e0cd239a" Jan 09 13:49:04 crc kubenswrapper[4919]: I0109 13:49:04.526089 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-dbd0-account-create-update-q42z2" event={"ID":"73a3d3cb-e4d2-4d33-8c46-27b6afa433fa","Type":"ContainerDied","Data":"09bdd58d8829ee4bad94198ca7d092868e83404d6ef92ccfc97c1a548fecbb0d"} Jan 09 13:49:04 crc kubenswrapper[4919]: I0109 13:49:04.526117 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09bdd58d8829ee4bad94198ca7d092868e83404d6ef92ccfc97c1a548fecbb0d" Jan 09 13:49:04 crc kubenswrapper[4919]: I0109 13:49:04.526102 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-dbd0-account-create-update-q42z2" Jan 09 13:49:04 crc kubenswrapper[4919]: I0109 13:49:04.526254 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7c554cfdf-lhw5c" podUID="c5055335-3873-4f7a-87d3-bab319a9839c" containerName="dnsmasq-dns" containerID="cri-o://2e191cdc43721217f9b7f236c9a44e1dd98682e7e507c414263e9dd413901a58" gracePeriod=10 Jan 09 13:49:06 crc kubenswrapper[4919]: I0109 13:49:06.107229 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-wsjrx"] Jan 09 13:49:06 crc kubenswrapper[4919]: E0109 13:49:06.108171 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73a3d3cb-e4d2-4d33-8c46-27b6afa433fa" containerName="mariadb-account-create-update" Jan 09 13:49:06 crc kubenswrapper[4919]: I0109 13:49:06.108201 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="73a3d3cb-e4d2-4d33-8c46-27b6afa433fa" containerName="mariadb-account-create-update" Jan 09 13:49:06 crc kubenswrapper[4919]: E0109 13:49:06.108312 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d94fb93-5c21-4357-8efd-48b8285d4ad9" containerName="mariadb-database-create" Jan 09 13:49:06 crc kubenswrapper[4919]: I0109 13:49:06.108322 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d94fb93-5c21-4357-8efd-48b8285d4ad9" containerName="mariadb-database-create" Jan 09 13:49:06 crc kubenswrapper[4919]: E0109 13:49:06.108357 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e410b2e-f807-4330-a707-340e589cff69" containerName="mariadb-account-create-update" Jan 09 13:49:06 crc kubenswrapper[4919]: I0109 13:49:06.108385 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e410b2e-f807-4330-a707-340e589cff69" containerName="mariadb-account-create-update" Jan 09 13:49:06 crc kubenswrapper[4919]: E0109 13:49:06.108422 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbc589b0-3f7a-45c0-9fcd-1f69573d79c9" containerName="mariadb-account-create-update" Jan 09 13:49:06 crc kubenswrapper[4919]: I0109 13:49:06.108431 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbc589b0-3f7a-45c0-9fcd-1f69573d79c9" containerName="mariadb-account-create-update"
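The cpu_manager.go / state_mem.go lines above (and the memory_manager.go lines that follow) show the kubelet purging stale per-container CPU and memory assignments left behind by the completed mariadb jobs before it admits openstack/glance-db-sync-wsjrx. That CPU-manager state is checkpointed to disk; a small sketch for inspecting it, assuming the upstream default path /var/lib/kubelet/cpu_manager_state and the JSON field names shown below (verify both against the kubelet version in use):

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// cpuManagerState mirrors the JSON checkpoint the kubelet keeps for its CPU
// manager. Field names follow upstream kubelet defaults; treat as assumptions.
type cpuManagerState struct {
	PolicyName    string                       `json:"policyName"`
	DefaultCPUSet string                       `json:"defaultCpuSet"`
	Entries       map[string]map[string]string `json:"entries,omitempty"` // podUID -> container -> cpuset
	Checksum      uint64                       `json:"checksum"`
}

func main() {
	raw, err := os.ReadFile("/var/lib/kubelet/cpu_manager_state")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	var st cpuManagerState
	if err := json.Unmarshal(raw, &st); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Printf("policy=%s defaultCpuSet=%q trackedPods=%d\n",
		st.PolicyName, st.DefaultCPUSet, len(st.Entries))
}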
Jan 09 13:49:06 crc kubenswrapper[4919]: E0109 13:49:06.108492 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f403837b-7672-496e-bdf1-9334074246bd" containerName="mariadb-database-create" Jan 09 13:49:06 crc kubenswrapper[4919]: I0109 13:49:06.108501 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="f403837b-7672-496e-bdf1-9334074246bd" containerName="mariadb-database-create" Jan 09 13:49:06 crc kubenswrapper[4919]: I0109 13:49:06.108876 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e410b2e-f807-4330-a707-340e589cff69" containerName="mariadb-account-create-update" Jan 09 13:49:06 crc kubenswrapper[4919]: I0109 13:49:06.108940 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="f403837b-7672-496e-bdf1-9334074246bd" containerName="mariadb-database-create" Jan 09 13:49:06 crc kubenswrapper[4919]: I0109 13:49:06.108956 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="73a3d3cb-e4d2-4d33-8c46-27b6afa433fa" containerName="mariadb-account-create-update" Jan 09 13:49:06 crc kubenswrapper[4919]: I0109 13:49:06.108987 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbc589b0-3f7a-45c0-9fcd-1f69573d79c9" containerName="mariadb-account-create-update" Jan 09 13:49:06 crc kubenswrapper[4919]: I0109 13:49:06.109019 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d94fb93-5c21-4357-8efd-48b8285d4ad9" containerName="mariadb-database-create" Jan 09 13:49:06 crc kubenswrapper[4919]: I0109 13:49:06.110110 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-wsjrx" Jan 09 13:49:06 crc kubenswrapper[4919]: I0109 13:49:06.112352 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-qpdkt" Jan 09 13:49:06 crc kubenswrapper[4919]: I0109 13:49:06.116701 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 09 13:49:06 crc kubenswrapper[4919]: I0109 13:49:06.125331 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-wsjrx"] Jan 09 13:49:06 crc kubenswrapper[4919]: I0109 13:49:06.222751 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15490209-86af-4f77-8103-27d097279b7d-config-data\") pod \"glance-db-sync-wsjrx\" (UID: \"15490209-86af-4f77-8103-27d097279b7d\") " pod="openstack/glance-db-sync-wsjrx" Jan 09 13:49:06 crc kubenswrapper[4919]: I0109 13:49:06.222860 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpmnj\" (UniqueName: \"kubernetes.io/projected/15490209-86af-4f77-8103-27d097279b7d-kube-api-access-rpmnj\") pod \"glance-db-sync-wsjrx\" (UID: \"15490209-86af-4f77-8103-27d097279b7d\") " pod="openstack/glance-db-sync-wsjrx" Jan 09 13:49:06 crc kubenswrapper[4919]: I0109 13:49:06.222928 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/15490209-86af-4f77-8103-27d097279b7d-db-sync-config-data\") pod \"glance-db-sync-wsjrx\" (UID: \"15490209-86af-4f77-8103-27d097279b7d\") " pod="openstack/glance-db-sync-wsjrx" Jan 09 13:49:06 crc kubenswrapper[4919]: I0109 13:49:06.222977 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15490209-86af-4f77-8103-27d097279b7d-combined-ca-bundle\") pod \"glance-db-sync-wsjrx\" (UID: \"15490209-86af-4f77-8103-27d097279b7d\") " pod="openstack/glance-db-sync-wsjrx" Jan 09 13:49:06 crc kubenswrapper[4919]: I0109 13:49:06.324162 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15490209-86af-4f77-8103-27d097279b7d-combined-ca-bundle\") pod \"glance-db-sync-wsjrx\" (UID: \"15490209-86af-4f77-8103-27d097279b7d\") " pod="openstack/glance-db-sync-wsjrx" Jan 09 13:49:06 crc kubenswrapper[4919]: I0109 13:49:06.324293 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15490209-86af-4f77-8103-27d097279b7d-config-data\") pod \"glance-db-sync-wsjrx\" (UID: \"15490209-86af-4f77-8103-27d097279b7d\") " pod="openstack/glance-db-sync-wsjrx" Jan 09 13:49:06 crc kubenswrapper[4919]: I0109 13:49:06.324370 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpmnj\" (UniqueName: \"kubernetes.io/projected/15490209-86af-4f77-8103-27d097279b7d-kube-api-access-rpmnj\") pod \"glance-db-sync-wsjrx\" (UID: \"15490209-86af-4f77-8103-27d097279b7d\") " pod="openstack/glance-db-sync-wsjrx" Jan 09 13:49:06 crc kubenswrapper[4919]: I0109 13:49:06.324414 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/15490209-86af-4f77-8103-27d097279b7d-db-sync-config-data\") pod \"glance-db-sync-wsjrx\" (UID: \"15490209-86af-4f77-8103-27d097279b7d\") " pod="openstack/glance-db-sync-wsjrx" Jan 09 13:49:06 crc kubenswrapper[4919]: I0109 13:49:06.330009 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15490209-86af-4f77-8103-27d097279b7d-config-data\") pod \"glance-db-sync-wsjrx\" (UID: \"15490209-86af-4f77-8103-27d097279b7d\") " pod="openstack/glance-db-sync-wsjrx" Jan 09 13:49:06 crc kubenswrapper[4919]: I0109 13:49:06.330032 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/15490209-86af-4f77-8103-27d097279b7d-db-sync-config-data\") pod \"glance-db-sync-wsjrx\" (UID: \"15490209-86af-4f77-8103-27d097279b7d\") " pod="openstack/glance-db-sync-wsjrx" Jan 09 13:49:06 crc kubenswrapper[4919]: I0109 13:49:06.331666 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15490209-86af-4f77-8103-27d097279b7d-combined-ca-bundle\") pod \"glance-db-sync-wsjrx\" (UID: \"15490209-86af-4f77-8103-27d097279b7d\") " pod="openstack/glance-db-sync-wsjrx" Jan 09 13:49:06 crc kubenswrapper[4919]: I0109 13:49:06.346121 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpmnj\" (UniqueName: \"kubernetes.io/projected/15490209-86af-4f77-8103-27d097279b7d-kube-api-access-rpmnj\") pod \"glance-db-sync-wsjrx\" (UID: \"15490209-86af-4f77-8103-27d097279b7d\") " pod="openstack/glance-db-sync-wsjrx" Jan 09 13:49:06 crc kubenswrapper[4919]: I0109 13:49:06.428237 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-wsjrx" Jan 09 13:49:06 crc kubenswrapper[4919]: I0109 13:49:06.563398 4919 generic.go:334] "Generic (PLEG): container finished" podID="c5055335-3873-4f7a-87d3-bab319a9839c" containerID="2e191cdc43721217f9b7f236c9a44e1dd98682e7e507c414263e9dd413901a58" exitCode=0 Jan 09 13:49:06 crc kubenswrapper[4919]: I0109 13:49:06.563444 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c554cfdf-lhw5c" event={"ID":"c5055335-3873-4f7a-87d3-bab319a9839c","Type":"ContainerDied","Data":"2e191cdc43721217f9b7f236c9a44e1dd98682e7e507c414263e9dd413901a58"} Jan 09 13:49:06 crc kubenswrapper[4919]: I0109 13:49:06.878422 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c554cfdf-lhw5c" Jan 09 13:49:07 crc kubenswrapper[4919]: I0109 13:49:07.035277 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5055335-3873-4f7a-87d3-bab319a9839c-dns-svc\") pod \"c5055335-3873-4f7a-87d3-bab319a9839c\" (UID: \"c5055335-3873-4f7a-87d3-bab319a9839c\") " Jan 09 13:49:07 crc kubenswrapper[4919]: I0109 13:49:07.035387 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5055335-3873-4f7a-87d3-bab319a9839c-ovsdbserver-nb\") pod \"c5055335-3873-4f7a-87d3-bab319a9839c\" (UID: \"c5055335-3873-4f7a-87d3-bab319a9839c\") " Jan 09 13:49:07 crc kubenswrapper[4919]: I0109 13:49:07.035455 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sf7lw\" (UniqueName: \"kubernetes.io/projected/c5055335-3873-4f7a-87d3-bab319a9839c-kube-api-access-sf7lw\") pod \"c5055335-3873-4f7a-87d3-bab319a9839c\" (UID: \"c5055335-3873-4f7a-87d3-bab319a9839c\") " Jan 09 13:49:07 crc kubenswrapper[4919]: I0109 13:49:07.035497 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5055335-3873-4f7a-87d3-bab319a9839c-config\") pod \"c5055335-3873-4f7a-87d3-bab319a9839c\" (UID: \"c5055335-3873-4f7a-87d3-bab319a9839c\") " Jan 09 13:49:07 crc kubenswrapper[4919]: I0109 13:49:07.039341 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5055335-3873-4f7a-87d3-bab319a9839c-kube-api-access-sf7lw" (OuterVolumeSpecName: "kube-api-access-sf7lw") pod "c5055335-3873-4f7a-87d3-bab319a9839c" (UID: "c5055335-3873-4f7a-87d3-bab319a9839c"). InnerVolumeSpecName "kube-api-access-sf7lw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:49:07 crc kubenswrapper[4919]: I0109 13:49:07.071694 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5055335-3873-4f7a-87d3-bab319a9839c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c5055335-3873-4f7a-87d3-bab319a9839c" (UID: "c5055335-3873-4f7a-87d3-bab319a9839c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:49:07 crc kubenswrapper[4919]: I0109 13:49:07.074542 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5055335-3873-4f7a-87d3-bab319a9839c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c5055335-3873-4f7a-87d3-bab319a9839c" (UID: "c5055335-3873-4f7a-87d3-bab319a9839c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:49:07 crc kubenswrapper[4919]: I0109 13:49:07.081728 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5055335-3873-4f7a-87d3-bab319a9839c-config" (OuterVolumeSpecName: "config") pod "c5055335-3873-4f7a-87d3-bab319a9839c" (UID: "c5055335-3873-4f7a-87d3-bab319a9839c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:49:07 crc kubenswrapper[4919]: I0109 13:49:07.109446 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-wsjrx"] Jan 09 13:49:07 crc kubenswrapper[4919]: W0109 13:49:07.111286 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15490209_86af_4f77_8103_27d097279b7d.slice/crio-ff0ea4e9fde8557155a1b211da65fb8be66cbf84d15a378962d6bf4fdd200ec2 WatchSource:0}: Error finding container ff0ea4e9fde8557155a1b211da65fb8be66cbf84d15a378962d6bf4fdd200ec2: Status 404 returned error can't find the container with id ff0ea4e9fde8557155a1b211da65fb8be66cbf84d15a378962d6bf4fdd200ec2 Jan 09 13:49:07 crc kubenswrapper[4919]: I0109 13:49:07.137304 4919 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5055335-3873-4f7a-87d3-bab319a9839c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 09 13:49:07 crc kubenswrapper[4919]: I0109 13:49:07.137342 4919 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5055335-3873-4f7a-87d3-bab319a9839c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 09 13:49:07 crc kubenswrapper[4919]: I0109 13:49:07.137355 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sf7lw\" (UniqueName: \"kubernetes.io/projected/c5055335-3873-4f7a-87d3-bab319a9839c-kube-api-access-sf7lw\") on node \"crc\" DevicePath \"\"" Jan 09 13:49:07 crc kubenswrapper[4919]: I0109 13:49:07.137366 4919 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5055335-3873-4f7a-87d3-bab319a9839c-config\") on node \"crc\" DevicePath \"\"" Jan 09 13:49:07 crc kubenswrapper[4919]: I0109 13:49:07.572266 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-wsjrx" event={"ID":"15490209-86af-4f77-8103-27d097279b7d","Type":"ContainerStarted","Data":"ff0ea4e9fde8557155a1b211da65fb8be66cbf84d15a378962d6bf4fdd200ec2"} Jan 09 13:49:07 crc kubenswrapper[4919]: I0109 13:49:07.574092 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c554cfdf-lhw5c" event={"ID":"c5055335-3873-4f7a-87d3-bab319a9839c","Type":"ContainerDied","Data":"7d9e4e62b373dad2dddb647bea79732a603273d97dab1b6298001998fe01614d"} Jan 09 13:49:07 crc kubenswrapper[4919]: I0109 13:49:07.574231 4919 scope.go:117] "RemoveContainer" containerID="2e191cdc43721217f9b7f236c9a44e1dd98682e7e507c414263e9dd413901a58" Jan 09 13:49:07 crc kubenswrapper[4919]: I0109 13:49:07.574406 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c554cfdf-lhw5c" Jan 09 13:49:07 crc kubenswrapper[4919]: I0109 13:49:07.592580 4919 scope.go:117] "RemoveContainer" containerID="528e884224d962d5c404dd72ee975e02cb7da0c07e6581a25d8a948cc092e8af" Jan 09 13:49:07 crc kubenswrapper[4919]: I0109 13:49:07.611112 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c554cfdf-lhw5c"] Jan 09 13:49:07 crc kubenswrapper[4919]: I0109 13:49:07.619008 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c554cfdf-lhw5c"] Jan 09 13:49:08 crc kubenswrapper[4919]: I0109 13:49:08.764747 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5055335-3873-4f7a-87d3-bab319a9839c" path="/var/lib/kubelet/pods/c5055335-3873-4f7a-87d3-bab319a9839c/volumes" Jan 09 13:49:08 crc kubenswrapper[4919]: I0109 13:49:08.992238 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-lqc74"] Jan 09 13:49:08 crc kubenswrapper[4919]: I0109 13:49:08.997635 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-lqc74"] Jan 09 13:49:09 crc kubenswrapper[4919]: I0109 13:49:09.680966 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f55583f6-0518-4977-89a9-e4f12b0eae89-etc-swift\") pod \"swift-storage-0\" (UID: \"f55583f6-0518-4977-89a9-e4f12b0eae89\") " pod="openstack/swift-storage-0" Jan 09 13:49:09 crc kubenswrapper[4919]: E0109 13:49:09.681034 4919 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 09 13:49:09 crc kubenswrapper[4919]: E0109 13:49:09.681483 4919 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 09 13:49:09 crc kubenswrapper[4919]: E0109 13:49:09.681535 4919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f55583f6-0518-4977-89a9-e4f12b0eae89-etc-swift podName:f55583f6-0518-4977-89a9-e4f12b0eae89 nodeName:}" failed. No retries permitted until 2026-01-09 13:49:25.681521269 +0000 UTC m=+1145.229360719 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f55583f6-0518-4977-89a9-e4f12b0eae89-etc-swift") pod "swift-storage-0" (UID: "f55583f6-0518-4977-89a9-e4f12b0eae89") : configmap "swift-ring-files" not found Jan 09 13:49:10 crc kubenswrapper[4919]: I0109 13:49:10.764630 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e410b2e-f807-4330-a707-340e589cff69" path="/var/lib/kubelet/pods/0e410b2e-f807-4330-a707-340e589cff69/volumes" Jan 09 13:49:12 crc kubenswrapper[4919]: I0109 13:49:12.624343 4919 generic.go:334] "Generic (PLEG): container finished" podID="b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7" containerID="90f92e4baabcb14e067fe05047c835af160ea0ac38b0a9a0b2b580ff5596777e" exitCode=0 Jan 09 13:49:12 crc kubenswrapper[4919]: I0109 13:49:12.624434 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-7lmg7" event={"ID":"b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7","Type":"ContainerDied","Data":"90f92e4baabcb14e067fe05047c835af160ea0ac38b0a9a0b2b580ff5596777e"} Jan 09 13:49:13 crc kubenswrapper[4919]: I0109 13:49:13.976833 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-7lmg7"
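The etc-swift failure above is a projected volume sourcing a configmap that does not exist yet: swift-storage-0 was scheduled before swift-ring-rebalance-7lmg7 published swift-ring-files, so MountVolume.SetUp fails and nestedpendingoperations schedules the next attempt with exponential backoff, here a 16s window ending at 13:49:25. A sketch of that retry pattern under assumed constants (the 500ms seed and the cap are illustrative, not confirmed kubelet values):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed seed and cap for the sketch; kubelet's exact constants may differ.
	wait, maxWait := 500*time.Millisecond, 2*time.Minute+2*time.Second
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("attempt %d: configmap \"swift-ring-files\" not found; durationBeforeRetry %v\n",
			attempt, wait)
		wait *= 2 // exponential backoff between MountVolume.SetUp retries
		if wait > maxWait {
			wait = maxWait
		}
	}
	// attempt 6 prints 16s, matching the retry window logged above.
}

By the 13:49:25 retry the ring configmap exists and the same volume mounts cleanly, as the later "MountVolume.SetUp succeeded for volume \"etc-swift\"" entry shows.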
Jan 09 13:49:14 crc kubenswrapper[4919]: I0109 13:49:14.025059 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-fvgzp"] Jan 09 13:49:14 crc kubenswrapper[4919]: E0109 13:49:14.025707 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7" containerName="swift-ring-rebalance" Jan 09 13:49:14 crc kubenswrapper[4919]: I0109 13:49:14.025724 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7" containerName="swift-ring-rebalance" Jan 09 13:49:14 crc kubenswrapper[4919]: E0109 13:49:14.025744 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5055335-3873-4f7a-87d3-bab319a9839c" containerName="dnsmasq-dns" Jan 09 13:49:14 crc kubenswrapper[4919]: I0109 13:49:14.025751 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5055335-3873-4f7a-87d3-bab319a9839c" containerName="dnsmasq-dns" Jan 09 13:49:14 crc kubenswrapper[4919]: E0109 13:49:14.025769 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5055335-3873-4f7a-87d3-bab319a9839c" containerName="init" Jan 09 13:49:14 crc kubenswrapper[4919]: I0109 13:49:14.025775 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5055335-3873-4f7a-87d3-bab319a9839c" containerName="init" Jan 09 13:49:14 crc kubenswrapper[4919]: I0109 13:49:14.025921 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7" containerName="swift-ring-rebalance" Jan 09 13:49:14 crc kubenswrapper[4919]: I0109 13:49:14.025941 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5055335-3873-4f7a-87d3-bab319a9839c" containerName="dnsmasq-dns" Jan 09 13:49:14 crc kubenswrapper[4919]: I0109 13:49:14.026484 4919 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/root-account-create-update-fvgzp" Jan 09 13:49:14 crc kubenswrapper[4919]: I0109 13:49:14.030049 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 09 13:49:14 crc kubenswrapper[4919]: I0109 13:49:14.037965 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-fvgzp"] Jan 09 13:49:14 crc kubenswrapper[4919]: I0109 13:49:14.068852 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8rtr\" (UniqueName: \"kubernetes.io/projected/b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7-kube-api-access-x8rtr\") pod \"b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7\" (UID: \"b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7\") " Jan 09 13:49:14 crc kubenswrapper[4919]: I0109 13:49:14.068916 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7-dispersionconf\") pod \"b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7\" (UID: \"b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7\") " Jan 09 13:49:14 crc kubenswrapper[4919]: I0109 13:49:14.068957 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7-scripts\") pod \"b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7\" (UID: \"b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7\") " Jan 09 13:49:14 crc kubenswrapper[4919]: I0109 13:49:14.068992 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7-ring-data-devices\") pod \"b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7\" (UID: \"b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7\") " Jan 09 13:49:14 crc kubenswrapper[4919]: I0109 13:49:14.069070 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7-etc-swift\") pod \"b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7\" (UID: \"b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7\") " Jan 09 13:49:14 crc kubenswrapper[4919]: I0109 13:49:14.069102 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7-swiftconf\") pod \"b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7\" (UID: \"b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7\") " Jan 09 13:49:14 crc kubenswrapper[4919]: I0109 13:49:14.069199 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7-combined-ca-bundle\") pod \"b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7\" (UID: \"b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7\") " Jan 09 13:49:14 crc kubenswrapper[4919]: I0109 13:49:14.069966 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7" (UID: "b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:49:14 crc kubenswrapper[4919]: I0109 13:49:14.070856 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7" (UID: "b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:49:14 crc kubenswrapper[4919]: I0109 13:49:14.077478 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7-kube-api-access-x8rtr" (OuterVolumeSpecName: "kube-api-access-x8rtr") pod "b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7" (UID: "b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7"). InnerVolumeSpecName "kube-api-access-x8rtr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:49:14 crc kubenswrapper[4919]: I0109 13:49:14.078356 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7" (UID: "b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:49:14 crc kubenswrapper[4919]: I0109 13:49:14.092269 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7" (UID: "b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:49:14 crc kubenswrapper[4919]: I0109 13:49:14.092609 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7" (UID: "b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:49:14 crc kubenswrapper[4919]: I0109 13:49:14.096032 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7-scripts" (OuterVolumeSpecName: "scripts") pod "b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7" (UID: "b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:49:14 crc kubenswrapper[4919]: I0109 13:49:14.122667 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 09 13:49:14 crc kubenswrapper[4919]: I0109 13:49:14.171043 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb00094e-7c2c-45a0-b671-5d62017a9949-operator-scripts\") pod \"root-account-create-update-fvgzp\" (UID: \"cb00094e-7c2c-45a0-b671-5d62017a9949\") " pod="openstack/root-account-create-update-fvgzp" Jan 09 13:49:14 crc kubenswrapper[4919]: I0109 13:49:14.171108 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzbk8\" (UniqueName: \"kubernetes.io/projected/cb00094e-7c2c-45a0-b671-5d62017a9949-kube-api-access-rzbk8\") pod \"root-account-create-update-fvgzp\" (UID: \"cb00094e-7c2c-45a0-b671-5d62017a9949\") " pod="openstack/root-account-create-update-fvgzp" Jan 09 13:49:14 crc kubenswrapper[4919]: I0109 13:49:14.171434 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8rtr\" (UniqueName: \"kubernetes.io/projected/b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7-kube-api-access-x8rtr\") on node \"crc\" DevicePath \"\"" Jan 09 13:49:14 crc kubenswrapper[4919]: I0109 13:49:14.171450 4919 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 09 13:49:14 crc kubenswrapper[4919]: I0109 13:49:14.171463 4919 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 13:49:14 crc kubenswrapper[4919]: I0109 13:49:14.171473 4919 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 09 13:49:14 crc kubenswrapper[4919]: I0109 13:49:14.171481 4919 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 09 13:49:14 crc kubenswrapper[4919]: I0109 13:49:14.171552 4919 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 09 13:49:14 crc kubenswrapper[4919]: I0109 13:49:14.171564 4919 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 13:49:14 crc kubenswrapper[4919]: I0109 13:49:14.273056 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb00094e-7c2c-45a0-b671-5d62017a9949-operator-scripts\") pod \"root-account-create-update-fvgzp\" (UID: \"cb00094e-7c2c-45a0-b671-5d62017a9949\") " pod="openstack/root-account-create-update-fvgzp" Jan 09 13:49:14 crc kubenswrapper[4919]: I0109 13:49:14.273398 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzbk8\" (UniqueName: 
\"kubernetes.io/projected/cb00094e-7c2c-45a0-b671-5d62017a9949-kube-api-access-rzbk8\") pod \"root-account-create-update-fvgzp\" (UID: \"cb00094e-7c2c-45a0-b671-5d62017a9949\") " pod="openstack/root-account-create-update-fvgzp" Jan 09 13:49:14 crc kubenswrapper[4919]: I0109 13:49:14.274859 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb00094e-7c2c-45a0-b671-5d62017a9949-operator-scripts\") pod \"root-account-create-update-fvgzp\" (UID: \"cb00094e-7c2c-45a0-b671-5d62017a9949\") " pod="openstack/root-account-create-update-fvgzp" Jan 09 13:49:14 crc kubenswrapper[4919]: I0109 13:49:14.293424 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzbk8\" (UniqueName: \"kubernetes.io/projected/cb00094e-7c2c-45a0-b671-5d62017a9949-kube-api-access-rzbk8\") pod \"root-account-create-update-fvgzp\" (UID: \"cb00094e-7c2c-45a0-b671-5d62017a9949\") " pod="openstack/root-account-create-update-fvgzp" Jan 09 13:49:14 crc kubenswrapper[4919]: I0109 13:49:14.340760 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-fvgzp" Jan 09 13:49:14 crc kubenswrapper[4919]: I0109 13:49:14.643618 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-7lmg7" Jan 09 13:49:14 crc kubenswrapper[4919]: I0109 13:49:14.643591 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-7lmg7" event={"ID":"b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7","Type":"ContainerDied","Data":"0934ed7279ac2bd8a4fb81ec259cee963324cde4b909fd629666874a8747e39b"} Jan 09 13:49:14 crc kubenswrapper[4919]: I0109 13:49:14.643994 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0934ed7279ac2bd8a4fb81ec259cee963324cde4b909fd629666874a8747e39b" Jan 09 13:49:14 crc kubenswrapper[4919]: I0109 13:49:14.646318 4919 generic.go:334] "Generic (PLEG): container finished" podID="9b80a84d-c869-407b-b3d2-3be828183ae5" containerID="94957647709fe2c44cd5a70c7a2b949171bebfd17eaf58facd52a3975416fc50" exitCode=0 Jan 09 13:49:14 crc kubenswrapper[4919]: I0109 13:49:14.646357 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9b80a84d-c869-407b-b3d2-3be828183ae5","Type":"ContainerDied","Data":"94957647709fe2c44cd5a70c7a2b949171bebfd17eaf58facd52a3975416fc50"} Jan 09 13:49:14 crc kubenswrapper[4919]: I0109 13:49:14.814603 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-fvgzp"] Jan 09 13:49:14 crc kubenswrapper[4919]: W0109 13:49:14.821580 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb00094e_7c2c_45a0_b671_5d62017a9949.slice/crio-7dd244fce2d78fb88aa03cb09d3aea7451e5a1ad738d5c03c9608ce061760c97 WatchSource:0}: Error finding container 7dd244fce2d78fb88aa03cb09d3aea7451e5a1ad738d5c03c9608ce061760c97: Status 404 returned error can't find the container with id 7dd244fce2d78fb88aa03cb09d3aea7451e5a1ad738d5c03c9608ce061760c97 Jan 09 13:49:15 crc kubenswrapper[4919]: I0109 13:49:15.659704 4919 generic.go:334] "Generic (PLEG): container finished" podID="cb00094e-7c2c-45a0-b671-5d62017a9949" containerID="f172beb8557aba10f75eab2d763cd23e6975146759bb22c52abbcd7b15cc1f89" exitCode=0 Jan 09 13:49:15 crc kubenswrapper[4919]: I0109 13:49:15.659810 4919 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/root-account-create-update-fvgzp" event={"ID":"cb00094e-7c2c-45a0-b671-5d62017a9949","Type":"ContainerDied","Data":"f172beb8557aba10f75eab2d763cd23e6975146759bb22c52abbcd7b15cc1f89"} Jan 09 13:49:15 crc kubenswrapper[4919]: I0109 13:49:15.660028 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fvgzp" event={"ID":"cb00094e-7c2c-45a0-b671-5d62017a9949","Type":"ContainerStarted","Data":"7dd244fce2d78fb88aa03cb09d3aea7451e5a1ad738d5c03c9608ce061760c97"} Jan 09 13:49:16 crc kubenswrapper[4919]: I0109 13:49:16.214563 4919 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-n9g6d" podUID="088a3f18-0aab-4042-b674-752c23ed3ac3" containerName="ovn-controller" probeResult="failure" output=< Jan 09 13:49:16 crc kubenswrapper[4919]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 09 13:49:16 crc kubenswrapper[4919]: > Jan 09 13:49:18 crc kubenswrapper[4919]: I0109 13:49:18.809813 4919 generic.go:334] "Generic (PLEG): container finished" podID="ba39e0c2-1804-45a7-9dd1-2c20f229b648" containerID="589d5a36f7cf41ba69a03c03f167fb5b087bd8d2e6a305c6bf38d6413aeba7b7" exitCode=0 Jan 09 13:49:18 crc kubenswrapper[4919]: I0109 13:49:18.809941 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ba39e0c2-1804-45a7-9dd1-2c20f229b648","Type":"ContainerDied","Data":"589d5a36f7cf41ba69a03c03f167fb5b087bd8d2e6a305c6bf38d6413aeba7b7"} Jan 09 13:49:21 crc kubenswrapper[4919]: I0109 13:49:21.161874 4919 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-n9g6d" podUID="088a3f18-0aab-4042-b674-752c23ed3ac3" containerName="ovn-controller" probeResult="failure" output=< Jan 09 13:49:21 crc kubenswrapper[4919]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 09 13:49:21 crc kubenswrapper[4919]: > Jan 09 13:49:21 crc kubenswrapper[4919]: I0109 13:49:21.166966 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-rrsng" Jan 09 13:49:21 crc kubenswrapper[4919]: I0109 13:49:21.173134 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-rrsng" Jan 09 13:49:21 crc kubenswrapper[4919]: I0109 13:49:21.246631 4919 patch_prober.go:28] interesting pod/machine-config-daemon-9m5lv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 13:49:21 crc kubenswrapper[4919]: I0109 13:49:21.246689 4919 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 13:49:21 crc kubenswrapper[4919]: I0109 13:49:21.498951 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-n9g6d-config-5w7wb"] Jan 09 13:49:21 crc kubenswrapper[4919]: I0109 13:49:21.500770 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-n9g6d-config-5w7wb"
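Two probe shapes fail above: an exec-style readiness probe whose script output is captured verbatim between output=< and > (ovn-controller still reports 'not connected'), and an HTTP liveness probe against http://127.0.0.1:8798/health that gets "connection refused", meaning nothing was listening on that loopback port at probe time. A minimal sketch of the kind of endpoint such an HTTP probe expects (illustrative only, not the machine-config-daemon's actual code):

package main

import (
	"fmt"
	"net/http"
)

func main() {
	http.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK) // any status in [200,400) counts as probe success
		fmt.Fprintln(w, "ok")
	})
	// The probe dials 127.0.0.1:8798; "connect: connection refused" in the
	// log means no listener was bound there when the kubelet probed.
	if err := http.ListenAndServe("127.0.0.1:8798", nil); err != nil {
		fmt.Println(err)
	}
}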
Jan 09 13:49:21 crc kubenswrapper[4919]: I0109 13:49:21.504613 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 09 13:49:21 crc kubenswrapper[4919]: I0109 13:49:21.514106 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-n9g6d-config-5w7wb"] Jan 09 13:49:21 crc kubenswrapper[4919]: I0109 13:49:21.546958 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d8974139-1677-44f6-beb1-319ac09de22c-additional-scripts\") pod \"ovn-controller-n9g6d-config-5w7wb\" (UID: \"d8974139-1677-44f6-beb1-319ac09de22c\") " pod="openstack/ovn-controller-n9g6d-config-5w7wb" Jan 09 13:49:21 crc kubenswrapper[4919]: I0109 13:49:21.547613 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d8974139-1677-44f6-beb1-319ac09de22c-scripts\") pod \"ovn-controller-n9g6d-config-5w7wb\" (UID: \"d8974139-1677-44f6-beb1-319ac09de22c\") " pod="openstack/ovn-controller-n9g6d-config-5w7wb" Jan 09 13:49:21 crc kubenswrapper[4919]: I0109 13:49:21.547796 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d8974139-1677-44f6-beb1-319ac09de22c-var-log-ovn\") pod \"ovn-controller-n9g6d-config-5w7wb\" (UID: \"d8974139-1677-44f6-beb1-319ac09de22c\") " pod="openstack/ovn-controller-n9g6d-config-5w7wb" Jan 09 13:49:21 crc kubenswrapper[4919]: I0109 13:49:21.547886 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xks5v\" (UniqueName: \"kubernetes.io/projected/d8974139-1677-44f6-beb1-319ac09de22c-kube-api-access-xks5v\") pod \"ovn-controller-n9g6d-config-5w7wb\" (UID: \"d8974139-1677-44f6-beb1-319ac09de22c\") " pod="openstack/ovn-controller-n9g6d-config-5w7wb" Jan 09 13:49:21 crc kubenswrapper[4919]: I0109 13:49:21.547919 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d8974139-1677-44f6-beb1-319ac09de22c-var-run-ovn\") pod \"ovn-controller-n9g6d-config-5w7wb\" (UID: \"d8974139-1677-44f6-beb1-319ac09de22c\") " pod="openstack/ovn-controller-n9g6d-config-5w7wb" Jan 09 13:49:21 crc kubenswrapper[4919]: I0109 13:49:21.547944 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d8974139-1677-44f6-beb1-319ac09de22c-var-run\") pod \"ovn-controller-n9g6d-config-5w7wb\" (UID: \"d8974139-1677-44f6-beb1-319ac09de22c\") " pod="openstack/ovn-controller-n9g6d-config-5w7wb" Jan 09 13:49:21 crc kubenswrapper[4919]: I0109 13:49:21.648821 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d8974139-1677-44f6-beb1-319ac09de22c-additional-scripts\") pod \"ovn-controller-n9g6d-config-5w7wb\" (UID: \"d8974139-1677-44f6-beb1-319ac09de22c\") " pod="openstack/ovn-controller-n9g6d-config-5w7wb" Jan 09 13:49:21 crc kubenswrapper[4919]: I0109 13:49:21.648867 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName:
\"kubernetes.io/configmap/d8974139-1677-44f6-beb1-319ac09de22c-scripts\") pod \"ovn-controller-n9g6d-config-5w7wb\" (UID: \"d8974139-1677-44f6-beb1-319ac09de22c\") " pod="openstack/ovn-controller-n9g6d-config-5w7wb" Jan 09 13:49:21 crc kubenswrapper[4919]: I0109 13:49:21.648896 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d8974139-1677-44f6-beb1-319ac09de22c-var-log-ovn\") pod \"ovn-controller-n9g6d-config-5w7wb\" (UID: \"d8974139-1677-44f6-beb1-319ac09de22c\") " pod="openstack/ovn-controller-n9g6d-config-5w7wb" Jan 09 13:49:21 crc kubenswrapper[4919]: I0109 13:49:21.648938 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xks5v\" (UniqueName: \"kubernetes.io/projected/d8974139-1677-44f6-beb1-319ac09de22c-kube-api-access-xks5v\") pod \"ovn-controller-n9g6d-config-5w7wb\" (UID: \"d8974139-1677-44f6-beb1-319ac09de22c\") " pod="openstack/ovn-controller-n9g6d-config-5w7wb" Jan 09 13:49:21 crc kubenswrapper[4919]: I0109 13:49:21.648961 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d8974139-1677-44f6-beb1-319ac09de22c-var-run-ovn\") pod \"ovn-controller-n9g6d-config-5w7wb\" (UID: \"d8974139-1677-44f6-beb1-319ac09de22c\") " pod="openstack/ovn-controller-n9g6d-config-5w7wb" Jan 09 13:49:21 crc kubenswrapper[4919]: I0109 13:49:21.648981 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d8974139-1677-44f6-beb1-319ac09de22c-var-run\") pod \"ovn-controller-n9g6d-config-5w7wb\" (UID: \"d8974139-1677-44f6-beb1-319ac09de22c\") " pod="openstack/ovn-controller-n9g6d-config-5w7wb" Jan 09 13:49:21 crc kubenswrapper[4919]: I0109 13:49:21.649255 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d8974139-1677-44f6-beb1-319ac09de22c-var-log-ovn\") pod \"ovn-controller-n9g6d-config-5w7wb\" (UID: \"d8974139-1677-44f6-beb1-319ac09de22c\") " pod="openstack/ovn-controller-n9g6d-config-5w7wb" Jan 09 13:49:21 crc kubenswrapper[4919]: I0109 13:49:21.649369 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d8974139-1677-44f6-beb1-319ac09de22c-var-run-ovn\") pod \"ovn-controller-n9g6d-config-5w7wb\" (UID: \"d8974139-1677-44f6-beb1-319ac09de22c\") " pod="openstack/ovn-controller-n9g6d-config-5w7wb" Jan 09 13:49:21 crc kubenswrapper[4919]: I0109 13:49:21.651960 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d8974139-1677-44f6-beb1-319ac09de22c-var-run\") pod \"ovn-controller-n9g6d-config-5w7wb\" (UID: \"d8974139-1677-44f6-beb1-319ac09de22c\") " pod="openstack/ovn-controller-n9g6d-config-5w7wb" Jan 09 13:49:21 crc kubenswrapper[4919]: I0109 13:49:21.653004 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d8974139-1677-44f6-beb1-319ac09de22c-scripts\") pod \"ovn-controller-n9g6d-config-5w7wb\" (UID: \"d8974139-1677-44f6-beb1-319ac09de22c\") " pod="openstack/ovn-controller-n9g6d-config-5w7wb" Jan 09 13:49:21 crc kubenswrapper[4919]: I0109 13:49:21.653568 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/d8974139-1677-44f6-beb1-319ac09de22c-additional-scripts\") pod \"ovn-controller-n9g6d-config-5w7wb\" (UID: \"d8974139-1677-44f6-beb1-319ac09de22c\") " pod="openstack/ovn-controller-n9g6d-config-5w7wb" Jan 09 13:49:21 crc kubenswrapper[4919]: I0109 13:49:21.740414 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xks5v\" (UniqueName: \"kubernetes.io/projected/d8974139-1677-44f6-beb1-319ac09de22c-kube-api-access-xks5v\") pod \"ovn-controller-n9g6d-config-5w7wb\" (UID: \"d8974139-1677-44f6-beb1-319ac09de22c\") " pod="openstack/ovn-controller-n9g6d-config-5w7wb" Jan 09 13:49:21 crc kubenswrapper[4919]: I0109 13:49:21.848416 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-n9g6d-config-5w7wb" Jan 09 13:49:25 crc kubenswrapper[4919]: I0109 13:49:25.688656 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f55583f6-0518-4977-89a9-e4f12b0eae89-etc-swift\") pod \"swift-storage-0\" (UID: \"f55583f6-0518-4977-89a9-e4f12b0eae89\") " pod="openstack/swift-storage-0" Jan 09 13:49:25 crc kubenswrapper[4919]: I0109 13:49:25.696053 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f55583f6-0518-4977-89a9-e4f12b0eae89-etc-swift\") pod \"swift-storage-0\" (UID: \"f55583f6-0518-4977-89a9-e4f12b0eae89\") " pod="openstack/swift-storage-0" Jan 09 13:49:25 crc kubenswrapper[4919]: I0109 13:49:25.840978 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 09 13:49:26 crc kubenswrapper[4919]: I0109 13:49:26.280014 4919 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-n9g6d" podUID="088a3f18-0aab-4042-b674-752c23ed3ac3" containerName="ovn-controller" probeResult="failure" output=< Jan 09 13:49:26 crc kubenswrapper[4919]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 09 13:49:26 crc kubenswrapper[4919]: > Jan 09 13:49:27 crc kubenswrapper[4919]: E0109 13:49:27.541754 4919 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api@sha256:e4aa4ebbb1e581a12040e9ad2ae2709ac31b5d965bb64fc4252d1028b05c565f" Jan 09 13:49:27 crc kubenswrapper[4919]: E0109 13:49:27.542181 4919 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api@sha256:e4aa4ebbb1e581a12040e9ad2ae2709ac31b5d965bb64fc4252d1028b05c565f,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rpmnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-wsjrx_openstack(15490209-86af-4f77-8103-27d097279b7d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 09 13:49:27 crc kubenswrapper[4919]: E0109 13:49:27.543695 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-wsjrx" podUID="15490209-86af-4f77-8103-27d097279b7d" Jan 09 13:49:27 crc kubenswrapper[4919]: I0109 13:49:27.664017 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-fvgzp" Jan 09 13:49:27 crc kubenswrapper[4919]: I0109 13:49:27.931850 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb00094e-7c2c-45a0-b671-5d62017a9949-operator-scripts\") pod \"cb00094e-7c2c-45a0-b671-5d62017a9949\" (UID: \"cb00094e-7c2c-45a0-b671-5d62017a9949\") " Jan 09 13:49:27 crc kubenswrapper[4919]: I0109 13:49:27.931993 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzbk8\" (UniqueName: \"kubernetes.io/projected/cb00094e-7c2c-45a0-b671-5d62017a9949-kube-api-access-rzbk8\") pod \"cb00094e-7c2c-45a0-b671-5d62017a9949\" (UID: \"cb00094e-7c2c-45a0-b671-5d62017a9949\") " Jan 09 13:49:27 crc kubenswrapper[4919]: I0109 13:49:27.932846 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb00094e-7c2c-45a0-b671-5d62017a9949-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cb00094e-7c2c-45a0-b671-5d62017a9949" (UID: "cb00094e-7c2c-45a0-b671-5d62017a9949"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:49:27 crc kubenswrapper[4919]: I0109 13:49:27.966952 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb00094e-7c2c-45a0-b671-5d62017a9949-kube-api-access-rzbk8" (OuterVolumeSpecName: "kube-api-access-rzbk8") pod "cb00094e-7c2c-45a0-b671-5d62017a9949" (UID: "cb00094e-7c2c-45a0-b671-5d62017a9949"). InnerVolumeSpecName "kube-api-access-rzbk8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:49:27 crc kubenswrapper[4919]: I0109 13:49:27.992559 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fvgzp" event={"ID":"cb00094e-7c2c-45a0-b671-5d62017a9949","Type":"ContainerDied","Data":"7dd244fce2d78fb88aa03cb09d3aea7451e5a1ad738d5c03c9608ce061760c97"} Jan 09 13:49:27 crc kubenswrapper[4919]: I0109 13:49:27.992600 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7dd244fce2d78fb88aa03cb09d3aea7451e5a1ad738d5c03c9608ce061760c97" Jan 09 13:49:27 crc kubenswrapper[4919]: I0109 13:49:27.992702 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-fvgzp" Jan 09 13:49:27 crc kubenswrapper[4919]: I0109 13:49:27.994533 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ba39e0c2-1804-45a7-9dd1-2c20f229b648","Type":"ContainerStarted","Data":"222f92d12f874e3171295a1be715ff54bd117d9c257390ea33e6a0a69878ed79"} Jan 09 13:49:27 crc kubenswrapper[4919]: I0109 13:49:27.994767 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 09 13:49:27 crc kubenswrapper[4919]: I0109 13:49:27.996783 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9b80a84d-c869-407b-b3d2-3be828183ae5","Type":"ContainerStarted","Data":"ad0f9de654816891d30cd0f0cf424ef02601e942c0e25c60a3ff325074bad81c"} Jan 09 13:49:27 crc kubenswrapper[4919]: I0109 13:49:27.997104 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:49:27 crc kubenswrapper[4919]: E0109 13:49:27.998038 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api@sha256:e4aa4ebbb1e581a12040e9ad2ae2709ac31b5d965bb64fc4252d1028b05c565f\\\"\"" pod="openstack/glance-db-sync-wsjrx" podUID="15490209-86af-4f77-8103-27d097279b7d" Jan 09 13:49:28 crc kubenswrapper[4919]: I0109 13:49:28.027881 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371953.826912 podStartE2EDuration="1m23.027864253s" podCreationTimestamp="2026-01-09 13:48:05 +0000 UTC" firstStartedPulling="2026-01-09 13:48:08.372055127 +0000 UTC m=+1067.919894577" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:49:28.020764057 +0000 UTC m=+1147.568603497" watchObservedRunningTime="2026-01-09 13:49:28.027864253 +0000 UTC m=+1147.575703703" Jan 09 13:49:28 crc kubenswrapper[4919]: I0109 13:49:28.041937 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzbk8\" (UniqueName: \"kubernetes.io/projected/cb00094e-7c2c-45a0-b671-5d62017a9949-kube-api-access-rzbk8\") on node \"crc\" DevicePath \"\"" Jan 09 13:49:28 crc kubenswrapper[4919]: I0109 13:49:28.041971 4919 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb00094e-7c2c-45a0-b671-5d62017a9949-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 13:49:28 crc kubenswrapper[4919]: I0109 13:49:28.075495 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-n9g6d-config-5w7wb"] Jan 09 13:49:28 crc kubenswrapper[4919]: W0109 13:49:28.080409 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8974139_1677_44f6_beb1_319ac09de22c.slice/crio-eb9de9dd34080163e6738950155fa5091f87c829277fc9328c645a459a79cb7a WatchSource:0}: Error finding container eb9de9dd34080163e6738950155fa5091f87c829277fc9328c645a459a79cb7a: Status 404 returned error can't find the container with id eb9de9dd34080163e6738950155fa5091f87c829277fc9328c645a459a79cb7a Jan 09 13:49:28 crc kubenswrapper[4919]: I0109 13:49:28.250393 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=51.503987132 podStartE2EDuration="1m22.250370974s" podCreationTimestamp="2026-01-09 13:48:06 +0000 UTC" firstStartedPulling="2026-01-09 13:48:08.796969824 +0000 UTC m=+1068.344809274" lastFinishedPulling="2026-01-09 13:48:39.543353656 +0000 UTC m=+1099.091193116" observedRunningTime="2026-01-09 13:49:28.099721349 +0000 UTC m=+1147.647560819" watchObservedRunningTime="2026-01-09 13:49:28.250370974 +0000 UTC m=+1147.798210434"
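Note the impossible podStartSLOduration=-9223371953.826912 for rabbitmq-server-0 above, logged alongside lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC", which is Go's zero time.Time. Subtracting a real 2026 timestamp from the zero time overflows time.Duration (int64 nanoseconds), and time.Time.Sub saturates at the minimum duration; the logged figure is exactly that floor plus the 1m23s podStartE2EDuration, so the negative value is an artifact of arithmetic on an unset pull timestamp rather than a real measurement. A short demonstration:

package main

import (
	"fmt"
	"time"
)

func main() {
	var zero time.Time // "0001-01-01 00:00:00 +0000 UTC", as printed in the log
	now := time.Date(2026, 1, 9, 13, 49, 28, 0, time.UTC)
	d := zero.Sub(now) // exceeds int64 nanoseconds, so Sub saturates at math.MinInt64
	fmt.Printf("%.6fs\n", d.Seconds()) // -9223372036.854776s
	// -9223372036.854776 + 83.027864 (the 1m23.027864253s podStartE2EDuration)
	// = -9223371953.826912, the exact podStartSLOduration logged above.
}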
podStartE2EDuration="1m22.250370974s" podCreationTimestamp="2026-01-09 13:48:06 +0000 UTC" firstStartedPulling="2026-01-09 13:48:08.796969824 +0000 UTC m=+1068.344809274" lastFinishedPulling="2026-01-09 13:48:39.543353656 +0000 UTC m=+1099.091193116" observedRunningTime="2026-01-09 13:49:28.099721349 +0000 UTC m=+1147.647560819" watchObservedRunningTime="2026-01-09 13:49:28.250370974 +0000 UTC m=+1147.798210434" Jan 09 13:49:28 crc kubenswrapper[4919]: I0109 13:49:28.251320 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 09 13:49:28 crc kubenswrapper[4919]: W0109 13:49:28.261445 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf55583f6_0518_4977_89a9_e4f12b0eae89.slice/crio-5c8f6fc62de6b4484ab2b640120d626a2a46b88e69d2a52dcf804f43c0817e31 WatchSource:0}: Error finding container 5c8f6fc62de6b4484ab2b640120d626a2a46b88e69d2a52dcf804f43c0817e31: Status 404 returned error can't find the container with id 5c8f6fc62de6b4484ab2b640120d626a2a46b88e69d2a52dcf804f43c0817e31 Jan 09 13:49:29 crc kubenswrapper[4919]: I0109 13:49:29.005537 4919 generic.go:334] "Generic (PLEG): container finished" podID="d8974139-1677-44f6-beb1-319ac09de22c" containerID="8e62e07d8838b74d93dec6dfc9405411a380785641ee444eba17e70aa209a104" exitCode=0 Jan 09 13:49:29 crc kubenswrapper[4919]: I0109 13:49:29.005718 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-n9g6d-config-5w7wb" event={"ID":"d8974139-1677-44f6-beb1-319ac09de22c","Type":"ContainerDied","Data":"8e62e07d8838b74d93dec6dfc9405411a380785641ee444eba17e70aa209a104"} Jan 09 13:49:29 crc kubenswrapper[4919]: I0109 13:49:29.005885 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-n9g6d-config-5w7wb" event={"ID":"d8974139-1677-44f6-beb1-319ac09de22c","Type":"ContainerStarted","Data":"eb9de9dd34080163e6738950155fa5091f87c829277fc9328c645a459a79cb7a"} Jan 09 13:49:29 crc kubenswrapper[4919]: I0109 13:49:29.007038 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f55583f6-0518-4977-89a9-e4f12b0eae89","Type":"ContainerStarted","Data":"5c8f6fc62de6b4484ab2b640120d626a2a46b88e69d2a52dcf804f43c0817e31"} Jan 09 13:49:30 crc kubenswrapper[4919]: I0109 13:49:30.015531 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f55583f6-0518-4977-89a9-e4f12b0eae89","Type":"ContainerStarted","Data":"aba19e11fd1466d09f0c83d586e04b0d7c57dca9933cb1afc785f896b3d21562"} Jan 09 13:49:30 crc kubenswrapper[4919]: I0109 13:49:30.370642 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-n9g6d-config-5w7wb" Jan 09 13:49:30 crc kubenswrapper[4919]: I0109 13:49:30.483619 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d8974139-1677-44f6-beb1-319ac09de22c-additional-scripts\") pod \"d8974139-1677-44f6-beb1-319ac09de22c\" (UID: \"d8974139-1677-44f6-beb1-319ac09de22c\") " Jan 09 13:49:30 crc kubenswrapper[4919]: I0109 13:49:30.483774 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d8974139-1677-44f6-beb1-319ac09de22c-scripts\") pod \"d8974139-1677-44f6-beb1-319ac09de22c\" (UID: \"d8974139-1677-44f6-beb1-319ac09de22c\") " Jan 09 13:49:30 crc kubenswrapper[4919]: I0109 13:49:30.483878 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d8974139-1677-44f6-beb1-319ac09de22c-var-run-ovn\") pod \"d8974139-1677-44f6-beb1-319ac09de22c\" (UID: \"d8974139-1677-44f6-beb1-319ac09de22c\") " Jan 09 13:49:30 crc kubenswrapper[4919]: I0109 13:49:30.483922 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xks5v\" (UniqueName: \"kubernetes.io/projected/d8974139-1677-44f6-beb1-319ac09de22c-kube-api-access-xks5v\") pod \"d8974139-1677-44f6-beb1-319ac09de22c\" (UID: \"d8974139-1677-44f6-beb1-319ac09de22c\") " Jan 09 13:49:30 crc kubenswrapper[4919]: I0109 13:49:30.483980 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d8974139-1677-44f6-beb1-319ac09de22c-var-log-ovn\") pod \"d8974139-1677-44f6-beb1-319ac09de22c\" (UID: \"d8974139-1677-44f6-beb1-319ac09de22c\") " Jan 09 13:49:30 crc kubenswrapper[4919]: I0109 13:49:30.484029 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d8974139-1677-44f6-beb1-319ac09de22c-var-run\") pod \"d8974139-1677-44f6-beb1-319ac09de22c\" (UID: \"d8974139-1677-44f6-beb1-319ac09de22c\") " Jan 09 13:49:30 crc kubenswrapper[4919]: I0109 13:49:30.484018 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8974139-1677-44f6-beb1-319ac09de22c-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "d8974139-1677-44f6-beb1-319ac09de22c" (UID: "d8974139-1677-44f6-beb1-319ac09de22c"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 13:49:30 crc kubenswrapper[4919]: I0109 13:49:30.484408 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8974139-1677-44f6-beb1-319ac09de22c-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "d8974139-1677-44f6-beb1-319ac09de22c" (UID: "d8974139-1677-44f6-beb1-319ac09de22c"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:49:30 crc kubenswrapper[4919]: I0109 13:49:30.484441 4919 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d8974139-1677-44f6-beb1-319ac09de22c-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 09 13:49:30 crc kubenswrapper[4919]: I0109 13:49:30.484473 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8974139-1677-44f6-beb1-319ac09de22c-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "d8974139-1677-44f6-beb1-319ac09de22c" (UID: "d8974139-1677-44f6-beb1-319ac09de22c"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 13:49:30 crc kubenswrapper[4919]: I0109 13:49:30.484482 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8974139-1677-44f6-beb1-319ac09de22c-var-run" (OuterVolumeSpecName: "var-run") pod "d8974139-1677-44f6-beb1-319ac09de22c" (UID: "d8974139-1677-44f6-beb1-319ac09de22c"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 13:49:30 crc kubenswrapper[4919]: I0109 13:49:30.484812 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8974139-1677-44f6-beb1-319ac09de22c-scripts" (OuterVolumeSpecName: "scripts") pod "d8974139-1677-44f6-beb1-319ac09de22c" (UID: "d8974139-1677-44f6-beb1-319ac09de22c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:49:30 crc kubenswrapper[4919]: I0109 13:49:30.508389 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8974139-1677-44f6-beb1-319ac09de22c-kube-api-access-xks5v" (OuterVolumeSpecName: "kube-api-access-xks5v") pod "d8974139-1677-44f6-beb1-319ac09de22c" (UID: "d8974139-1677-44f6-beb1-319ac09de22c"). InnerVolumeSpecName "kube-api-access-xks5v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:49:30 crc kubenswrapper[4919]: I0109 13:49:30.586426 4919 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d8974139-1677-44f6-beb1-319ac09de22c-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 13:49:30 crc kubenswrapper[4919]: I0109 13:49:30.586465 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xks5v\" (UniqueName: \"kubernetes.io/projected/d8974139-1677-44f6-beb1-319ac09de22c-kube-api-access-xks5v\") on node \"crc\" DevicePath \"\"" Jan 09 13:49:30 crc kubenswrapper[4919]: I0109 13:49:30.586480 4919 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d8974139-1677-44f6-beb1-319ac09de22c-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 09 13:49:30 crc kubenswrapper[4919]: I0109 13:49:30.586495 4919 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d8974139-1677-44f6-beb1-319ac09de22c-var-run\") on node \"crc\" DevicePath \"\"" Jan 09 13:49:30 crc kubenswrapper[4919]: I0109 13:49:30.586504 4919 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d8974139-1677-44f6-beb1-319ac09de22c-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 13:49:31 crc kubenswrapper[4919]: I0109 13:49:31.025700 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-n9g6d-config-5w7wb" event={"ID":"d8974139-1677-44f6-beb1-319ac09de22c","Type":"ContainerDied","Data":"eb9de9dd34080163e6738950155fa5091f87c829277fc9328c645a459a79cb7a"} Jan 09 13:49:31 crc kubenswrapper[4919]: I0109 13:49:31.025748 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb9de9dd34080163e6738950155fa5091f87c829277fc9328c645a459a79cb7a" Jan 09 13:49:31 crc kubenswrapper[4919]: I0109 13:49:31.025785 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-n9g6d-config-5w7wb" Jan 09 13:49:31 crc kubenswrapper[4919]: I0109 13:49:31.193551 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-n9g6d" Jan 09 13:49:31 crc kubenswrapper[4919]: I0109 13:49:31.532202 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-n9g6d-config-5w7wb"] Jan 09 13:49:31 crc kubenswrapper[4919]: I0109 13:49:31.536735 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-n9g6d-config-5w7wb"] Jan 09 13:49:31 crc kubenswrapper[4919]: I0109 13:49:31.648651 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-n9g6d-config-5pbvk"] Jan 09 13:49:31 crc kubenswrapper[4919]: E0109 13:49:31.649084 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8974139-1677-44f6-beb1-319ac09de22c" containerName="ovn-config" Jan 09 13:49:31 crc kubenswrapper[4919]: I0109 13:49:31.649113 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8974139-1677-44f6-beb1-319ac09de22c" containerName="ovn-config" Jan 09 13:49:31 crc kubenswrapper[4919]: E0109 13:49:31.649156 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb00094e-7c2c-45a0-b671-5d62017a9949" containerName="mariadb-account-create-update" Jan 09 13:49:31 crc kubenswrapper[4919]: I0109 13:49:31.649166 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb00094e-7c2c-45a0-b671-5d62017a9949" containerName="mariadb-account-create-update" Jan 09 13:49:31 crc kubenswrapper[4919]: I0109 13:49:31.649380 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8974139-1677-44f6-beb1-319ac09de22c" containerName="ovn-config" Jan 09 13:49:31 crc kubenswrapper[4919]: I0109 13:49:31.649410 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb00094e-7c2c-45a0-b671-5d62017a9949" containerName="mariadb-account-create-update" Jan 09 13:49:31 crc kubenswrapper[4919]: I0109 13:49:31.650113 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-n9g6d-config-5pbvk" Jan 09 13:49:31 crc kubenswrapper[4919]: I0109 13:49:31.652186 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 09 13:49:31 crc kubenswrapper[4919]: I0109 13:49:31.672754 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-n9g6d-config-5pbvk"] Jan 09 13:49:31 crc kubenswrapper[4919]: I0109 13:49:31.799493 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0fbef93e-7b75-4c01-9eeb-26daab06a8ea-var-log-ovn\") pod \"ovn-controller-n9g6d-config-5pbvk\" (UID: \"0fbef93e-7b75-4c01-9eeb-26daab06a8ea\") " pod="openstack/ovn-controller-n9g6d-config-5pbvk" Jan 09 13:49:31 crc kubenswrapper[4919]: I0109 13:49:31.799542 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfc96\" (UniqueName: \"kubernetes.io/projected/0fbef93e-7b75-4c01-9eeb-26daab06a8ea-kube-api-access-xfc96\") pod \"ovn-controller-n9g6d-config-5pbvk\" (UID: \"0fbef93e-7b75-4c01-9eeb-26daab06a8ea\") " pod="openstack/ovn-controller-n9g6d-config-5pbvk" Jan 09 13:49:31 crc kubenswrapper[4919]: I0109 13:49:31.799570 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0fbef93e-7b75-4c01-9eeb-26daab06a8ea-var-run\") pod \"ovn-controller-n9g6d-config-5pbvk\" (UID: \"0fbef93e-7b75-4c01-9eeb-26daab06a8ea\") " pod="openstack/ovn-controller-n9g6d-config-5pbvk" Jan 09 13:49:31 crc kubenswrapper[4919]: I0109 13:49:31.799895 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0fbef93e-7b75-4c01-9eeb-26daab06a8ea-scripts\") pod \"ovn-controller-n9g6d-config-5pbvk\" (UID: \"0fbef93e-7b75-4c01-9eeb-26daab06a8ea\") " pod="openstack/ovn-controller-n9g6d-config-5pbvk" Jan 09 13:49:31 crc kubenswrapper[4919]: I0109 13:49:31.800027 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0fbef93e-7b75-4c01-9eeb-26daab06a8ea-additional-scripts\") pod \"ovn-controller-n9g6d-config-5pbvk\" (UID: \"0fbef93e-7b75-4c01-9eeb-26daab06a8ea\") " pod="openstack/ovn-controller-n9g6d-config-5pbvk" Jan 09 13:49:31 crc kubenswrapper[4919]: I0109 13:49:31.800083 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0fbef93e-7b75-4c01-9eeb-26daab06a8ea-var-run-ovn\") pod \"ovn-controller-n9g6d-config-5pbvk\" (UID: \"0fbef93e-7b75-4c01-9eeb-26daab06a8ea\") " pod="openstack/ovn-controller-n9g6d-config-5pbvk" Jan 09 13:49:31 crc kubenswrapper[4919]: I0109 13:49:31.902157 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0fbef93e-7b75-4c01-9eeb-26daab06a8ea-var-log-ovn\") pod \"ovn-controller-n9g6d-config-5pbvk\" (UID: \"0fbef93e-7b75-4c01-9eeb-26daab06a8ea\") " pod="openstack/ovn-controller-n9g6d-config-5pbvk" Jan 09 13:49:31 crc kubenswrapper[4919]: I0109 13:49:31.902237 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfc96\" (UniqueName: 
\"kubernetes.io/projected/0fbef93e-7b75-4c01-9eeb-26daab06a8ea-kube-api-access-xfc96\") pod \"ovn-controller-n9g6d-config-5pbvk\" (UID: \"0fbef93e-7b75-4c01-9eeb-26daab06a8ea\") " pod="openstack/ovn-controller-n9g6d-config-5pbvk" Jan 09 13:49:31 crc kubenswrapper[4919]: I0109 13:49:31.902278 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0fbef93e-7b75-4c01-9eeb-26daab06a8ea-var-run\") pod \"ovn-controller-n9g6d-config-5pbvk\" (UID: \"0fbef93e-7b75-4c01-9eeb-26daab06a8ea\") " pod="openstack/ovn-controller-n9g6d-config-5pbvk" Jan 09 13:49:31 crc kubenswrapper[4919]: I0109 13:49:31.902370 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0fbef93e-7b75-4c01-9eeb-26daab06a8ea-scripts\") pod \"ovn-controller-n9g6d-config-5pbvk\" (UID: \"0fbef93e-7b75-4c01-9eeb-26daab06a8ea\") " pod="openstack/ovn-controller-n9g6d-config-5pbvk" Jan 09 13:49:31 crc kubenswrapper[4919]: I0109 13:49:31.902602 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0fbef93e-7b75-4c01-9eeb-26daab06a8ea-additional-scripts\") pod \"ovn-controller-n9g6d-config-5pbvk\" (UID: \"0fbef93e-7b75-4c01-9eeb-26daab06a8ea\") " pod="openstack/ovn-controller-n9g6d-config-5pbvk" Jan 09 13:49:31 crc kubenswrapper[4919]: I0109 13:49:31.902625 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0fbef93e-7b75-4c01-9eeb-26daab06a8ea-var-run-ovn\") pod \"ovn-controller-n9g6d-config-5pbvk\" (UID: \"0fbef93e-7b75-4c01-9eeb-26daab06a8ea\") " pod="openstack/ovn-controller-n9g6d-config-5pbvk" Jan 09 13:49:31 crc kubenswrapper[4919]: I0109 13:49:31.902721 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0fbef93e-7b75-4c01-9eeb-26daab06a8ea-var-log-ovn\") pod \"ovn-controller-n9g6d-config-5pbvk\" (UID: \"0fbef93e-7b75-4c01-9eeb-26daab06a8ea\") " pod="openstack/ovn-controller-n9g6d-config-5pbvk" Jan 09 13:49:31 crc kubenswrapper[4919]: I0109 13:49:31.902934 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0fbef93e-7b75-4c01-9eeb-26daab06a8ea-var-run-ovn\") pod \"ovn-controller-n9g6d-config-5pbvk\" (UID: \"0fbef93e-7b75-4c01-9eeb-26daab06a8ea\") " pod="openstack/ovn-controller-n9g6d-config-5pbvk" Jan 09 13:49:31 crc kubenswrapper[4919]: I0109 13:49:31.903281 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0fbef93e-7b75-4c01-9eeb-26daab06a8ea-var-run\") pod \"ovn-controller-n9g6d-config-5pbvk\" (UID: \"0fbef93e-7b75-4c01-9eeb-26daab06a8ea\") " pod="openstack/ovn-controller-n9g6d-config-5pbvk" Jan 09 13:49:31 crc kubenswrapper[4919]: I0109 13:49:31.903946 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0fbef93e-7b75-4c01-9eeb-26daab06a8ea-additional-scripts\") pod \"ovn-controller-n9g6d-config-5pbvk\" (UID: \"0fbef93e-7b75-4c01-9eeb-26daab06a8ea\") " pod="openstack/ovn-controller-n9g6d-config-5pbvk" Jan 09 13:49:31 crc kubenswrapper[4919]: I0109 13:49:31.905054 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/0fbef93e-7b75-4c01-9eeb-26daab06a8ea-scripts\") pod \"ovn-controller-n9g6d-config-5pbvk\" (UID: \"0fbef93e-7b75-4c01-9eeb-26daab06a8ea\") " pod="openstack/ovn-controller-n9g6d-config-5pbvk" Jan 09 13:49:31 crc kubenswrapper[4919]: I0109 13:49:31.925347 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfc96\" (UniqueName: \"kubernetes.io/projected/0fbef93e-7b75-4c01-9eeb-26daab06a8ea-kube-api-access-xfc96\") pod \"ovn-controller-n9g6d-config-5pbvk\" (UID: \"0fbef93e-7b75-4c01-9eeb-26daab06a8ea\") " pod="openstack/ovn-controller-n9g6d-config-5pbvk" Jan 09 13:49:32 crc kubenswrapper[4919]: I0109 13:49:32.039958 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-n9g6d-config-5pbvk" Jan 09 13:49:32 crc kubenswrapper[4919]: I0109 13:49:32.041591 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f55583f6-0518-4977-89a9-e4f12b0eae89","Type":"ContainerStarted","Data":"276a82f31f66eea01f985cc45426a61b361d0594fcd6d28a9043cd53bfc655f3"} Jan 09 13:49:32 crc kubenswrapper[4919]: I0109 13:49:32.041635 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f55583f6-0518-4977-89a9-e4f12b0eae89","Type":"ContainerStarted","Data":"4dee2a61f947dfc7b75b1cf41a06bcd645538cd54131df98d342e35c735775b7"} Jan 09 13:49:32 crc kubenswrapper[4919]: I0109 13:49:32.041648 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f55583f6-0518-4977-89a9-e4f12b0eae89","Type":"ContainerStarted","Data":"99a315463d2e09bda5441fd2dbfc4366df572c6ad7422c5d205b83a20eaa7760"} Jan 09 13:49:32 crc kubenswrapper[4919]: I0109 13:49:32.633768 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-n9g6d-config-5pbvk"] Jan 09 13:49:32 crc kubenswrapper[4919]: I0109 13:49:32.762479 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8974139-1677-44f6-beb1-319ac09de22c" path="/var/lib/kubelet/pods/d8974139-1677-44f6-beb1-319ac09de22c/volumes" Jan 09 13:49:33 crc kubenswrapper[4919]: I0109 13:49:33.059256 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-n9g6d-config-5pbvk" event={"ID":"0fbef93e-7b75-4c01-9eeb-26daab06a8ea","Type":"ContainerStarted","Data":"9b4bf8860f5cb457909f82090e3d63fcb67c2582085ca1701ed06c3c8bc226cb"} Jan 09 13:49:34 crc kubenswrapper[4919]: I0109 13:49:34.132925 4919 generic.go:334] "Generic (PLEG): container finished" podID="0fbef93e-7b75-4c01-9eeb-26daab06a8ea" containerID="ebc3eab3b4b440ac2f45579817c76128a2c10aaf7855792351571edea4bc8f19" exitCode=0 Jan 09 13:49:34 crc kubenswrapper[4919]: I0109 13:49:34.134541 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-n9g6d-config-5pbvk" event={"ID":"0fbef93e-7b75-4c01-9eeb-26daab06a8ea","Type":"ContainerDied","Data":"ebc3eab3b4b440ac2f45579817c76128a2c10aaf7855792351571edea4bc8f19"} Jan 09 13:49:34 crc kubenswrapper[4919]: I0109 13:49:34.137718 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f55583f6-0518-4977-89a9-e4f12b0eae89","Type":"ContainerStarted","Data":"b2c666a4f6a3d4d92e92e6f7ae9ead284e3591fe2a2b537389477a2b4c2ba8d6"} Jan 09 13:49:34 crc kubenswrapper[4919]: I0109 13:49:34.137754 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"f55583f6-0518-4977-89a9-e4f12b0eae89","Type":"ContainerStarted","Data":"f1176376db58f4bf88be561c9a1d2cb1b7a0762b730a302941f54e22c9f22e3b"} Jan 09 13:49:34 crc kubenswrapper[4919]: I0109 13:49:34.137766 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f55583f6-0518-4977-89a9-e4f12b0eae89","Type":"ContainerStarted","Data":"ed67a2fc91d514de495c5559eb5a594916b0e2fab1afdaacd743632874572f87"} Jan 09 13:49:35 crc kubenswrapper[4919]: I0109 13:49:35.155723 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f55583f6-0518-4977-89a9-e4f12b0eae89","Type":"ContainerStarted","Data":"c1ab02c83cf344690541f5f99fc9164418b693650e989564dbfb974416a1ba87"} Jan 09 13:49:36 crc kubenswrapper[4919]: I0109 13:49:36.477892 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-n9g6d-config-5pbvk" Jan 09 13:49:36 crc kubenswrapper[4919]: I0109 13:49:36.596825 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0fbef93e-7b75-4c01-9eeb-26daab06a8ea-scripts\") pod \"0fbef93e-7b75-4c01-9eeb-26daab06a8ea\" (UID: \"0fbef93e-7b75-4c01-9eeb-26daab06a8ea\") " Jan 09 13:49:36 crc kubenswrapper[4919]: I0109 13:49:36.597125 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0fbef93e-7b75-4c01-9eeb-26daab06a8ea-var-run-ovn\") pod \"0fbef93e-7b75-4c01-9eeb-26daab06a8ea\" (UID: \"0fbef93e-7b75-4c01-9eeb-26daab06a8ea\") " Jan 09 13:49:36 crc kubenswrapper[4919]: I0109 13:49:36.597231 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfc96\" (UniqueName: \"kubernetes.io/projected/0fbef93e-7b75-4c01-9eeb-26daab06a8ea-kube-api-access-xfc96\") pod \"0fbef93e-7b75-4c01-9eeb-26daab06a8ea\" (UID: \"0fbef93e-7b75-4c01-9eeb-26daab06a8ea\") " Jan 09 13:49:36 crc kubenswrapper[4919]: I0109 13:49:36.597273 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0fbef93e-7b75-4c01-9eeb-26daab06a8ea-additional-scripts\") pod \"0fbef93e-7b75-4c01-9eeb-26daab06a8ea\" (UID: \"0fbef93e-7b75-4c01-9eeb-26daab06a8ea\") " Jan 09 13:49:36 crc kubenswrapper[4919]: I0109 13:49:36.597368 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0fbef93e-7b75-4c01-9eeb-26daab06a8ea-var-log-ovn\") pod \"0fbef93e-7b75-4c01-9eeb-26daab06a8ea\" (UID: \"0fbef93e-7b75-4c01-9eeb-26daab06a8ea\") " Jan 09 13:49:36 crc kubenswrapper[4919]: I0109 13:49:36.597410 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0fbef93e-7b75-4c01-9eeb-26daab06a8ea-var-run\") pod \"0fbef93e-7b75-4c01-9eeb-26daab06a8ea\" (UID: \"0fbef93e-7b75-4c01-9eeb-26daab06a8ea\") " Jan 09 13:49:36 crc kubenswrapper[4919]: I0109 13:49:36.597354 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fbef93e-7b75-4c01-9eeb-26daab06a8ea-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "0fbef93e-7b75-4c01-9eeb-26daab06a8ea" (UID: "0fbef93e-7b75-4c01-9eeb-26daab06a8ea"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 13:49:36 crc kubenswrapper[4919]: I0109 13:49:36.597628 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fbef93e-7b75-4c01-9eeb-26daab06a8ea-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "0fbef93e-7b75-4c01-9eeb-26daab06a8ea" (UID: "0fbef93e-7b75-4c01-9eeb-26daab06a8ea"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 13:49:36 crc kubenswrapper[4919]: I0109 13:49:36.597815 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fbef93e-7b75-4c01-9eeb-26daab06a8ea-var-run" (OuterVolumeSpecName: "var-run") pod "0fbef93e-7b75-4c01-9eeb-26daab06a8ea" (UID: "0fbef93e-7b75-4c01-9eeb-26daab06a8ea"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 13:49:36 crc kubenswrapper[4919]: I0109 13:49:36.598198 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fbef93e-7b75-4c01-9eeb-26daab06a8ea-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "0fbef93e-7b75-4c01-9eeb-26daab06a8ea" (UID: "0fbef93e-7b75-4c01-9eeb-26daab06a8ea"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:49:36 crc kubenswrapper[4919]: I0109 13:49:36.599530 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fbef93e-7b75-4c01-9eeb-26daab06a8ea-scripts" (OuterVolumeSpecName: "scripts") pod "0fbef93e-7b75-4c01-9eeb-26daab06a8ea" (UID: "0fbef93e-7b75-4c01-9eeb-26daab06a8ea"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:49:36 crc kubenswrapper[4919]: I0109 13:49:36.607272 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fbef93e-7b75-4c01-9eeb-26daab06a8ea-kube-api-access-xfc96" (OuterVolumeSpecName: "kube-api-access-xfc96") pod "0fbef93e-7b75-4c01-9eeb-26daab06a8ea" (UID: "0fbef93e-7b75-4c01-9eeb-26daab06a8ea"). InnerVolumeSpecName "kube-api-access-xfc96". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:49:36 crc kubenswrapper[4919]: I0109 13:49:36.699090 4919 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0fbef93e-7b75-4c01-9eeb-26daab06a8ea-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 09 13:49:36 crc kubenswrapper[4919]: I0109 13:49:36.699397 4919 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0fbef93e-7b75-4c01-9eeb-26daab06a8ea-var-run\") on node \"crc\" DevicePath \"\"" Jan 09 13:49:36 crc kubenswrapper[4919]: I0109 13:49:36.699512 4919 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0fbef93e-7b75-4c01-9eeb-26daab06a8ea-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 13:49:36 crc kubenswrapper[4919]: I0109 13:49:36.699654 4919 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0fbef93e-7b75-4c01-9eeb-26daab06a8ea-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 09 13:49:36 crc kubenswrapper[4919]: I0109 13:49:36.699794 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfc96\" (UniqueName: \"kubernetes.io/projected/0fbef93e-7b75-4c01-9eeb-26daab06a8ea-kube-api-access-xfc96\") on node \"crc\" DevicePath \"\"" Jan 09 13:49:36 crc kubenswrapper[4919]: I0109 13:49:36.699921 4919 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0fbef93e-7b75-4c01-9eeb-26daab06a8ea-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 13:49:37 crc kubenswrapper[4919]: I0109 13:49:37.220160 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f55583f6-0518-4977-89a9-e4f12b0eae89","Type":"ContainerStarted","Data":"f2698925734d47aba6926259bb6e3f8c6574a6e502512e302740a6f91963e34f"} Jan 09 13:49:37 crc kubenswrapper[4919]: I0109 13:49:37.220227 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f55583f6-0518-4977-89a9-e4f12b0eae89","Type":"ContainerStarted","Data":"50dbc9cd704db04921f84ad84bf578b54106d62d3082f4a5705f9c2229960fee"} Jan 09 13:49:37 crc kubenswrapper[4919]: I0109 13:49:37.220240 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f55583f6-0518-4977-89a9-e4f12b0eae89","Type":"ContainerStarted","Data":"5ed7d98829e8f09826a17cc0f2e81e44f2fd3036876cfae18ebc721d30eb9e58"} Jan 09 13:49:37 crc kubenswrapper[4919]: I0109 13:49:37.222620 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-n9g6d-config-5pbvk" event={"ID":"0fbef93e-7b75-4c01-9eeb-26daab06a8ea","Type":"ContainerDied","Data":"9b4bf8860f5cb457909f82090e3d63fcb67c2582085ca1701ed06c3c8bc226cb"} Jan 09 13:49:37 crc kubenswrapper[4919]: I0109 13:49:37.222942 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b4bf8860f5cb457909f82090e3d63fcb67c2582085ca1701ed06c3c8bc226cb" Jan 09 13:49:37 crc kubenswrapper[4919]: I0109 13:49:37.222665 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-n9g6d-config-5pbvk" Jan 09 13:49:37 crc kubenswrapper[4919]: I0109 13:49:37.649613 4919 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="ba39e0c2-1804-45a7-9dd1-2c20f229b648" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Jan 09 13:49:37 crc kubenswrapper[4919]: I0109 13:49:37.665935 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-n9g6d-config-5pbvk"] Jan 09 13:49:37 crc kubenswrapper[4919]: I0109 13:49:37.677091 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-n9g6d-config-5pbvk"] Jan 09 13:49:37 crc kubenswrapper[4919]: I0109 13:49:37.883514 4919 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="9b80a84d-c869-407b-b3d2-3be828183ae5" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.100:5671: connect: connection refused" Jan 09 13:49:38 crc kubenswrapper[4919]: I0109 13:49:38.236544 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f55583f6-0518-4977-89a9-e4f12b0eae89","Type":"ContainerStarted","Data":"3bbb37227e23d5f50f7f5a6934d8de3da6868f94d8cbdd4115decaa84c29445a"} Jan 09 13:49:38 crc kubenswrapper[4919]: I0109 13:49:38.759910 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fbef93e-7b75-4c01-9eeb-26daab06a8ea" path="/var/lib/kubelet/pods/0fbef93e-7b75-4c01-9eeb-26daab06a8ea/volumes" Jan 09 13:49:39 crc kubenswrapper[4919]: I0109 13:49:39.251828 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f55583f6-0518-4977-89a9-e4f12b0eae89","Type":"ContainerStarted","Data":"d6f7a8bfd8023b95ec01b320aef4df52b655883464747d09a3e0b3eb751f4a94"} Jan 09 13:49:39 crc kubenswrapper[4919]: I0109 13:49:39.252055 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f55583f6-0518-4977-89a9-e4f12b0eae89","Type":"ContainerStarted","Data":"b889f6d51aebcbca3ada2344e35387591c95d595e8a26bba83fd9090e8536f2a"} Jan 09 13:49:39 crc kubenswrapper[4919]: I0109 13:49:39.252066 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f55583f6-0518-4977-89a9-e4f12b0eae89","Type":"ContainerStarted","Data":"8151ae5480a722fbf8cc6c95c1dad1c48e6a01859110b496dc19e22a79e5925e"} Jan 09 13:49:39 crc kubenswrapper[4919]: I0109 13:49:39.287436 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=39.115214079 podStartE2EDuration="47.287409128s" podCreationTimestamp="2026-01-09 13:48:52 +0000 UTC" firstStartedPulling="2026-01-09 13:49:28.264128726 +0000 UTC m=+1147.811968176" lastFinishedPulling="2026-01-09 13:49:36.436323775 +0000 UTC m=+1155.984163225" observedRunningTime="2026-01-09 13:49:39.280176498 +0000 UTC m=+1158.828015948" watchObservedRunningTime="2026-01-09 13:49:39.287409128 +0000 UTC m=+1158.835248578" Jan 09 13:49:39 crc kubenswrapper[4919]: I0109 13:49:39.681468 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8db84466c-vrffj"] Jan 09 13:49:39 crc kubenswrapper[4919]: E0109 13:49:39.681830 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fbef93e-7b75-4c01-9eeb-26daab06a8ea" containerName="ovn-config" Jan 09 13:49:39 crc kubenswrapper[4919]: I0109 13:49:39.681841 4919 
state_mem.go:107] "Deleted CPUSet assignment" podUID="0fbef93e-7b75-4c01-9eeb-26daab06a8ea" containerName="ovn-config" Jan 09 13:49:39 crc kubenswrapper[4919]: I0109 13:49:39.681995 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fbef93e-7b75-4c01-9eeb-26daab06a8ea" containerName="ovn-config" Jan 09 13:49:39 crc kubenswrapper[4919]: I0109 13:49:39.682964 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8db84466c-vrffj" Jan 09 13:49:39 crc kubenswrapper[4919]: I0109 13:49:39.685547 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 09 13:49:39 crc kubenswrapper[4919]: I0109 13:49:39.701766 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8db84466c-vrffj"] Jan 09 13:49:39 crc kubenswrapper[4919]: I0109 13:49:39.864243 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25ed4749-2919-40f3-a657-04e4b8b0cd84-ovsdbserver-sb\") pod \"dnsmasq-dns-8db84466c-vrffj\" (UID: \"25ed4749-2919-40f3-a657-04e4b8b0cd84\") " pod="openstack/dnsmasq-dns-8db84466c-vrffj" Jan 09 13:49:39 crc kubenswrapper[4919]: I0109 13:49:39.864509 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/25ed4749-2919-40f3-a657-04e4b8b0cd84-dns-swift-storage-0\") pod \"dnsmasq-dns-8db84466c-vrffj\" (UID: \"25ed4749-2919-40f3-a657-04e4b8b0cd84\") " pod="openstack/dnsmasq-dns-8db84466c-vrffj" Jan 09 13:49:39 crc kubenswrapper[4919]: I0109 13:49:39.864597 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25ed4749-2919-40f3-a657-04e4b8b0cd84-dns-svc\") pod \"dnsmasq-dns-8db84466c-vrffj\" (UID: \"25ed4749-2919-40f3-a657-04e4b8b0cd84\") " pod="openstack/dnsmasq-dns-8db84466c-vrffj" Jan 09 13:49:39 crc kubenswrapper[4919]: I0109 13:49:39.864654 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25ed4749-2919-40f3-a657-04e4b8b0cd84-ovsdbserver-nb\") pod \"dnsmasq-dns-8db84466c-vrffj\" (UID: \"25ed4749-2919-40f3-a657-04e4b8b0cd84\") " pod="openstack/dnsmasq-dns-8db84466c-vrffj" Jan 09 13:49:39 crc kubenswrapper[4919]: I0109 13:49:39.864760 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqqf6\" (UniqueName: \"kubernetes.io/projected/25ed4749-2919-40f3-a657-04e4b8b0cd84-kube-api-access-gqqf6\") pod \"dnsmasq-dns-8db84466c-vrffj\" (UID: \"25ed4749-2919-40f3-a657-04e4b8b0cd84\") " pod="openstack/dnsmasq-dns-8db84466c-vrffj" Jan 09 13:49:39 crc kubenswrapper[4919]: I0109 13:49:39.864844 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25ed4749-2919-40f3-a657-04e4b8b0cd84-config\") pod \"dnsmasq-dns-8db84466c-vrffj\" (UID: \"25ed4749-2919-40f3-a657-04e4b8b0cd84\") " pod="openstack/dnsmasq-dns-8db84466c-vrffj" Jan 09 13:49:39 crc kubenswrapper[4919]: I0109 13:49:39.965860 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25ed4749-2919-40f3-a657-04e4b8b0cd84-dns-svc\") pod \"dnsmasq-dns-8db84466c-vrffj\" (UID: 
\"25ed4749-2919-40f3-a657-04e4b8b0cd84\") " pod="openstack/dnsmasq-dns-8db84466c-vrffj" Jan 09 13:49:39 crc kubenswrapper[4919]: I0109 13:49:39.965935 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25ed4749-2919-40f3-a657-04e4b8b0cd84-ovsdbserver-nb\") pod \"dnsmasq-dns-8db84466c-vrffj\" (UID: \"25ed4749-2919-40f3-a657-04e4b8b0cd84\") " pod="openstack/dnsmasq-dns-8db84466c-vrffj" Jan 09 13:49:39 crc kubenswrapper[4919]: I0109 13:49:39.965977 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqqf6\" (UniqueName: \"kubernetes.io/projected/25ed4749-2919-40f3-a657-04e4b8b0cd84-kube-api-access-gqqf6\") pod \"dnsmasq-dns-8db84466c-vrffj\" (UID: \"25ed4749-2919-40f3-a657-04e4b8b0cd84\") " pod="openstack/dnsmasq-dns-8db84466c-vrffj" Jan 09 13:49:39 crc kubenswrapper[4919]: I0109 13:49:39.966008 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25ed4749-2919-40f3-a657-04e4b8b0cd84-config\") pod \"dnsmasq-dns-8db84466c-vrffj\" (UID: \"25ed4749-2919-40f3-a657-04e4b8b0cd84\") " pod="openstack/dnsmasq-dns-8db84466c-vrffj" Jan 09 13:49:39 crc kubenswrapper[4919]: I0109 13:49:39.966030 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25ed4749-2919-40f3-a657-04e4b8b0cd84-ovsdbserver-sb\") pod \"dnsmasq-dns-8db84466c-vrffj\" (UID: \"25ed4749-2919-40f3-a657-04e4b8b0cd84\") " pod="openstack/dnsmasq-dns-8db84466c-vrffj" Jan 09 13:49:39 crc kubenswrapper[4919]: I0109 13:49:39.966102 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/25ed4749-2919-40f3-a657-04e4b8b0cd84-dns-swift-storage-0\") pod \"dnsmasq-dns-8db84466c-vrffj\" (UID: \"25ed4749-2919-40f3-a657-04e4b8b0cd84\") " pod="openstack/dnsmasq-dns-8db84466c-vrffj" Jan 09 13:49:39 crc kubenswrapper[4919]: I0109 13:49:39.967094 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/25ed4749-2919-40f3-a657-04e4b8b0cd84-dns-swift-storage-0\") pod \"dnsmasq-dns-8db84466c-vrffj\" (UID: \"25ed4749-2919-40f3-a657-04e4b8b0cd84\") " pod="openstack/dnsmasq-dns-8db84466c-vrffj" Jan 09 13:49:39 crc kubenswrapper[4919]: I0109 13:49:39.967471 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25ed4749-2919-40f3-a657-04e4b8b0cd84-ovsdbserver-nb\") pod \"dnsmasq-dns-8db84466c-vrffj\" (UID: \"25ed4749-2919-40f3-a657-04e4b8b0cd84\") " pod="openstack/dnsmasq-dns-8db84466c-vrffj" Jan 09 13:49:39 crc kubenswrapper[4919]: I0109 13:49:39.967505 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25ed4749-2919-40f3-a657-04e4b8b0cd84-ovsdbserver-sb\") pod \"dnsmasq-dns-8db84466c-vrffj\" (UID: \"25ed4749-2919-40f3-a657-04e4b8b0cd84\") " pod="openstack/dnsmasq-dns-8db84466c-vrffj" Jan 09 13:49:39 crc kubenswrapper[4919]: I0109 13:49:39.967550 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25ed4749-2919-40f3-a657-04e4b8b0cd84-config\") pod \"dnsmasq-dns-8db84466c-vrffj\" (UID: \"25ed4749-2919-40f3-a657-04e4b8b0cd84\") " pod="openstack/dnsmasq-dns-8db84466c-vrffj" Jan 09 
13:49:39 crc kubenswrapper[4919]: I0109 13:49:39.967799 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25ed4749-2919-40f3-a657-04e4b8b0cd84-dns-svc\") pod \"dnsmasq-dns-8db84466c-vrffj\" (UID: \"25ed4749-2919-40f3-a657-04e4b8b0cd84\") " pod="openstack/dnsmasq-dns-8db84466c-vrffj" Jan 09 13:49:39 crc kubenswrapper[4919]: I0109 13:49:39.996176 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqqf6\" (UniqueName: \"kubernetes.io/projected/25ed4749-2919-40f3-a657-04e4b8b0cd84-kube-api-access-gqqf6\") pod \"dnsmasq-dns-8db84466c-vrffj\" (UID: \"25ed4749-2919-40f3-a657-04e4b8b0cd84\") " pod="openstack/dnsmasq-dns-8db84466c-vrffj" Jan 09 13:49:40 crc kubenswrapper[4919]: I0109 13:49:40.002487 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8db84466c-vrffj" Jan 09 13:49:40 crc kubenswrapper[4919]: I0109 13:49:40.486384 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8db84466c-vrffj"] Jan 09 13:49:41 crc kubenswrapper[4919]: I0109 13:49:41.272866 4919 generic.go:334] "Generic (PLEG): container finished" podID="25ed4749-2919-40f3-a657-04e4b8b0cd84" containerID="c8b9fa4b4f7a0e9b3f58cee458b407a2d96051c3ac1f4524214464f0215ec602" exitCode=0 Jan 09 13:49:41 crc kubenswrapper[4919]: I0109 13:49:41.272929 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8db84466c-vrffj" event={"ID":"25ed4749-2919-40f3-a657-04e4b8b0cd84","Type":"ContainerDied","Data":"c8b9fa4b4f7a0e9b3f58cee458b407a2d96051c3ac1f4524214464f0215ec602"} Jan 09 13:49:41 crc kubenswrapper[4919]: I0109 13:49:41.273454 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8db84466c-vrffj" event={"ID":"25ed4749-2919-40f3-a657-04e4b8b0cd84","Type":"ContainerStarted","Data":"bb9ccc25886262c63e9b18526cea6064f6a88939cf5accd426fb3e30c7965697"} Jan 09 13:49:42 crc kubenswrapper[4919]: I0109 13:49:42.283938 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8db84466c-vrffj" event={"ID":"25ed4749-2919-40f3-a657-04e4b8b0cd84","Type":"ContainerStarted","Data":"ecadf01ef5df33a2077f08ed4448fff95ed133c21cd7cbb4d46eaf8cab7207be"} Jan 09 13:49:42 crc kubenswrapper[4919]: I0109 13:49:42.284311 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8db84466c-vrffj" Jan 09 13:49:42 crc kubenswrapper[4919]: I0109 13:49:42.286640 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-wsjrx" event={"ID":"15490209-86af-4f77-8103-27d097279b7d","Type":"ContainerStarted","Data":"dbd2a2bdc58241cb9be3f331afbd1f293e628991b55d61b9e0aca7e77ad79ad0"} Jan 09 13:49:42 crc kubenswrapper[4919]: I0109 13:49:42.304802 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8db84466c-vrffj" podStartSLOduration=3.3047815050000002 podStartE2EDuration="3.304781505s" podCreationTimestamp="2026-01-09 13:49:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:49:42.302615401 +0000 UTC m=+1161.850454861" watchObservedRunningTime="2026-01-09 13:49:42.304781505 +0000 UTC m=+1161.852620955" Jan 09 13:49:42 crc kubenswrapper[4919]: I0109 13:49:42.334287 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-wsjrx" 
podStartSLOduration=2.162732674 podStartE2EDuration="36.334262688s" podCreationTimestamp="2026-01-09 13:49:06 +0000 UTC" firstStartedPulling="2026-01-09 13:49:07.113369399 +0000 UTC m=+1126.661208849" lastFinishedPulling="2026-01-09 13:49:41.284899413 +0000 UTC m=+1160.832738863" observedRunningTime="2026-01-09 13:49:42.323599323 +0000 UTC m=+1161.871438773" watchObservedRunningTime="2026-01-09 13:49:42.334262688 +0000 UTC m=+1161.882102158" Jan 09 13:49:47 crc kubenswrapper[4919]: I0109 13:49:47.644568 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 09 13:49:47 crc kubenswrapper[4919]: I0109 13:49:47.867399 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:49:47 crc kubenswrapper[4919]: I0109 13:49:47.920015 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-22wxq"] Jan 09 13:49:47 crc kubenswrapper[4919]: I0109 13:49:47.921120 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-22wxq" Jan 09 13:49:47 crc kubenswrapper[4919]: I0109 13:49:47.929028 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-22wxq"] Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.058698 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d4mf\" (UniqueName: \"kubernetes.io/projected/214a0432-7622-45a7-b693-f5aea45623e7-kube-api-access-9d4mf\") pod \"barbican-db-create-22wxq\" (UID: \"214a0432-7622-45a7-b693-f5aea45623e7\") " pod="openstack/barbican-db-create-22wxq" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.058932 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/214a0432-7622-45a7-b693-f5aea45623e7-operator-scripts\") pod \"barbican-db-create-22wxq\" (UID: \"214a0432-7622-45a7-b693-f5aea45623e7\") " pod="openstack/barbican-db-create-22wxq" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.124420 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-sl6nb"] Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.125635 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-sl6nb" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.142575 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-3d0f-account-create-update-bzj6x"] Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.144323 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-3d0f-account-create-update-bzj6x" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.154464 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.159850 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-sl6nb"] Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.160600 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9d4mf\" (UniqueName: \"kubernetes.io/projected/214a0432-7622-45a7-b693-f5aea45623e7-kube-api-access-9d4mf\") pod \"barbican-db-create-22wxq\" (UID: \"214a0432-7622-45a7-b693-f5aea45623e7\") " pod="openstack/barbican-db-create-22wxq" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.160668 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/214a0432-7622-45a7-b693-f5aea45623e7-operator-scripts\") pod \"barbican-db-create-22wxq\" (UID: \"214a0432-7622-45a7-b693-f5aea45623e7\") " pod="openstack/barbican-db-create-22wxq" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.161312 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/214a0432-7622-45a7-b693-f5aea45623e7-operator-scripts\") pod \"barbican-db-create-22wxq\" (UID: \"214a0432-7622-45a7-b693-f5aea45623e7\") " pod="openstack/barbican-db-create-22wxq" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.187541 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-3d0f-account-create-update-bzj6x"] Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.201954 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9d4mf\" (UniqueName: \"kubernetes.io/projected/214a0432-7622-45a7-b693-f5aea45623e7-kube-api-access-9d4mf\") pod \"barbican-db-create-22wxq\" (UID: \"214a0432-7622-45a7-b693-f5aea45623e7\") " pod="openstack/barbican-db-create-22wxq" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.237793 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-22wxq" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.238618 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-c29b-account-create-update-whsc2"] Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.239663 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-c29b-account-create-update-whsc2" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.243877 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.262184 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6gvp\" (UniqueName: \"kubernetes.io/projected/167038ac-1986-4fec-ae8e-98807f212a49-kube-api-access-s6gvp\") pod \"cinder-db-create-sl6nb\" (UID: \"167038ac-1986-4fec-ae8e-98807f212a49\") " pod="openstack/cinder-db-create-sl6nb" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.262253 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/167038ac-1986-4fec-ae8e-98807f212a49-operator-scripts\") pod \"cinder-db-create-sl6nb\" (UID: \"167038ac-1986-4fec-ae8e-98807f212a49\") " pod="openstack/cinder-db-create-sl6nb" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.262328 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d480a45-ff2a-4672-bbf4-05a8a397b34a-operator-scripts\") pod \"barbican-3d0f-account-create-update-bzj6x\" (UID: \"9d480a45-ff2a-4672-bbf4-05a8a397b34a\") " pod="openstack/barbican-3d0f-account-create-update-bzj6x" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.262362 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d59hw\" (UniqueName: \"kubernetes.io/projected/9d480a45-ff2a-4672-bbf4-05a8a397b34a-kube-api-access-d59hw\") pod \"barbican-3d0f-account-create-update-bzj6x\" (UID: \"9d480a45-ff2a-4672-bbf4-05a8a397b34a\") " pod="openstack/barbican-3d0f-account-create-update-bzj6x" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.277496 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-c29b-account-create-update-whsc2"] Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.364419 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6gvp\" (UniqueName: \"kubernetes.io/projected/167038ac-1986-4fec-ae8e-98807f212a49-kube-api-access-s6gvp\") pod \"cinder-db-create-sl6nb\" (UID: \"167038ac-1986-4fec-ae8e-98807f212a49\") " pod="openstack/cinder-db-create-sl6nb" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.364476 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/167038ac-1986-4fec-ae8e-98807f212a49-operator-scripts\") pod \"cinder-db-create-sl6nb\" (UID: \"167038ac-1986-4fec-ae8e-98807f212a49\") " pod="openstack/cinder-db-create-sl6nb" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.364551 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d480a45-ff2a-4672-bbf4-05a8a397b34a-operator-scripts\") pod \"barbican-3d0f-account-create-update-bzj6x\" (UID: \"9d480a45-ff2a-4672-bbf4-05a8a397b34a\") " pod="openstack/barbican-3d0f-account-create-update-bzj6x" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.364579 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbcdk\" (UniqueName: 
\"kubernetes.io/projected/48014bf6-50e6-407c-8fca-bd2949ad791c-kube-api-access-cbcdk\") pod \"cinder-c29b-account-create-update-whsc2\" (UID: \"48014bf6-50e6-407c-8fca-bd2949ad791c\") " pod="openstack/cinder-c29b-account-create-update-whsc2" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.364598 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48014bf6-50e6-407c-8fca-bd2949ad791c-operator-scripts\") pod \"cinder-c29b-account-create-update-whsc2\" (UID: \"48014bf6-50e6-407c-8fca-bd2949ad791c\") " pod="openstack/cinder-c29b-account-create-update-whsc2" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.364628 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d59hw\" (UniqueName: \"kubernetes.io/projected/9d480a45-ff2a-4672-bbf4-05a8a397b34a-kube-api-access-d59hw\") pod \"barbican-3d0f-account-create-update-bzj6x\" (UID: \"9d480a45-ff2a-4672-bbf4-05a8a397b34a\") " pod="openstack/barbican-3d0f-account-create-update-bzj6x" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.365376 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/167038ac-1986-4fec-ae8e-98807f212a49-operator-scripts\") pod \"cinder-db-create-sl6nb\" (UID: \"167038ac-1986-4fec-ae8e-98807f212a49\") " pod="openstack/cinder-db-create-sl6nb" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.365853 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d480a45-ff2a-4672-bbf4-05a8a397b34a-operator-scripts\") pod \"barbican-3d0f-account-create-update-bzj6x\" (UID: \"9d480a45-ff2a-4672-bbf4-05a8a397b34a\") " pod="openstack/barbican-3d0f-account-create-update-bzj6x" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.401890 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6gvp\" (UniqueName: \"kubernetes.io/projected/167038ac-1986-4fec-ae8e-98807f212a49-kube-api-access-s6gvp\") pod \"cinder-db-create-sl6nb\" (UID: \"167038ac-1986-4fec-ae8e-98807f212a49\") " pod="openstack/cinder-db-create-sl6nb" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.402316 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d59hw\" (UniqueName: \"kubernetes.io/projected/9d480a45-ff2a-4672-bbf4-05a8a397b34a-kube-api-access-d59hw\") pod \"barbican-3d0f-account-create-update-bzj6x\" (UID: \"9d480a45-ff2a-4672-bbf4-05a8a397b34a\") " pod="openstack/barbican-3d0f-account-create-update-bzj6x" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.443172 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-sl6nb" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.461423 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-3d0f-account-create-update-bzj6x" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.466176 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbcdk\" (UniqueName: \"kubernetes.io/projected/48014bf6-50e6-407c-8fca-bd2949ad791c-kube-api-access-cbcdk\") pod \"cinder-c29b-account-create-update-whsc2\" (UID: \"48014bf6-50e6-407c-8fca-bd2949ad791c\") " pod="openstack/cinder-c29b-account-create-update-whsc2" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.466252 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48014bf6-50e6-407c-8fca-bd2949ad791c-operator-scripts\") pod \"cinder-c29b-account-create-update-whsc2\" (UID: \"48014bf6-50e6-407c-8fca-bd2949ad791c\") " pod="openstack/cinder-c29b-account-create-update-whsc2" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.467111 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48014bf6-50e6-407c-8fca-bd2949ad791c-operator-scripts\") pod \"cinder-c29b-account-create-update-whsc2\" (UID: \"48014bf6-50e6-407c-8fca-bd2949ad791c\") " pod="openstack/cinder-c29b-account-create-update-whsc2" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.480736 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-kfw6n"] Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.482108 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-kfw6n" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.498850 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbcdk\" (UniqueName: \"kubernetes.io/projected/48014bf6-50e6-407c-8fca-bd2949ad791c-kube-api-access-cbcdk\") pod \"cinder-c29b-account-create-update-whsc2\" (UID: \"48014bf6-50e6-407c-8fca-bd2949ad791c\") " pod="openstack/cinder-c29b-account-create-update-whsc2" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.506333 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-kfw6n"] Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.546594 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-r7hqg"] Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.547792 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-r7hqg" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.553484 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.553803 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.554480 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.554683 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7w5b5" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.556444 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-0d0f-account-create-update-4tkms"] Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.557599 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-0d0f-account-create-update-4tkms" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.561619 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.576059 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-r7hqg"] Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.601277 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-0d0f-account-create-update-4tkms"] Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.615476 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c29b-account-create-update-whsc2" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.669577 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a3e0515-9960-40e4-a938-6166810db59e-operator-scripts\") pod \"neutron-0d0f-account-create-update-4tkms\" (UID: \"8a3e0515-9960-40e4-a938-6166810db59e\") " pod="openstack/neutron-0d0f-account-create-update-4tkms" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.669860 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hg8s\" (UniqueName: \"kubernetes.io/projected/e67b8129-4a9a-4459-b85d-45ae30ad425e-kube-api-access-6hg8s\") pod \"neutron-db-create-kfw6n\" (UID: \"e67b8129-4a9a-4459-b85d-45ae30ad425e\") " pod="openstack/neutron-db-create-kfw6n" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.669892 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfrvh\" (UniqueName: \"kubernetes.io/projected/8a3e0515-9960-40e4-a938-6166810db59e-kube-api-access-nfrvh\") pod \"neutron-0d0f-account-create-update-4tkms\" (UID: \"8a3e0515-9960-40e4-a938-6166810db59e\") " pod="openstack/neutron-0d0f-account-create-update-4tkms" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.669924 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15f289e5-b950-489a-8207-5b340be14c0e-combined-ca-bundle\") pod \"keystone-db-sync-r7hqg\" (UID: \"15f289e5-b950-489a-8207-5b340be14c0e\") " pod="openstack/keystone-db-sync-r7hqg" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.669981 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e67b8129-4a9a-4459-b85d-45ae30ad425e-operator-scripts\") pod \"neutron-db-create-kfw6n\" (UID: \"e67b8129-4a9a-4459-b85d-45ae30ad425e\") " pod="openstack/neutron-db-create-kfw6n" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.670022 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qrfc\" (UniqueName: \"kubernetes.io/projected/15f289e5-b950-489a-8207-5b340be14c0e-kube-api-access-6qrfc\") pod \"keystone-db-sync-r7hqg\" (UID: \"15f289e5-b950-489a-8207-5b340be14c0e\") " pod="openstack/keystone-db-sync-r7hqg" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.670045 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15f289e5-b950-489a-8207-5b340be14c0e-config-data\") pod 
\"keystone-db-sync-r7hqg\" (UID: \"15f289e5-b950-489a-8207-5b340be14c0e\") " pod="openstack/keystone-db-sync-r7hqg" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.772182 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hg8s\" (UniqueName: \"kubernetes.io/projected/e67b8129-4a9a-4459-b85d-45ae30ad425e-kube-api-access-6hg8s\") pod \"neutron-db-create-kfw6n\" (UID: \"e67b8129-4a9a-4459-b85d-45ae30ad425e\") " pod="openstack/neutron-db-create-kfw6n" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.772241 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfrvh\" (UniqueName: \"kubernetes.io/projected/8a3e0515-9960-40e4-a938-6166810db59e-kube-api-access-nfrvh\") pod \"neutron-0d0f-account-create-update-4tkms\" (UID: \"8a3e0515-9960-40e4-a938-6166810db59e\") " pod="openstack/neutron-0d0f-account-create-update-4tkms" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.772273 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15f289e5-b950-489a-8207-5b340be14c0e-combined-ca-bundle\") pod \"keystone-db-sync-r7hqg\" (UID: \"15f289e5-b950-489a-8207-5b340be14c0e\") " pod="openstack/keystone-db-sync-r7hqg" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.772317 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e67b8129-4a9a-4459-b85d-45ae30ad425e-operator-scripts\") pod \"neutron-db-create-kfw6n\" (UID: \"e67b8129-4a9a-4459-b85d-45ae30ad425e\") " pod="openstack/neutron-db-create-kfw6n" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.772348 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qrfc\" (UniqueName: \"kubernetes.io/projected/15f289e5-b950-489a-8207-5b340be14c0e-kube-api-access-6qrfc\") pod \"keystone-db-sync-r7hqg\" (UID: \"15f289e5-b950-489a-8207-5b340be14c0e\") " pod="openstack/keystone-db-sync-r7hqg" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.772376 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15f289e5-b950-489a-8207-5b340be14c0e-config-data\") pod \"keystone-db-sync-r7hqg\" (UID: \"15f289e5-b950-489a-8207-5b340be14c0e\") " pod="openstack/keystone-db-sync-r7hqg" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.772429 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a3e0515-9960-40e4-a938-6166810db59e-operator-scripts\") pod \"neutron-0d0f-account-create-update-4tkms\" (UID: \"8a3e0515-9960-40e4-a938-6166810db59e\") " pod="openstack/neutron-0d0f-account-create-update-4tkms" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.773252 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a3e0515-9960-40e4-a938-6166810db59e-operator-scripts\") pod \"neutron-0d0f-account-create-update-4tkms\" (UID: \"8a3e0515-9960-40e4-a938-6166810db59e\") " pod="openstack/neutron-0d0f-account-create-update-4tkms" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.773324 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e67b8129-4a9a-4459-b85d-45ae30ad425e-operator-scripts\") pod 
\"neutron-db-create-kfw6n\" (UID: \"e67b8129-4a9a-4459-b85d-45ae30ad425e\") " pod="openstack/neutron-db-create-kfw6n" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.778836 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15f289e5-b950-489a-8207-5b340be14c0e-config-data\") pod \"keystone-db-sync-r7hqg\" (UID: \"15f289e5-b950-489a-8207-5b340be14c0e\") " pod="openstack/keystone-db-sync-r7hqg" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.781860 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15f289e5-b950-489a-8207-5b340be14c0e-combined-ca-bundle\") pod \"keystone-db-sync-r7hqg\" (UID: \"15f289e5-b950-489a-8207-5b340be14c0e\") " pod="openstack/keystone-db-sync-r7hqg" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.794463 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfrvh\" (UniqueName: \"kubernetes.io/projected/8a3e0515-9960-40e4-a938-6166810db59e-kube-api-access-nfrvh\") pod \"neutron-0d0f-account-create-update-4tkms\" (UID: \"8a3e0515-9960-40e4-a938-6166810db59e\") " pod="openstack/neutron-0d0f-account-create-update-4tkms" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.795835 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qrfc\" (UniqueName: \"kubernetes.io/projected/15f289e5-b950-489a-8207-5b340be14c0e-kube-api-access-6qrfc\") pod \"keystone-db-sync-r7hqg\" (UID: \"15f289e5-b950-489a-8207-5b340be14c0e\") " pod="openstack/keystone-db-sync-r7hqg" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.804186 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hg8s\" (UniqueName: \"kubernetes.io/projected/e67b8129-4a9a-4459-b85d-45ae30ad425e-kube-api-access-6hg8s\") pod \"neutron-db-create-kfw6n\" (UID: \"e67b8129-4a9a-4459-b85d-45ae30ad425e\") " pod="openstack/neutron-db-create-kfw6n" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.865925 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-kfw6n" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.874599 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-r7hqg" Jan 09 13:49:48 crc kubenswrapper[4919]: I0109 13:49:48.888196 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-0d0f-account-create-update-4tkms" Jan 09 13:49:49 crc kubenswrapper[4919]: I0109 13:49:49.089394 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-sl6nb"] Jan 09 13:49:49 crc kubenswrapper[4919]: I0109 13:49:49.150065 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-22wxq"] Jan 09 13:49:49 crc kubenswrapper[4919]: W0109 13:49:49.184465 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod214a0432_7622_45a7_b693_f5aea45623e7.slice/crio-935b20b6af477310e56200b08ce3d5e7a86cb3977ff659ad589b2609072c3be5 WatchSource:0}: Error finding container 935b20b6af477310e56200b08ce3d5e7a86cb3977ff659ad589b2609072c3be5: Status 404 returned error can't find the container with id 935b20b6af477310e56200b08ce3d5e7a86cb3977ff659ad589b2609072c3be5 Jan 09 13:49:49 crc kubenswrapper[4919]: I0109 13:49:49.196634 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-c29b-account-create-update-whsc2"] Jan 09 13:49:49 crc kubenswrapper[4919]: I0109 13:49:49.280728 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-3d0f-account-create-update-bzj6x"] Jan 09 13:49:49 crc kubenswrapper[4919]: I0109 13:49:49.354176 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3d0f-account-create-update-bzj6x" event={"ID":"9d480a45-ff2a-4672-bbf4-05a8a397b34a","Type":"ContainerStarted","Data":"cb0e0b3893d63d7c64f1bdc79d7c3ecc41d9c7062d05119558a99a914a0479b3"} Jan 09 13:49:49 crc kubenswrapper[4919]: I0109 13:49:49.357147 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c29b-account-create-update-whsc2" event={"ID":"48014bf6-50e6-407c-8fca-bd2949ad791c","Type":"ContainerStarted","Data":"b1c430556fec699713df29b86634ae0e908a12750c5ba1ab804715798d6b4911"} Jan 09 13:49:49 crc kubenswrapper[4919]: I0109 13:49:49.358625 4919 generic.go:334] "Generic (PLEG): container finished" podID="15490209-86af-4f77-8103-27d097279b7d" containerID="dbd2a2bdc58241cb9be3f331afbd1f293e628991b55d61b9e0aca7e77ad79ad0" exitCode=0 Jan 09 13:49:49 crc kubenswrapper[4919]: I0109 13:49:49.358676 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-wsjrx" event={"ID":"15490209-86af-4f77-8103-27d097279b7d","Type":"ContainerDied","Data":"dbd2a2bdc58241cb9be3f331afbd1f293e628991b55d61b9e0aca7e77ad79ad0"} Jan 09 13:49:49 crc kubenswrapper[4919]: I0109 13:49:49.365166 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-22wxq" event={"ID":"214a0432-7622-45a7-b693-f5aea45623e7","Type":"ContainerStarted","Data":"935b20b6af477310e56200b08ce3d5e7a86cb3977ff659ad589b2609072c3be5"} Jan 09 13:49:49 crc kubenswrapper[4919]: I0109 13:49:49.379789 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-sl6nb" event={"ID":"167038ac-1986-4fec-ae8e-98807f212a49","Type":"ContainerStarted","Data":"c7dba5429fbfebf34291fe84dbb380bd81ee85ba219006eb8fc87950cceba142"} Jan 09 13:49:49 crc kubenswrapper[4919]: I0109 13:49:49.504040 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-0d0f-account-create-update-4tkms"] Jan 09 13:49:49 crc kubenswrapper[4919]: I0109 13:49:49.520119 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-r7hqg"] Jan 09 13:49:49 crc kubenswrapper[4919]: I0109 13:49:49.607998 4919 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-kfw6n"] Jan 09 13:49:49 crc kubenswrapper[4919]: W0109 13:49:49.655008 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode67b8129_4a9a_4459_b85d_45ae30ad425e.slice/crio-82629ff2c25daa0dc00690f13adf4cff26e8cd1ae06da39261bf3cca08a3b7ee WatchSource:0}: Error finding container 82629ff2c25daa0dc00690f13adf4cff26e8cd1ae06da39261bf3cca08a3b7ee: Status 404 returned error can't find the container with id 82629ff2c25daa0dc00690f13adf4cff26e8cd1ae06da39261bf3cca08a3b7ee Jan 09 13:49:50 crc kubenswrapper[4919]: I0109 13:49:50.004468 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8db84466c-vrffj" Jan 09 13:49:50 crc kubenswrapper[4919]: I0109 13:49:50.087743 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-ptnj7"] Jan 09 13:49:50 crc kubenswrapper[4919]: I0109 13:49:50.087999 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67fdf7998c-ptnj7" podUID="34a3604c-a8d7-4927-af88-a99eef3393fd" containerName="dnsmasq-dns" containerID="cri-o://f3efcce0647688716c6cd941ec148e4d290c12981d2a1505de61c5cfc33c840b" gracePeriod=10 Jan 09 13:49:50 crc kubenswrapper[4919]: I0109 13:49:50.389679 4919 generic.go:334] "Generic (PLEG): container finished" podID="9d480a45-ff2a-4672-bbf4-05a8a397b34a" containerID="58cabe75a8d1fd1bd07963b8759e6b024bb1922ce4f4d49a6bf51313bb19676d" exitCode=0 Jan 09 13:49:50 crc kubenswrapper[4919]: I0109 13:49:50.389980 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3d0f-account-create-update-bzj6x" event={"ID":"9d480a45-ff2a-4672-bbf4-05a8a397b34a","Type":"ContainerDied","Data":"58cabe75a8d1fd1bd07963b8759e6b024bb1922ce4f4d49a6bf51313bb19676d"} Jan 09 13:49:50 crc kubenswrapper[4919]: I0109 13:49:50.392296 4919 generic.go:334] "Generic (PLEG): container finished" podID="34a3604c-a8d7-4927-af88-a99eef3393fd" containerID="f3efcce0647688716c6cd941ec148e4d290c12981d2a1505de61c5cfc33c840b" exitCode=0 Jan 09 13:49:50 crc kubenswrapper[4919]: I0109 13:49:50.392344 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-ptnj7" event={"ID":"34a3604c-a8d7-4927-af88-a99eef3393fd","Type":"ContainerDied","Data":"f3efcce0647688716c6cd941ec148e4d290c12981d2a1505de61c5cfc33c840b"} Jan 09 13:49:50 crc kubenswrapper[4919]: I0109 13:49:50.394480 4919 generic.go:334] "Generic (PLEG): container finished" podID="48014bf6-50e6-407c-8fca-bd2949ad791c" containerID="b0fdb1ebe025da04446a4bb7b5505bb79f5bd57e91d575bbd6b63cded83049e8" exitCode=0 Jan 09 13:49:50 crc kubenswrapper[4919]: I0109 13:49:50.394550 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c29b-account-create-update-whsc2" event={"ID":"48014bf6-50e6-407c-8fca-bd2949ad791c","Type":"ContainerDied","Data":"b0fdb1ebe025da04446a4bb7b5505bb79f5bd57e91d575bbd6b63cded83049e8"} Jan 09 13:49:50 crc kubenswrapper[4919]: I0109 13:49:50.395751 4919 generic.go:334] "Generic (PLEG): container finished" podID="e67b8129-4a9a-4459-b85d-45ae30ad425e" containerID="8ddc5dfe271364c57400e27600a828c70e8cfcd4e51a2e2cd5364d9531896ab2" exitCode=0 Jan 09 13:49:50 crc kubenswrapper[4919]: I0109 13:49:50.395787 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-kfw6n" 
event={"ID":"e67b8129-4a9a-4459-b85d-45ae30ad425e","Type":"ContainerDied","Data":"8ddc5dfe271364c57400e27600a828c70e8cfcd4e51a2e2cd5364d9531896ab2"} Jan 09 13:49:50 crc kubenswrapper[4919]: I0109 13:49:50.395803 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-kfw6n" event={"ID":"e67b8129-4a9a-4459-b85d-45ae30ad425e","Type":"ContainerStarted","Data":"82629ff2c25daa0dc00690f13adf4cff26e8cd1ae06da39261bf3cca08a3b7ee"} Jan 09 13:49:50 crc kubenswrapper[4919]: I0109 13:49:50.396711 4919 generic.go:334] "Generic (PLEG): container finished" podID="8a3e0515-9960-40e4-a938-6166810db59e" containerID="f5a76717a74b6910494ded40567c72acf3a245e327888a7d6e4c26c180117995" exitCode=0 Jan 09 13:49:50 crc kubenswrapper[4919]: I0109 13:49:50.396746 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0d0f-account-create-update-4tkms" event={"ID":"8a3e0515-9960-40e4-a938-6166810db59e","Type":"ContainerDied","Data":"f5a76717a74b6910494ded40567c72acf3a245e327888a7d6e4c26c180117995"} Jan 09 13:49:50 crc kubenswrapper[4919]: I0109 13:49:50.396762 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0d0f-account-create-update-4tkms" event={"ID":"8a3e0515-9960-40e4-a938-6166810db59e","Type":"ContainerStarted","Data":"58e93a8fbd0576eb054e2f13c1db8567dd45b3201b1afd83eb616eaa987bdc19"} Jan 09 13:49:50 crc kubenswrapper[4919]: I0109 13:49:50.405956 4919 generic.go:334] "Generic (PLEG): container finished" podID="214a0432-7622-45a7-b693-f5aea45623e7" containerID="8c19d822b0f883e5ca46e5d101df09d756d76d8ac4747c87e95d6d872ee8302a" exitCode=0 Jan 09 13:49:50 crc kubenswrapper[4919]: I0109 13:49:50.406032 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-22wxq" event={"ID":"214a0432-7622-45a7-b693-f5aea45623e7","Type":"ContainerDied","Data":"8c19d822b0f883e5ca46e5d101df09d756d76d8ac4747c87e95d6d872ee8302a"} Jan 09 13:49:50 crc kubenswrapper[4919]: I0109 13:49:50.407730 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-r7hqg" event={"ID":"15f289e5-b950-489a-8207-5b340be14c0e","Type":"ContainerStarted","Data":"c5e38d04c225aedf2f85967a97c12c589376e2eb414b1b0967d03fa5ef481e5e"} Jan 09 13:49:50 crc kubenswrapper[4919]: I0109 13:49:50.415134 4919 generic.go:334] "Generic (PLEG): container finished" podID="167038ac-1986-4fec-ae8e-98807f212a49" containerID="32e59b2af8a5c14138c785f272069d098c006143a8cfb7c0136747704e10696b" exitCode=0 Jan 09 13:49:50 crc kubenswrapper[4919]: I0109 13:49:50.415325 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-sl6nb" event={"ID":"167038ac-1986-4fec-ae8e-98807f212a49","Type":"ContainerDied","Data":"32e59b2af8a5c14138c785f272069d098c006143a8cfb7c0136747704e10696b"} Jan 09 13:49:50 crc kubenswrapper[4919]: I0109 13:49:50.625989 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67fdf7998c-ptnj7" Jan 09 13:49:50 crc kubenswrapper[4919]: I0109 13:49:50.718148 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/34a3604c-a8d7-4927-af88-a99eef3393fd-dns-svc\") pod \"34a3604c-a8d7-4927-af88-a99eef3393fd\" (UID: \"34a3604c-a8d7-4927-af88-a99eef3393fd\") " Jan 09 13:49:50 crc kubenswrapper[4919]: I0109 13:49:50.718223 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5g4tp\" (UniqueName: \"kubernetes.io/projected/34a3604c-a8d7-4927-af88-a99eef3393fd-kube-api-access-5g4tp\") pod \"34a3604c-a8d7-4927-af88-a99eef3393fd\" (UID: \"34a3604c-a8d7-4927-af88-a99eef3393fd\") " Jan 09 13:49:50 crc kubenswrapper[4919]: I0109 13:49:50.718315 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/34a3604c-a8d7-4927-af88-a99eef3393fd-ovsdbserver-sb\") pod \"34a3604c-a8d7-4927-af88-a99eef3393fd\" (UID: \"34a3604c-a8d7-4927-af88-a99eef3393fd\") " Jan 09 13:49:50 crc kubenswrapper[4919]: I0109 13:49:50.718373 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/34a3604c-a8d7-4927-af88-a99eef3393fd-ovsdbserver-nb\") pod \"34a3604c-a8d7-4927-af88-a99eef3393fd\" (UID: \"34a3604c-a8d7-4927-af88-a99eef3393fd\") " Jan 09 13:49:50 crc kubenswrapper[4919]: I0109 13:49:50.718444 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34a3604c-a8d7-4927-af88-a99eef3393fd-config\") pod \"34a3604c-a8d7-4927-af88-a99eef3393fd\" (UID: \"34a3604c-a8d7-4927-af88-a99eef3393fd\") " Jan 09 13:49:50 crc kubenswrapper[4919]: I0109 13:49:50.727142 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34a3604c-a8d7-4927-af88-a99eef3393fd-kube-api-access-5g4tp" (OuterVolumeSpecName: "kube-api-access-5g4tp") pod "34a3604c-a8d7-4927-af88-a99eef3393fd" (UID: "34a3604c-a8d7-4927-af88-a99eef3393fd"). InnerVolumeSpecName "kube-api-access-5g4tp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:49:50 crc kubenswrapper[4919]: I0109 13:49:50.788482 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34a3604c-a8d7-4927-af88-a99eef3393fd-config" (OuterVolumeSpecName: "config") pod "34a3604c-a8d7-4927-af88-a99eef3393fd" (UID: "34a3604c-a8d7-4927-af88-a99eef3393fd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:49:50 crc kubenswrapper[4919]: I0109 13:49:50.789008 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34a3604c-a8d7-4927-af88-a99eef3393fd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "34a3604c-a8d7-4927-af88-a99eef3393fd" (UID: "34a3604c-a8d7-4927-af88-a99eef3393fd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:49:50 crc kubenswrapper[4919]: I0109 13:49:50.794774 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34a3604c-a8d7-4927-af88-a99eef3393fd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "34a3604c-a8d7-4927-af88-a99eef3393fd" (UID: "34a3604c-a8d7-4927-af88-a99eef3393fd"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:49:50 crc kubenswrapper[4919]: I0109 13:49:50.820539 4919 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/34a3604c-a8d7-4927-af88-a99eef3393fd-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 09 13:49:50 crc kubenswrapper[4919]: I0109 13:49:50.820580 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5g4tp\" (UniqueName: \"kubernetes.io/projected/34a3604c-a8d7-4927-af88-a99eef3393fd-kube-api-access-5g4tp\") on node \"crc\" DevicePath \"\"" Jan 09 13:49:50 crc kubenswrapper[4919]: I0109 13:49:50.820593 4919 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/34a3604c-a8d7-4927-af88-a99eef3393fd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 09 13:49:50 crc kubenswrapper[4919]: I0109 13:49:50.820610 4919 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34a3604c-a8d7-4927-af88-a99eef3393fd-config\") on node \"crc\" DevicePath \"\"" Jan 09 13:49:50 crc kubenswrapper[4919]: I0109 13:49:50.826389 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34a3604c-a8d7-4927-af88-a99eef3393fd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "34a3604c-a8d7-4927-af88-a99eef3393fd" (UID: "34a3604c-a8d7-4927-af88-a99eef3393fd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:49:50 crc kubenswrapper[4919]: I0109 13:49:50.862804 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-wsjrx" Jan 09 13:49:50 crc kubenswrapper[4919]: I0109 13:49:50.921770 4919 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/34a3604c-a8d7-4927-af88-a99eef3393fd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 09 13:49:51 crc kubenswrapper[4919]: I0109 13:49:51.022817 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpmnj\" (UniqueName: \"kubernetes.io/projected/15490209-86af-4f77-8103-27d097279b7d-kube-api-access-rpmnj\") pod \"15490209-86af-4f77-8103-27d097279b7d\" (UID: \"15490209-86af-4f77-8103-27d097279b7d\") " Jan 09 13:49:51 crc kubenswrapper[4919]: I0109 13:49:51.022876 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/15490209-86af-4f77-8103-27d097279b7d-db-sync-config-data\") pod \"15490209-86af-4f77-8103-27d097279b7d\" (UID: \"15490209-86af-4f77-8103-27d097279b7d\") " Jan 09 13:49:51 crc kubenswrapper[4919]: I0109 13:49:51.022918 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15490209-86af-4f77-8103-27d097279b7d-config-data\") pod \"15490209-86af-4f77-8103-27d097279b7d\" (UID: \"15490209-86af-4f77-8103-27d097279b7d\") " Jan 09 13:49:51 crc kubenswrapper[4919]: I0109 13:49:51.023022 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15490209-86af-4f77-8103-27d097279b7d-combined-ca-bundle\") pod \"15490209-86af-4f77-8103-27d097279b7d\" (UID: \"15490209-86af-4f77-8103-27d097279b7d\") " Jan 09 13:49:51 crc kubenswrapper[4919]: I0109 13:49:51.027397 4919 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15490209-86af-4f77-8103-27d097279b7d-kube-api-access-rpmnj" (OuterVolumeSpecName: "kube-api-access-rpmnj") pod "15490209-86af-4f77-8103-27d097279b7d" (UID: "15490209-86af-4f77-8103-27d097279b7d"). InnerVolumeSpecName "kube-api-access-rpmnj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:49:51 crc kubenswrapper[4919]: I0109 13:49:51.027398 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15490209-86af-4f77-8103-27d097279b7d-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "15490209-86af-4f77-8103-27d097279b7d" (UID: "15490209-86af-4f77-8103-27d097279b7d"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:49:51 crc kubenswrapper[4919]: I0109 13:49:51.045256 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15490209-86af-4f77-8103-27d097279b7d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "15490209-86af-4f77-8103-27d097279b7d" (UID: "15490209-86af-4f77-8103-27d097279b7d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:49:51 crc kubenswrapper[4919]: I0109 13:49:51.062735 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15490209-86af-4f77-8103-27d097279b7d-config-data" (OuterVolumeSpecName: "config-data") pod "15490209-86af-4f77-8103-27d097279b7d" (UID: "15490209-86af-4f77-8103-27d097279b7d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:49:51 crc kubenswrapper[4919]: I0109 13:49:51.128767 4919 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15490209-86af-4f77-8103-27d097279b7d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 13:49:51 crc kubenswrapper[4919]: I0109 13:49:51.128821 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpmnj\" (UniqueName: \"kubernetes.io/projected/15490209-86af-4f77-8103-27d097279b7d-kube-api-access-rpmnj\") on node \"crc\" DevicePath \"\"" Jan 09 13:49:51 crc kubenswrapper[4919]: I0109 13:49:51.128840 4919 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/15490209-86af-4f77-8103-27d097279b7d-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 13:49:51 crc kubenswrapper[4919]: I0109 13:49:51.128858 4919 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15490209-86af-4f77-8103-27d097279b7d-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 13:49:51 crc kubenswrapper[4919]: I0109 13:49:51.247112 4919 patch_prober.go:28] interesting pod/machine-config-daemon-9m5lv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 13:49:51 crc kubenswrapper[4919]: I0109 13:49:51.247165 4919 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 13:49:51 crc 
kubenswrapper[4919]: I0109 13:49:51.429886 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-wsjrx" event={"ID":"15490209-86af-4f77-8103-27d097279b7d","Type":"ContainerDied","Data":"ff0ea4e9fde8557155a1b211da65fb8be66cbf84d15a378962d6bf4fdd200ec2"} Jan 09 13:49:51 crc kubenswrapper[4919]: I0109 13:49:51.430269 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff0ea4e9fde8557155a1b211da65fb8be66cbf84d15a378962d6bf4fdd200ec2" Jan 09 13:49:51 crc kubenswrapper[4919]: I0109 13:49:51.429897 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-wsjrx" Jan 09 13:49:51 crc kubenswrapper[4919]: I0109 13:49:51.442949 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-ptnj7" event={"ID":"34a3604c-a8d7-4927-af88-a99eef3393fd","Type":"ContainerDied","Data":"7e0d520e1c06fb046e85e26addd85d4194290f857f06ccdcd0bff81d46f14ad4"} Jan 09 13:49:51 crc kubenswrapper[4919]: I0109 13:49:51.442997 4919 scope.go:117] "RemoveContainer" containerID="f3efcce0647688716c6cd941ec148e4d290c12981d2a1505de61c5cfc33c840b" Jan 09 13:49:51 crc kubenswrapper[4919]: I0109 13:49:51.443783 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67fdf7998c-ptnj7" Jan 09 13:49:51 crc kubenswrapper[4919]: I0109 13:49:51.469174 4919 scope.go:117] "RemoveContainer" containerID="7b68d770aeba345a977e00542b5c3048b272132a8375f0a567002a93a75a06bf" Jan 09 13:49:51 crc kubenswrapper[4919]: I0109 13:49:51.496903 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-ptnj7"] Jan 09 13:49:51 crc kubenswrapper[4919]: I0109 13:49:51.505730 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-ptnj7"] Jan 09 13:49:52 crc kubenswrapper[4919]: I0109 13:49:52.130616 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74dfc89d77-cdrjp"] Jan 09 13:49:52 crc kubenswrapper[4919]: E0109 13:49:52.131595 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15490209-86af-4f77-8103-27d097279b7d" containerName="glance-db-sync" Jan 09 13:49:52 crc kubenswrapper[4919]: I0109 13:49:52.131616 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="15490209-86af-4f77-8103-27d097279b7d" containerName="glance-db-sync" Jan 09 13:49:52 crc kubenswrapper[4919]: E0109 13:49:52.131646 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34a3604c-a8d7-4927-af88-a99eef3393fd" containerName="dnsmasq-dns" Jan 09 13:49:52 crc kubenswrapper[4919]: I0109 13:49:52.131654 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="34a3604c-a8d7-4927-af88-a99eef3393fd" containerName="dnsmasq-dns" Jan 09 13:49:52 crc kubenswrapper[4919]: E0109 13:49:52.131701 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34a3604c-a8d7-4927-af88-a99eef3393fd" containerName="init" Jan 09 13:49:52 crc kubenswrapper[4919]: I0109 13:49:52.131708 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="34a3604c-a8d7-4927-af88-a99eef3393fd" containerName="init" Jan 09 13:49:52 crc kubenswrapper[4919]: I0109 13:49:52.132237 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="15490209-86af-4f77-8103-27d097279b7d" containerName="glance-db-sync" Jan 09 13:49:52 crc kubenswrapper[4919]: I0109 13:49:52.132261 4919 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="34a3604c-a8d7-4927-af88-a99eef3393fd" containerName="dnsmasq-dns" Jan 09 13:49:52 crc kubenswrapper[4919]: I0109 13:49:52.134244 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dfc89d77-cdrjp" Jan 09 13:49:52 crc kubenswrapper[4919]: I0109 13:49:52.149748 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74dfc89d77-cdrjp"] Jan 09 13:49:52 crc kubenswrapper[4919]: I0109 13:49:52.301193 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de460845-14e7-4fa3-bc01-4bc4a40b18df-dns-svc\") pod \"dnsmasq-dns-74dfc89d77-cdrjp\" (UID: \"de460845-14e7-4fa3-bc01-4bc4a40b18df\") " pod="openstack/dnsmasq-dns-74dfc89d77-cdrjp" Jan 09 13:49:52 crc kubenswrapper[4919]: I0109 13:49:52.301576 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de460845-14e7-4fa3-bc01-4bc4a40b18df-ovsdbserver-sb\") pod \"dnsmasq-dns-74dfc89d77-cdrjp\" (UID: \"de460845-14e7-4fa3-bc01-4bc4a40b18df\") " pod="openstack/dnsmasq-dns-74dfc89d77-cdrjp" Jan 09 13:49:52 crc kubenswrapper[4919]: I0109 13:49:52.301625 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/de460845-14e7-4fa3-bc01-4bc4a40b18df-dns-swift-storage-0\") pod \"dnsmasq-dns-74dfc89d77-cdrjp\" (UID: \"de460845-14e7-4fa3-bc01-4bc4a40b18df\") " pod="openstack/dnsmasq-dns-74dfc89d77-cdrjp" Jan 09 13:49:52 crc kubenswrapper[4919]: I0109 13:49:52.301655 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de460845-14e7-4fa3-bc01-4bc4a40b18df-ovsdbserver-nb\") pod \"dnsmasq-dns-74dfc89d77-cdrjp\" (UID: \"de460845-14e7-4fa3-bc01-4bc4a40b18df\") " pod="openstack/dnsmasq-dns-74dfc89d77-cdrjp" Jan 09 13:49:52 crc kubenswrapper[4919]: I0109 13:49:52.301700 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzkwk\" (UniqueName: \"kubernetes.io/projected/de460845-14e7-4fa3-bc01-4bc4a40b18df-kube-api-access-dzkwk\") pod \"dnsmasq-dns-74dfc89d77-cdrjp\" (UID: \"de460845-14e7-4fa3-bc01-4bc4a40b18df\") " pod="openstack/dnsmasq-dns-74dfc89d77-cdrjp" Jan 09 13:49:52 crc kubenswrapper[4919]: I0109 13:49:52.301727 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de460845-14e7-4fa3-bc01-4bc4a40b18df-config\") pod \"dnsmasq-dns-74dfc89d77-cdrjp\" (UID: \"de460845-14e7-4fa3-bc01-4bc4a40b18df\") " pod="openstack/dnsmasq-dns-74dfc89d77-cdrjp" Jan 09 13:49:52 crc kubenswrapper[4919]: I0109 13:49:52.404710 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/de460845-14e7-4fa3-bc01-4bc4a40b18df-dns-swift-storage-0\") pod \"dnsmasq-dns-74dfc89d77-cdrjp\" (UID: \"de460845-14e7-4fa3-bc01-4bc4a40b18df\") " pod="openstack/dnsmasq-dns-74dfc89d77-cdrjp" Jan 09 13:49:52 crc kubenswrapper[4919]: I0109 13:49:52.404837 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de460845-14e7-4fa3-bc01-4bc4a40b18df-ovsdbserver-nb\") pod 
\"dnsmasq-dns-74dfc89d77-cdrjp\" (UID: \"de460845-14e7-4fa3-bc01-4bc4a40b18df\") " pod="openstack/dnsmasq-dns-74dfc89d77-cdrjp" Jan 09 13:49:52 crc kubenswrapper[4919]: I0109 13:49:52.404915 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzkwk\" (UniqueName: \"kubernetes.io/projected/de460845-14e7-4fa3-bc01-4bc4a40b18df-kube-api-access-dzkwk\") pod \"dnsmasq-dns-74dfc89d77-cdrjp\" (UID: \"de460845-14e7-4fa3-bc01-4bc4a40b18df\") " pod="openstack/dnsmasq-dns-74dfc89d77-cdrjp" Jan 09 13:49:52 crc kubenswrapper[4919]: I0109 13:49:52.404953 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de460845-14e7-4fa3-bc01-4bc4a40b18df-config\") pod \"dnsmasq-dns-74dfc89d77-cdrjp\" (UID: \"de460845-14e7-4fa3-bc01-4bc4a40b18df\") " pod="openstack/dnsmasq-dns-74dfc89d77-cdrjp" Jan 09 13:49:52 crc kubenswrapper[4919]: I0109 13:49:52.405018 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de460845-14e7-4fa3-bc01-4bc4a40b18df-dns-svc\") pod \"dnsmasq-dns-74dfc89d77-cdrjp\" (UID: \"de460845-14e7-4fa3-bc01-4bc4a40b18df\") " pod="openstack/dnsmasq-dns-74dfc89d77-cdrjp" Jan 09 13:49:52 crc kubenswrapper[4919]: I0109 13:49:52.405060 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de460845-14e7-4fa3-bc01-4bc4a40b18df-ovsdbserver-sb\") pod \"dnsmasq-dns-74dfc89d77-cdrjp\" (UID: \"de460845-14e7-4fa3-bc01-4bc4a40b18df\") " pod="openstack/dnsmasq-dns-74dfc89d77-cdrjp" Jan 09 13:49:52 crc kubenswrapper[4919]: I0109 13:49:52.405773 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/de460845-14e7-4fa3-bc01-4bc4a40b18df-dns-swift-storage-0\") pod \"dnsmasq-dns-74dfc89d77-cdrjp\" (UID: \"de460845-14e7-4fa3-bc01-4bc4a40b18df\") " pod="openstack/dnsmasq-dns-74dfc89d77-cdrjp" Jan 09 13:49:52 crc kubenswrapper[4919]: I0109 13:49:52.406397 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de460845-14e7-4fa3-bc01-4bc4a40b18df-ovsdbserver-nb\") pod \"dnsmasq-dns-74dfc89d77-cdrjp\" (UID: \"de460845-14e7-4fa3-bc01-4bc4a40b18df\") " pod="openstack/dnsmasq-dns-74dfc89d77-cdrjp" Jan 09 13:49:52 crc kubenswrapper[4919]: I0109 13:49:52.408290 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de460845-14e7-4fa3-bc01-4bc4a40b18df-config\") pod \"dnsmasq-dns-74dfc89d77-cdrjp\" (UID: \"de460845-14e7-4fa3-bc01-4bc4a40b18df\") " pod="openstack/dnsmasq-dns-74dfc89d77-cdrjp" Jan 09 13:49:52 crc kubenswrapper[4919]: I0109 13:49:52.408360 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de460845-14e7-4fa3-bc01-4bc4a40b18df-ovsdbserver-sb\") pod \"dnsmasq-dns-74dfc89d77-cdrjp\" (UID: \"de460845-14e7-4fa3-bc01-4bc4a40b18df\") " pod="openstack/dnsmasq-dns-74dfc89d77-cdrjp" Jan 09 13:49:52 crc kubenswrapper[4919]: I0109 13:49:52.408412 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de460845-14e7-4fa3-bc01-4bc4a40b18df-dns-svc\") pod \"dnsmasq-dns-74dfc89d77-cdrjp\" (UID: \"de460845-14e7-4fa3-bc01-4bc4a40b18df\") " pod="openstack/dnsmasq-dns-74dfc89d77-cdrjp" 
Jan 09 13:49:52 crc kubenswrapper[4919]: I0109 13:49:52.436538 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzkwk\" (UniqueName: \"kubernetes.io/projected/de460845-14e7-4fa3-bc01-4bc4a40b18df-kube-api-access-dzkwk\") pod \"dnsmasq-dns-74dfc89d77-cdrjp\" (UID: \"de460845-14e7-4fa3-bc01-4bc4a40b18df\") " pod="openstack/dnsmasq-dns-74dfc89d77-cdrjp" Jan 09 13:49:52 crc kubenswrapper[4919]: I0109 13:49:52.480695 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-sl6nb" event={"ID":"167038ac-1986-4fec-ae8e-98807f212a49","Type":"ContainerDied","Data":"c7dba5429fbfebf34291fe84dbb380bd81ee85ba219006eb8fc87950cceba142"} Jan 09 13:49:52 crc kubenswrapper[4919]: I0109 13:49:52.480749 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7dba5429fbfebf34291fe84dbb380bd81ee85ba219006eb8fc87950cceba142" Jan 09 13:49:52 crc kubenswrapper[4919]: I0109 13:49:52.528321 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-sl6nb" Jan 09 13:49:52 crc kubenswrapper[4919]: I0109 13:49:52.546406 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dfc89d77-cdrjp" Jan 09 13:49:52 crc kubenswrapper[4919]: I0109 13:49:52.701732 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c29b-account-create-update-whsc2" Jan 09 13:49:52 crc kubenswrapper[4919]: I0109 13:49:52.707737 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-0d0f-account-create-update-4tkms" Jan 09 13:49:52 crc kubenswrapper[4919]: I0109 13:49:52.718393 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/167038ac-1986-4fec-ae8e-98807f212a49-operator-scripts\") pod \"167038ac-1986-4fec-ae8e-98807f212a49\" (UID: \"167038ac-1986-4fec-ae8e-98807f212a49\") " Jan 09 13:49:52 crc kubenswrapper[4919]: I0109 13:49:52.718470 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6gvp\" (UniqueName: \"kubernetes.io/projected/167038ac-1986-4fec-ae8e-98807f212a49-kube-api-access-s6gvp\") pod \"167038ac-1986-4fec-ae8e-98807f212a49\" (UID: \"167038ac-1986-4fec-ae8e-98807f212a49\") " Jan 09 13:49:52 crc kubenswrapper[4919]: I0109 13:49:52.719594 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/167038ac-1986-4fec-ae8e-98807f212a49-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "167038ac-1986-4fec-ae8e-98807f212a49" (UID: "167038ac-1986-4fec-ae8e-98807f212a49"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:49:52 crc kubenswrapper[4919]: I0109 13:49:52.731990 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/167038ac-1986-4fec-ae8e-98807f212a49-kube-api-access-s6gvp" (OuterVolumeSpecName: "kube-api-access-s6gvp") pod "167038ac-1986-4fec-ae8e-98807f212a49" (UID: "167038ac-1986-4fec-ae8e-98807f212a49"). InnerVolumeSpecName "kube-api-access-s6gvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:49:52 crc kubenswrapper[4919]: I0109 13:49:52.738077 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-3d0f-account-create-update-bzj6x" Jan 09 13:49:52 crc kubenswrapper[4919]: I0109 13:49:52.777397 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-kfw6n" Jan 09 13:49:52 crc kubenswrapper[4919]: I0109 13:49:52.777766 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-22wxq" Jan 09 13:49:54 crc kubenswrapper[4919]: I0109 13:49:52.800377 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34a3604c-a8d7-4927-af88-a99eef3393fd" path="/var/lib/kubelet/pods/34a3604c-a8d7-4927-af88-a99eef3393fd/volumes" Jan 09 13:49:54 crc kubenswrapper[4919]: I0109 13:49:52.820024 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbcdk\" (UniqueName: \"kubernetes.io/projected/48014bf6-50e6-407c-8fca-bd2949ad791c-kube-api-access-cbcdk\") pod \"48014bf6-50e6-407c-8fca-bd2949ad791c\" (UID: \"48014bf6-50e6-407c-8fca-bd2949ad791c\") " Jan 09 13:49:54 crc kubenswrapper[4919]: I0109 13:49:52.820077 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a3e0515-9960-40e4-a938-6166810db59e-operator-scripts\") pod \"8a3e0515-9960-40e4-a938-6166810db59e\" (UID: \"8a3e0515-9960-40e4-a938-6166810db59e\") " Jan 09 13:49:54 crc kubenswrapper[4919]: I0109 13:49:52.820198 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48014bf6-50e6-407c-8fca-bd2949ad791c-operator-scripts\") pod \"48014bf6-50e6-407c-8fca-bd2949ad791c\" (UID: \"48014bf6-50e6-407c-8fca-bd2949ad791c\") " Jan 09 13:49:54 crc kubenswrapper[4919]: I0109 13:49:52.820252 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfrvh\" (UniqueName: \"kubernetes.io/projected/8a3e0515-9960-40e4-a938-6166810db59e-kube-api-access-nfrvh\") pod \"8a3e0515-9960-40e4-a938-6166810db59e\" (UID: \"8a3e0515-9960-40e4-a938-6166810db59e\") " Jan 09 13:49:54 crc kubenswrapper[4919]: I0109 13:49:52.820701 4919 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/167038ac-1986-4fec-ae8e-98807f212a49-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 13:49:54 crc kubenswrapper[4919]: I0109 13:49:52.820718 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6gvp\" (UniqueName: \"kubernetes.io/projected/167038ac-1986-4fec-ae8e-98807f212a49-kube-api-access-s6gvp\") on node \"crc\" DevicePath \"\"" Jan 09 13:49:54 crc kubenswrapper[4919]: I0109 13:49:52.820892 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a3e0515-9960-40e4-a938-6166810db59e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8a3e0515-9960-40e4-a938-6166810db59e" (UID: "8a3e0515-9960-40e4-a938-6166810db59e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:49:54 crc kubenswrapper[4919]: I0109 13:49:52.821158 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48014bf6-50e6-407c-8fca-bd2949ad791c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "48014bf6-50e6-407c-8fca-bd2949ad791c" (UID: "48014bf6-50e6-407c-8fca-bd2949ad791c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:49:54 crc kubenswrapper[4919]: I0109 13:49:52.824379 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a3e0515-9960-40e4-a938-6166810db59e-kube-api-access-nfrvh" (OuterVolumeSpecName: "kube-api-access-nfrvh") pod "8a3e0515-9960-40e4-a938-6166810db59e" (UID: "8a3e0515-9960-40e4-a938-6166810db59e"). InnerVolumeSpecName "kube-api-access-nfrvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:49:54 crc kubenswrapper[4919]: I0109 13:49:52.825573 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48014bf6-50e6-407c-8fca-bd2949ad791c-kube-api-access-cbcdk" (OuterVolumeSpecName: "kube-api-access-cbcdk") pod "48014bf6-50e6-407c-8fca-bd2949ad791c" (UID: "48014bf6-50e6-407c-8fca-bd2949ad791c"). InnerVolumeSpecName "kube-api-access-cbcdk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:49:54 crc kubenswrapper[4919]: I0109 13:49:52.921892 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hg8s\" (UniqueName: \"kubernetes.io/projected/e67b8129-4a9a-4459-b85d-45ae30ad425e-kube-api-access-6hg8s\") pod \"e67b8129-4a9a-4459-b85d-45ae30ad425e\" (UID: \"e67b8129-4a9a-4459-b85d-45ae30ad425e\") " Jan 09 13:49:54 crc kubenswrapper[4919]: I0109 13:49:52.921962 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e67b8129-4a9a-4459-b85d-45ae30ad425e-operator-scripts\") pod \"e67b8129-4a9a-4459-b85d-45ae30ad425e\" (UID: \"e67b8129-4a9a-4459-b85d-45ae30ad425e\") " Jan 09 13:49:54 crc kubenswrapper[4919]: I0109 13:49:52.922073 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d480a45-ff2a-4672-bbf4-05a8a397b34a-operator-scripts\") pod \"9d480a45-ff2a-4672-bbf4-05a8a397b34a\" (UID: \"9d480a45-ff2a-4672-bbf4-05a8a397b34a\") " Jan 09 13:49:54 crc kubenswrapper[4919]: I0109 13:49:52.922127 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/214a0432-7622-45a7-b693-f5aea45623e7-operator-scripts\") pod \"214a0432-7622-45a7-b693-f5aea45623e7\" (UID: \"214a0432-7622-45a7-b693-f5aea45623e7\") " Jan 09 13:49:54 crc kubenswrapper[4919]: I0109 13:49:52.922163 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d59hw\" (UniqueName: \"kubernetes.io/projected/9d480a45-ff2a-4672-bbf4-05a8a397b34a-kube-api-access-d59hw\") pod \"9d480a45-ff2a-4672-bbf4-05a8a397b34a\" (UID: \"9d480a45-ff2a-4672-bbf4-05a8a397b34a\") " Jan 09 13:49:54 crc kubenswrapper[4919]: I0109 13:49:52.922330 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9d4mf\" (UniqueName: \"kubernetes.io/projected/214a0432-7622-45a7-b693-f5aea45623e7-kube-api-access-9d4mf\") pod \"214a0432-7622-45a7-b693-f5aea45623e7\" (UID: \"214a0432-7622-45a7-b693-f5aea45623e7\") " Jan 09 13:49:54 crc kubenswrapper[4919]: I0109 13:49:52.922795 4919 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48014bf6-50e6-407c-8fca-bd2949ad791c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 13:49:54 crc kubenswrapper[4919]: I0109 13:49:52.922808 4919 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-nfrvh\" (UniqueName: \"kubernetes.io/projected/8a3e0515-9960-40e4-a938-6166810db59e-kube-api-access-nfrvh\") on node \"crc\" DevicePath \"\"" Jan 09 13:49:54 crc kubenswrapper[4919]: I0109 13:49:52.922821 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbcdk\" (UniqueName: \"kubernetes.io/projected/48014bf6-50e6-407c-8fca-bd2949ad791c-kube-api-access-cbcdk\") on node \"crc\" DevicePath \"\"" Jan 09 13:49:54 crc kubenswrapper[4919]: I0109 13:49:52.922830 4919 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a3e0515-9960-40e4-a938-6166810db59e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 13:49:54 crc kubenswrapper[4919]: I0109 13:49:52.922913 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e67b8129-4a9a-4459-b85d-45ae30ad425e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e67b8129-4a9a-4459-b85d-45ae30ad425e" (UID: "e67b8129-4a9a-4459-b85d-45ae30ad425e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:49:54 crc kubenswrapper[4919]: I0109 13:49:52.924627 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/214a0432-7622-45a7-b693-f5aea45623e7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "214a0432-7622-45a7-b693-f5aea45623e7" (UID: "214a0432-7622-45a7-b693-f5aea45623e7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:49:54 crc kubenswrapper[4919]: I0109 13:49:52.925381 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d480a45-ff2a-4672-bbf4-05a8a397b34a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9d480a45-ff2a-4672-bbf4-05a8a397b34a" (UID: "9d480a45-ff2a-4672-bbf4-05a8a397b34a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:49:54 crc kubenswrapper[4919]: I0109 13:49:52.926571 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e67b8129-4a9a-4459-b85d-45ae30ad425e-kube-api-access-6hg8s" (OuterVolumeSpecName: "kube-api-access-6hg8s") pod "e67b8129-4a9a-4459-b85d-45ae30ad425e" (UID: "e67b8129-4a9a-4459-b85d-45ae30ad425e"). InnerVolumeSpecName "kube-api-access-6hg8s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:49:54 crc kubenswrapper[4919]: I0109 13:49:52.927442 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/214a0432-7622-45a7-b693-f5aea45623e7-kube-api-access-9d4mf" (OuterVolumeSpecName: "kube-api-access-9d4mf") pod "214a0432-7622-45a7-b693-f5aea45623e7" (UID: "214a0432-7622-45a7-b693-f5aea45623e7"). InnerVolumeSpecName "kube-api-access-9d4mf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:49:54 crc kubenswrapper[4919]: I0109 13:49:52.928203 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d480a45-ff2a-4672-bbf4-05a8a397b34a-kube-api-access-d59hw" (OuterVolumeSpecName: "kube-api-access-d59hw") pod "9d480a45-ff2a-4672-bbf4-05a8a397b34a" (UID: "9d480a45-ff2a-4672-bbf4-05a8a397b34a"). InnerVolumeSpecName "kube-api-access-d59hw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:49:54 crc kubenswrapper[4919]: I0109 13:49:53.024673 4919 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/214a0432-7622-45a7-b693-f5aea45623e7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 13:49:54 crc kubenswrapper[4919]: I0109 13:49:53.024706 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d59hw\" (UniqueName: \"kubernetes.io/projected/9d480a45-ff2a-4672-bbf4-05a8a397b34a-kube-api-access-d59hw\") on node \"crc\" DevicePath \"\"" Jan 09 13:49:54 crc kubenswrapper[4919]: I0109 13:49:53.024723 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9d4mf\" (UniqueName: \"kubernetes.io/projected/214a0432-7622-45a7-b693-f5aea45623e7-kube-api-access-9d4mf\") on node \"crc\" DevicePath \"\"" Jan 09 13:49:54 crc kubenswrapper[4919]: I0109 13:49:53.024735 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hg8s\" (UniqueName: \"kubernetes.io/projected/e67b8129-4a9a-4459-b85d-45ae30ad425e-kube-api-access-6hg8s\") on node \"crc\" DevicePath \"\"" Jan 09 13:49:54 crc kubenswrapper[4919]: I0109 13:49:53.024748 4919 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e67b8129-4a9a-4459-b85d-45ae30ad425e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 13:49:54 crc kubenswrapper[4919]: I0109 13:49:53.024761 4919 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d480a45-ff2a-4672-bbf4-05a8a397b34a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 13:49:54 crc kubenswrapper[4919]: I0109 13:49:53.489202 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-kfw6n" event={"ID":"e67b8129-4a9a-4459-b85d-45ae30ad425e","Type":"ContainerDied","Data":"82629ff2c25daa0dc00690f13adf4cff26e8cd1ae06da39261bf3cca08a3b7ee"} Jan 09 13:49:54 crc kubenswrapper[4919]: I0109 13:49:53.489450 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82629ff2c25daa0dc00690f13adf4cff26e8cd1ae06da39261bf3cca08a3b7ee" Jan 09 13:49:54 crc kubenswrapper[4919]: I0109 13:49:53.489268 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-kfw6n" Jan 09 13:49:54 crc kubenswrapper[4919]: I0109 13:49:53.491185 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-0d0f-account-create-update-4tkms" Jan 09 13:49:54 crc kubenswrapper[4919]: I0109 13:49:53.491185 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0d0f-account-create-update-4tkms" event={"ID":"8a3e0515-9960-40e4-a938-6166810db59e","Type":"ContainerDied","Data":"58e93a8fbd0576eb054e2f13c1db8567dd45b3201b1afd83eb616eaa987bdc19"} Jan 09 13:49:54 crc kubenswrapper[4919]: I0109 13:49:53.491261 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58e93a8fbd0576eb054e2f13c1db8567dd45b3201b1afd83eb616eaa987bdc19" Jan 09 13:49:54 crc kubenswrapper[4919]: I0109 13:49:53.493143 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-22wxq" event={"ID":"214a0432-7622-45a7-b693-f5aea45623e7","Type":"ContainerDied","Data":"935b20b6af477310e56200b08ce3d5e7a86cb3977ff659ad589b2609072c3be5"} Jan 09 13:49:54 crc kubenswrapper[4919]: I0109 13:49:53.493176 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="935b20b6af477310e56200b08ce3d5e7a86cb3977ff659ad589b2609072c3be5" Jan 09 13:49:54 crc kubenswrapper[4919]: I0109 13:49:53.493175 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-22wxq" Jan 09 13:49:54 crc kubenswrapper[4919]: I0109 13:49:53.496475 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3d0f-account-create-update-bzj6x" Jan 09 13:49:54 crc kubenswrapper[4919]: I0109 13:49:53.496544 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3d0f-account-create-update-bzj6x" event={"ID":"9d480a45-ff2a-4672-bbf4-05a8a397b34a","Type":"ContainerDied","Data":"cb0e0b3893d63d7c64f1bdc79d7c3ecc41d9c7062d05119558a99a914a0479b3"} Jan 09 13:49:54 crc kubenswrapper[4919]: I0109 13:49:53.496576 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb0e0b3893d63d7c64f1bdc79d7c3ecc41d9c7062d05119558a99a914a0479b3" Jan 09 13:49:54 crc kubenswrapper[4919]: I0109 13:49:53.498376 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-sl6nb" Jan 09 13:49:54 crc kubenswrapper[4919]: I0109 13:49:53.499378 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-c29b-account-create-update-whsc2" Jan 09 13:49:54 crc kubenswrapper[4919]: I0109 13:49:53.500548 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c29b-account-create-update-whsc2" event={"ID":"48014bf6-50e6-407c-8fca-bd2949ad791c","Type":"ContainerDied","Data":"b1c430556fec699713df29b86634ae0e908a12750c5ba1ab804715798d6b4911"} Jan 09 13:49:54 crc kubenswrapper[4919]: I0109 13:49:53.500572 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1c430556fec699713df29b86634ae0e908a12750c5ba1ab804715798d6b4911" Jan 09 13:49:55 crc kubenswrapper[4919]: I0109 13:49:55.237397 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74dfc89d77-cdrjp"] Jan 09 13:49:58 crc kubenswrapper[4919]: I0109 13:49:58.537713 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dfc89d77-cdrjp" event={"ID":"de460845-14e7-4fa3-bc01-4bc4a40b18df","Type":"ContainerStarted","Data":"89999dbda78aef1e5aa9bd36d0bb202027bd640e4c4a2d6349646e9c3e66aedc"} Jan 09 13:49:59 crc kubenswrapper[4919]: I0109 13:49:59.547316 4919 generic.go:334] "Generic (PLEG): container finished" podID="de460845-14e7-4fa3-bc01-4bc4a40b18df" containerID="8aefffbf1a3218e5dd1cbaac0070dccc19f36bda8e1f8d5ae8c2d6e1b5d95c13" exitCode=0 Jan 09 13:49:59 crc kubenswrapper[4919]: I0109 13:49:59.547381 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dfc89d77-cdrjp" event={"ID":"de460845-14e7-4fa3-bc01-4bc4a40b18df","Type":"ContainerDied","Data":"8aefffbf1a3218e5dd1cbaac0070dccc19f36bda8e1f8d5ae8c2d6e1b5d95c13"} Jan 09 13:49:59 crc kubenswrapper[4919]: I0109 13:49:59.549503 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-r7hqg" event={"ID":"15f289e5-b950-489a-8207-5b340be14c0e","Type":"ContainerStarted","Data":"d9959a3c0d1479bc458e344a5d27a2ed1fd84088d28c3a15200dba329e6d8ca1"} Jan 09 13:49:59 crc kubenswrapper[4919]: I0109 13:49:59.594000 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-r7hqg" podStartSLOduration=2.614778489 podStartE2EDuration="11.593964849s" podCreationTimestamp="2026-01-09 13:49:48 +0000 UTC" firstStartedPulling="2026-01-09 13:49:49.557177159 +0000 UTC m=+1169.105016609" lastFinishedPulling="2026-01-09 13:49:58.536363519 +0000 UTC m=+1178.084202969" observedRunningTime="2026-01-09 13:49:59.587021406 +0000 UTC m=+1179.134860856" watchObservedRunningTime="2026-01-09 13:49:59.593964849 +0000 UTC m=+1179.141804299" Jan 09 13:50:00 crc kubenswrapper[4919]: I0109 13:50:00.559344 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dfc89d77-cdrjp" event={"ID":"de460845-14e7-4fa3-bc01-4bc4a40b18df","Type":"ContainerStarted","Data":"c54e9cdf147699bcc38281c21b4df4d9d5dc8f768de4c43a93e94a1a1e5c15a7"} Jan 09 13:50:01 crc kubenswrapper[4919]: I0109 13:50:01.567486 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74dfc89d77-cdrjp" Jan 09 13:50:02 crc kubenswrapper[4919]: I0109 13:50:02.578183 4919 generic.go:334] "Generic (PLEG): container finished" podID="15f289e5-b950-489a-8207-5b340be14c0e" containerID="d9959a3c0d1479bc458e344a5d27a2ed1fd84088d28c3a15200dba329e6d8ca1" exitCode=0 Jan 09 13:50:02 crc kubenswrapper[4919]: I0109 13:50:02.578292 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-r7hqg" 
event={"ID":"15f289e5-b950-489a-8207-5b340be14c0e","Type":"ContainerDied","Data":"d9959a3c0d1479bc458e344a5d27a2ed1fd84088d28c3a15200dba329e6d8ca1"} Jan 09 13:50:02 crc kubenswrapper[4919]: I0109 13:50:02.606922 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74dfc89d77-cdrjp" podStartSLOduration=10.606901916 podStartE2EDuration="10.606901916s" podCreationTimestamp="2026-01-09 13:49:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:50:00.577596741 +0000 UTC m=+1180.125436221" watchObservedRunningTime="2026-01-09 13:50:02.606901916 +0000 UTC m=+1182.154741386" Jan 09 13:50:03 crc kubenswrapper[4919]: I0109 13:50:03.969859 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-r7hqg" Jan 09 13:50:04 crc kubenswrapper[4919]: I0109 13:50:04.117996 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15f289e5-b950-489a-8207-5b340be14c0e-config-data\") pod \"15f289e5-b950-489a-8207-5b340be14c0e\" (UID: \"15f289e5-b950-489a-8207-5b340be14c0e\") " Jan 09 13:50:04 crc kubenswrapper[4919]: I0109 13:50:04.118473 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qrfc\" (UniqueName: \"kubernetes.io/projected/15f289e5-b950-489a-8207-5b340be14c0e-kube-api-access-6qrfc\") pod \"15f289e5-b950-489a-8207-5b340be14c0e\" (UID: \"15f289e5-b950-489a-8207-5b340be14c0e\") " Jan 09 13:50:04 crc kubenswrapper[4919]: I0109 13:50:04.118513 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15f289e5-b950-489a-8207-5b340be14c0e-combined-ca-bundle\") pod \"15f289e5-b950-489a-8207-5b340be14c0e\" (UID: \"15f289e5-b950-489a-8207-5b340be14c0e\") " Jan 09 13:50:04 crc kubenswrapper[4919]: I0109 13:50:04.127431 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15f289e5-b950-489a-8207-5b340be14c0e-kube-api-access-6qrfc" (OuterVolumeSpecName: "kube-api-access-6qrfc") pod "15f289e5-b950-489a-8207-5b340be14c0e" (UID: "15f289e5-b950-489a-8207-5b340be14c0e"). InnerVolumeSpecName "kube-api-access-6qrfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:50:04 crc kubenswrapper[4919]: I0109 13:50:04.142855 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15f289e5-b950-489a-8207-5b340be14c0e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "15f289e5-b950-489a-8207-5b340be14c0e" (UID: "15f289e5-b950-489a-8207-5b340be14c0e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:50:04 crc kubenswrapper[4919]: I0109 13:50:04.159032 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15f289e5-b950-489a-8207-5b340be14c0e-config-data" (OuterVolumeSpecName: "config-data") pod "15f289e5-b950-489a-8207-5b340be14c0e" (UID: "15f289e5-b950-489a-8207-5b340be14c0e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:50:04 crc kubenswrapper[4919]: I0109 13:50:04.220968 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qrfc\" (UniqueName: \"kubernetes.io/projected/15f289e5-b950-489a-8207-5b340be14c0e-kube-api-access-6qrfc\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:04 crc kubenswrapper[4919]: I0109 13:50:04.221008 4919 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15f289e5-b950-489a-8207-5b340be14c0e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:04 crc kubenswrapper[4919]: I0109 13:50:04.221021 4919 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15f289e5-b950-489a-8207-5b340be14c0e-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:04 crc kubenswrapper[4919]: I0109 13:50:04.598636 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-r7hqg" event={"ID":"15f289e5-b950-489a-8207-5b340be14c0e","Type":"ContainerDied","Data":"c5e38d04c225aedf2f85967a97c12c589376e2eb414b1b0967d03fa5ef481e5e"} Jan 09 13:50:04 crc kubenswrapper[4919]: I0109 13:50:04.598690 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5e38d04c225aedf2f85967a97c12c589376e2eb414b1b0967d03fa5ef481e5e" Jan 09 13:50:04 crc kubenswrapper[4919]: I0109 13:50:04.599184 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-r7hqg" Jan 09 13:50:04 crc kubenswrapper[4919]: I0109 13:50:04.900164 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-8gzn7"] Jan 09 13:50:04 crc kubenswrapper[4919]: E0109 13:50:04.900643 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="214a0432-7622-45a7-b693-f5aea45623e7" containerName="mariadb-database-create" Jan 09 13:50:04 crc kubenswrapper[4919]: I0109 13:50:04.900660 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="214a0432-7622-45a7-b693-f5aea45623e7" containerName="mariadb-database-create" Jan 09 13:50:04 crc kubenswrapper[4919]: E0109 13:50:04.900688 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a3e0515-9960-40e4-a938-6166810db59e" containerName="mariadb-account-create-update" Jan 09 13:50:04 crc kubenswrapper[4919]: I0109 13:50:04.900695 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a3e0515-9960-40e4-a938-6166810db59e" containerName="mariadb-account-create-update" Jan 09 13:50:04 crc kubenswrapper[4919]: E0109 13:50:04.900710 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d480a45-ff2a-4672-bbf4-05a8a397b34a" containerName="mariadb-account-create-update" Jan 09 13:50:04 crc kubenswrapper[4919]: I0109 13:50:04.900718 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d480a45-ff2a-4672-bbf4-05a8a397b34a" containerName="mariadb-account-create-update" Jan 09 13:50:04 crc kubenswrapper[4919]: E0109 13:50:04.900740 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="167038ac-1986-4fec-ae8e-98807f212a49" containerName="mariadb-database-create" Jan 09 13:50:04 crc kubenswrapper[4919]: I0109 13:50:04.900748 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="167038ac-1986-4fec-ae8e-98807f212a49" containerName="mariadb-database-create" Jan 09 13:50:04 crc kubenswrapper[4919]: E0109 13:50:04.900762 4919 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e67b8129-4a9a-4459-b85d-45ae30ad425e" containerName="mariadb-database-create" Jan 09 13:50:04 crc kubenswrapper[4919]: I0109 13:50:04.900769 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="e67b8129-4a9a-4459-b85d-45ae30ad425e" containerName="mariadb-database-create" Jan 09 13:50:04 crc kubenswrapper[4919]: E0109 13:50:04.900786 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15f289e5-b950-489a-8207-5b340be14c0e" containerName="keystone-db-sync" Jan 09 13:50:04 crc kubenswrapper[4919]: I0109 13:50:04.900793 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="15f289e5-b950-489a-8207-5b340be14c0e" containerName="keystone-db-sync" Jan 09 13:50:04 crc kubenswrapper[4919]: E0109 13:50:04.900807 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48014bf6-50e6-407c-8fca-bd2949ad791c" containerName="mariadb-account-create-update" Jan 09 13:50:04 crc kubenswrapper[4919]: I0109 13:50:04.900814 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="48014bf6-50e6-407c-8fca-bd2949ad791c" containerName="mariadb-account-create-update" Jan 09 13:50:04 crc kubenswrapper[4919]: I0109 13:50:04.901010 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="48014bf6-50e6-407c-8fca-bd2949ad791c" containerName="mariadb-account-create-update" Jan 09 13:50:04 crc kubenswrapper[4919]: I0109 13:50:04.901021 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="167038ac-1986-4fec-ae8e-98807f212a49" containerName="mariadb-database-create" Jan 09 13:50:04 crc kubenswrapper[4919]: I0109 13:50:04.901037 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="15f289e5-b950-489a-8207-5b340be14c0e" containerName="keystone-db-sync" Jan 09 13:50:04 crc kubenswrapper[4919]: I0109 13:50:04.901047 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="214a0432-7622-45a7-b693-f5aea45623e7" containerName="mariadb-database-create" Jan 09 13:50:04 crc kubenswrapper[4919]: I0109 13:50:04.901055 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="e67b8129-4a9a-4459-b85d-45ae30ad425e" containerName="mariadb-database-create" Jan 09 13:50:04 crc kubenswrapper[4919]: I0109 13:50:04.901065 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a3e0515-9960-40e4-a938-6166810db59e" containerName="mariadb-account-create-update" Jan 09 13:50:04 crc kubenswrapper[4919]: I0109 13:50:04.901075 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d480a45-ff2a-4672-bbf4-05a8a397b34a" containerName="mariadb-account-create-update" Jan 09 13:50:04 crc kubenswrapper[4919]: I0109 13:50:04.901746 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-8gzn7" Jan 09 13:50:04 crc kubenswrapper[4919]: I0109 13:50:04.906237 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 09 13:50:04 crc kubenswrapper[4919]: I0109 13:50:04.906539 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 09 13:50:04 crc kubenswrapper[4919]: I0109 13:50:04.906712 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 09 13:50:04 crc kubenswrapper[4919]: I0109 13:50:04.906984 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 09 13:50:04 crc kubenswrapper[4919]: I0109 13:50:04.907201 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7w5b5" Jan 09 13:50:04 crc kubenswrapper[4919]: I0109 13:50:04.914011 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dfc89d77-cdrjp"] Jan 09 13:50:04 crc kubenswrapper[4919]: I0109 13:50:04.914249 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74dfc89d77-cdrjp" podUID="de460845-14e7-4fa3-bc01-4bc4a40b18df" containerName="dnsmasq-dns" containerID="cri-o://c54e9cdf147699bcc38281c21b4df4d9d5dc8f768de4c43a93e94a1a1e5c15a7" gracePeriod=10 Jan 09 13:50:04 crc kubenswrapper[4919]: I0109 13:50:04.926468 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-74dfc89d77-cdrjp" Jan 09 13:50:04 crc kubenswrapper[4919]: I0109 13:50:04.928913 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-8gzn7"] Jan 09 13:50:04 crc kubenswrapper[4919]: I0109 13:50:04.956669 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5fdbfbc95f-6fk5j"] Jan 09 13:50:04 crc kubenswrapper[4919]: I0109 13:50:04.958338 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fdbfbc95f-6fk5j" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.004182 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fdbfbc95f-6fk5j"] Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.037160 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6a6c75da-52dc-426a-95e8-e7d0a0ff3910-fernet-keys\") pod \"keystone-bootstrap-8gzn7\" (UID: \"6a6c75da-52dc-426a-95e8-e7d0a0ff3910\") " pod="openstack/keystone-bootstrap-8gzn7" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.037268 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6a6c75da-52dc-426a-95e8-e7d0a0ff3910-credential-keys\") pod \"keystone-bootstrap-8gzn7\" (UID: \"6a6c75da-52dc-426a-95e8-e7d0a0ff3910\") " pod="openstack/keystone-bootstrap-8gzn7" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.037311 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a6c75da-52dc-426a-95e8-e7d0a0ff3910-config-data\") pod \"keystone-bootstrap-8gzn7\" (UID: \"6a6c75da-52dc-426a-95e8-e7d0a0ff3910\") " pod="openstack/keystone-bootstrap-8gzn7" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.037348 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a6c75da-52dc-426a-95e8-e7d0a0ff3910-combined-ca-bundle\") pod \"keystone-bootstrap-8gzn7\" (UID: \"6a6c75da-52dc-426a-95e8-e7d0a0ff3910\") " pod="openstack/keystone-bootstrap-8gzn7" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.037391 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a6c75da-52dc-426a-95e8-e7d0a0ff3910-scripts\") pod \"keystone-bootstrap-8gzn7\" (UID: \"6a6c75da-52dc-426a-95e8-e7d0a0ff3910\") " pod="openstack/keystone-bootstrap-8gzn7" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.037437 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v747z\" (UniqueName: \"kubernetes.io/projected/6a6c75da-52dc-426a-95e8-e7d0a0ff3910-kube-api-access-v747z\") pod \"keystone-bootstrap-8gzn7\" (UID: \"6a6c75da-52dc-426a-95e8-e7d0a0ff3910\") " pod="openstack/keystone-bootstrap-8gzn7" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.086753 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-vz5pd"] Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.088144 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-vz5pd" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.099592 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-v8q9p" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.099947 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.100175 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.112582 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-67b8c5bf6f-wsqfv"] Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.114092 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-67b8c5bf6f-wsqfv" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.123979 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.124639 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-vmcmw" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.124848 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.137136 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.139108 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6a6c75da-52dc-426a-95e8-e7d0a0ff3910-fernet-keys\") pod \"keystone-bootstrap-8gzn7\" (UID: \"6a6c75da-52dc-426a-95e8-e7d0a0ff3910\") " pod="openstack/keystone-bootstrap-8gzn7" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.139155 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6a6c75da-52dc-426a-95e8-e7d0a0ff3910-credential-keys\") pod \"keystone-bootstrap-8gzn7\" (UID: \"6a6c75da-52dc-426a-95e8-e7d0a0ff3910\") " pod="openstack/keystone-bootstrap-8gzn7" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.139204 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c88a69c9-d9d7-4da7-8f1f-e7e446c8655d-dns-svc\") pod \"dnsmasq-dns-5fdbfbc95f-6fk5j\" (UID: \"c88a69c9-d9d7-4da7-8f1f-e7e446c8655d\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-6fk5j" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.139243 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a6c75da-52dc-426a-95e8-e7d0a0ff3910-config-data\") pod \"keystone-bootstrap-8gzn7\" (UID: \"6a6c75da-52dc-426a-95e8-e7d0a0ff3910\") " pod="openstack/keystone-bootstrap-8gzn7" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.139276 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a6c75da-52dc-426a-95e8-e7d0a0ff3910-combined-ca-bundle\") pod \"keystone-bootstrap-8gzn7\" (UID: \"6a6c75da-52dc-426a-95e8-e7d0a0ff3910\") " pod="openstack/keystone-bootstrap-8gzn7" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.139298 4919 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c88a69c9-d9d7-4da7-8f1f-e7e446c8655d-config\") pod \"dnsmasq-dns-5fdbfbc95f-6fk5j\" (UID: \"c88a69c9-d9d7-4da7-8f1f-e7e446c8655d\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-6fk5j" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.139328 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a6c75da-52dc-426a-95e8-e7d0a0ff3910-scripts\") pod \"keystone-bootstrap-8gzn7\" (UID: \"6a6c75da-52dc-426a-95e8-e7d0a0ff3910\") " pod="openstack/keystone-bootstrap-8gzn7" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.139349 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c88a69c9-d9d7-4da7-8f1f-e7e446c8655d-ovsdbserver-nb\") pod \"dnsmasq-dns-5fdbfbc95f-6fk5j\" (UID: \"c88a69c9-d9d7-4da7-8f1f-e7e446c8655d\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-6fk5j" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.139375 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c88a69c9-d9d7-4da7-8f1f-e7e446c8655d-dns-swift-storage-0\") pod \"dnsmasq-dns-5fdbfbc95f-6fk5j\" (UID: \"c88a69c9-d9d7-4da7-8f1f-e7e446c8655d\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-6fk5j" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.139396 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v747z\" (UniqueName: \"kubernetes.io/projected/6a6c75da-52dc-426a-95e8-e7d0a0ff3910-kube-api-access-v747z\") pod \"keystone-bootstrap-8gzn7\" (UID: \"6a6c75da-52dc-426a-95e8-e7d0a0ff3910\") " pod="openstack/keystone-bootstrap-8gzn7" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.139416 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c88a69c9-d9d7-4da7-8f1f-e7e446c8655d-ovsdbserver-sb\") pod \"dnsmasq-dns-5fdbfbc95f-6fk5j\" (UID: \"c88a69c9-d9d7-4da7-8f1f-e7e446c8655d\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-6fk5j" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.139441 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9znp\" (UniqueName: \"kubernetes.io/projected/c88a69c9-d9d7-4da7-8f1f-e7e446c8655d-kube-api-access-v9znp\") pod \"dnsmasq-dns-5fdbfbc95f-6fk5j\" (UID: \"c88a69c9-d9d7-4da7-8f1f-e7e446c8655d\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-6fk5j" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.148164 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6a6c75da-52dc-426a-95e8-e7d0a0ff3910-credential-keys\") pod \"keystone-bootstrap-8gzn7\" (UID: \"6a6c75da-52dc-426a-95e8-e7d0a0ff3910\") " pod="openstack/keystone-bootstrap-8gzn7" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.148511 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-vz5pd"] Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.149381 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a6c75da-52dc-426a-95e8-e7d0a0ff3910-config-data\") pod \"keystone-bootstrap-8gzn7\" (UID: 
\"6a6c75da-52dc-426a-95e8-e7d0a0ff3910\") " pod="openstack/keystone-bootstrap-8gzn7" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.161969 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6a6c75da-52dc-426a-95e8-e7d0a0ff3910-fernet-keys\") pod \"keystone-bootstrap-8gzn7\" (UID: \"6a6c75da-52dc-426a-95e8-e7d0a0ff3910\") " pod="openstack/keystone-bootstrap-8gzn7" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.165555 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a6c75da-52dc-426a-95e8-e7d0a0ff3910-combined-ca-bundle\") pod \"keystone-bootstrap-8gzn7\" (UID: \"6a6c75da-52dc-426a-95e8-e7d0a0ff3910\") " pod="openstack/keystone-bootstrap-8gzn7" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.166529 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a6c75da-52dc-426a-95e8-e7d0a0ff3910-scripts\") pod \"keystone-bootstrap-8gzn7\" (UID: \"6a6c75da-52dc-426a-95e8-e7d0a0ff3910\") " pod="openstack/keystone-bootstrap-8gzn7" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.177178 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v747z\" (UniqueName: \"kubernetes.io/projected/6a6c75da-52dc-426a-95e8-e7d0a0ff3910-kube-api-access-v747z\") pod \"keystone-bootstrap-8gzn7\" (UID: \"6a6c75da-52dc-426a-95e8-e7d0a0ff3910\") " pod="openstack/keystone-bootstrap-8gzn7" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.195020 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-67b8c5bf6f-wsqfv"] Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.235719 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-8gzn7" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.240925 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c88a69c9-d9d7-4da7-8f1f-e7e446c8655d-ovsdbserver-nb\") pod \"dnsmasq-dns-5fdbfbc95f-6fk5j\" (UID: \"c88a69c9-d9d7-4da7-8f1f-e7e446c8655d\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-6fk5j" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.240996 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c88a69c9-d9d7-4da7-8f1f-e7e446c8655d-dns-swift-storage-0\") pod \"dnsmasq-dns-5fdbfbc95f-6fk5j\" (UID: \"c88a69c9-d9d7-4da7-8f1f-e7e446c8655d\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-6fk5j" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.241030 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c88a69c9-d9d7-4da7-8f1f-e7e446c8655d-ovsdbserver-sb\") pod \"dnsmasq-dns-5fdbfbc95f-6fk5j\" (UID: \"c88a69c9-d9d7-4da7-8f1f-e7e446c8655d\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-6fk5j" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.241052 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9rc9\" (UniqueName: \"kubernetes.io/projected/0a9f81fc-067d-404d-b104-bba333d3911a-kube-api-access-x9rc9\") pod \"cinder-db-sync-vz5pd\" (UID: \"0a9f81fc-067d-404d-b104-bba333d3911a\") " pod="openstack/cinder-db-sync-vz5pd" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.241070 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zghcz\" (UniqueName: \"kubernetes.io/projected/da695d3c-0710-4113-ad5c-6168aa3bbe2b-kube-api-access-zghcz\") pod \"horizon-67b8c5bf6f-wsqfv\" (UID: \"da695d3c-0710-4113-ad5c-6168aa3bbe2b\") " pod="openstack/horizon-67b8c5bf6f-wsqfv" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.241092 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9znp\" (UniqueName: \"kubernetes.io/projected/c88a69c9-d9d7-4da7-8f1f-e7e446c8655d-kube-api-access-v9znp\") pod \"dnsmasq-dns-5fdbfbc95f-6fk5j\" (UID: \"c88a69c9-d9d7-4da7-8f1f-e7e446c8655d\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-6fk5j" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.241117 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0a9f81fc-067d-404d-b104-bba333d3911a-db-sync-config-data\") pod \"cinder-db-sync-vz5pd\" (UID: \"0a9f81fc-067d-404d-b104-bba333d3911a\") " pod="openstack/cinder-db-sync-vz5pd" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.241150 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da695d3c-0710-4113-ad5c-6168aa3bbe2b-logs\") pod \"horizon-67b8c5bf6f-wsqfv\" (UID: \"da695d3c-0710-4113-ad5c-6168aa3bbe2b\") " pod="openstack/horizon-67b8c5bf6f-wsqfv" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.241172 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a9f81fc-067d-404d-b104-bba333d3911a-config-data\") pod \"cinder-db-sync-vz5pd\" (UID: 
\"0a9f81fc-067d-404d-b104-bba333d3911a\") " pod="openstack/cinder-db-sync-vz5pd" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.241188 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/da695d3c-0710-4113-ad5c-6168aa3bbe2b-horizon-secret-key\") pod \"horizon-67b8c5bf6f-wsqfv\" (UID: \"da695d3c-0710-4113-ad5c-6168aa3bbe2b\") " pod="openstack/horizon-67b8c5bf6f-wsqfv" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.241235 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0a9f81fc-067d-404d-b104-bba333d3911a-etc-machine-id\") pod \"cinder-db-sync-vz5pd\" (UID: \"0a9f81fc-067d-404d-b104-bba333d3911a\") " pod="openstack/cinder-db-sync-vz5pd" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.241254 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a9f81fc-067d-404d-b104-bba333d3911a-combined-ca-bundle\") pod \"cinder-db-sync-vz5pd\" (UID: \"0a9f81fc-067d-404d-b104-bba333d3911a\") " pod="openstack/cinder-db-sync-vz5pd" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.241276 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/da695d3c-0710-4113-ad5c-6168aa3bbe2b-config-data\") pod \"horizon-67b8c5bf6f-wsqfv\" (UID: \"da695d3c-0710-4113-ad5c-6168aa3bbe2b\") " pod="openstack/horizon-67b8c5bf6f-wsqfv" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.241292 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c88a69c9-d9d7-4da7-8f1f-e7e446c8655d-dns-svc\") pod \"dnsmasq-dns-5fdbfbc95f-6fk5j\" (UID: \"c88a69c9-d9d7-4da7-8f1f-e7e446c8655d\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-6fk5j" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.241318 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/da695d3c-0710-4113-ad5c-6168aa3bbe2b-scripts\") pod \"horizon-67b8c5bf6f-wsqfv\" (UID: \"da695d3c-0710-4113-ad5c-6168aa3bbe2b\") " pod="openstack/horizon-67b8c5bf6f-wsqfv" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.241342 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a9f81fc-067d-404d-b104-bba333d3911a-scripts\") pod \"cinder-db-sync-vz5pd\" (UID: \"0a9f81fc-067d-404d-b104-bba333d3911a\") " pod="openstack/cinder-db-sync-vz5pd" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.241368 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c88a69c9-d9d7-4da7-8f1f-e7e446c8655d-config\") pod \"dnsmasq-dns-5fdbfbc95f-6fk5j\" (UID: \"c88a69c9-d9d7-4da7-8f1f-e7e446c8655d\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-6fk5j" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.242176 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c88a69c9-d9d7-4da7-8f1f-e7e446c8655d-config\") pod \"dnsmasq-dns-5fdbfbc95f-6fk5j\" (UID: \"c88a69c9-d9d7-4da7-8f1f-e7e446c8655d\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-6fk5j" Jan 09 
13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.242724 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c88a69c9-d9d7-4da7-8f1f-e7e446c8655d-ovsdbserver-nb\") pod \"dnsmasq-dns-5fdbfbc95f-6fk5j\" (UID: \"c88a69c9-d9d7-4da7-8f1f-e7e446c8655d\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-6fk5j" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.246559 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c88a69c9-d9d7-4da7-8f1f-e7e446c8655d-dns-swift-storage-0\") pod \"dnsmasq-dns-5fdbfbc95f-6fk5j\" (UID: \"c88a69c9-d9d7-4da7-8f1f-e7e446c8655d\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-6fk5j" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.247163 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c88a69c9-d9d7-4da7-8f1f-e7e446c8655d-ovsdbserver-sb\") pod \"dnsmasq-dns-5fdbfbc95f-6fk5j\" (UID: \"c88a69c9-d9d7-4da7-8f1f-e7e446c8655d\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-6fk5j" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.248163 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c88a69c9-d9d7-4da7-8f1f-e7e446c8655d-dns-svc\") pod \"dnsmasq-dns-5fdbfbc95f-6fk5j\" (UID: \"c88a69c9-d9d7-4da7-8f1f-e7e446c8655d\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-6fk5j" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.261271 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-dr79l"] Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.262421 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-dr79l" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.271363 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.271641 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-4hvlp" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.271866 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.284288 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-dr79l"] Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.308022 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9znp\" (UniqueName: \"kubernetes.io/projected/c88a69c9-d9d7-4da7-8f1f-e7e446c8655d-kube-api-access-v9znp\") pod \"dnsmasq-dns-5fdbfbc95f-6fk5j\" (UID: \"c88a69c9-d9d7-4da7-8f1f-e7e446c8655d\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-6fk5j" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.348190 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0a9f81fc-067d-404d-b104-bba333d3911a-etc-machine-id\") pod \"cinder-db-sync-vz5pd\" (UID: \"0a9f81fc-067d-404d-b104-bba333d3911a\") " pod="openstack/cinder-db-sync-vz5pd" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.348419 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a26e4dbc-6f44-4723-a81b-7bd05ca1283b-combined-ca-bundle\") pod \"neutron-db-sync-dr79l\" (UID: \"a26e4dbc-6f44-4723-a81b-7bd05ca1283b\") " pod="openstack/neutron-db-sync-dr79l" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.348447 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a9f81fc-067d-404d-b104-bba333d3911a-combined-ca-bundle\") pod \"cinder-db-sync-vz5pd\" (UID: \"0a9f81fc-067d-404d-b104-bba333d3911a\") " pod="openstack/cinder-db-sync-vz5pd" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.348562 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/da695d3c-0710-4113-ad5c-6168aa3bbe2b-config-data\") pod \"horizon-67b8c5bf6f-wsqfv\" (UID: \"da695d3c-0710-4113-ad5c-6168aa3bbe2b\") " pod="openstack/horizon-67b8c5bf6f-wsqfv" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.348635 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/da695d3c-0710-4113-ad5c-6168aa3bbe2b-scripts\") pod \"horizon-67b8c5bf6f-wsqfv\" (UID: \"da695d3c-0710-4113-ad5c-6168aa3bbe2b\") " pod="openstack/horizon-67b8c5bf6f-wsqfv" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.348817 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a9f81fc-067d-404d-b104-bba333d3911a-scripts\") pod \"cinder-db-sync-vz5pd\" (UID: \"0a9f81fc-067d-404d-b104-bba333d3911a\") " pod="openstack/cinder-db-sync-vz5pd" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.348911 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/a26e4dbc-6f44-4723-a81b-7bd05ca1283b-config\") pod \"neutron-db-sync-dr79l\" (UID: \"a26e4dbc-6f44-4723-a81b-7bd05ca1283b\") " pod="openstack/neutron-db-sync-dr79l" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.349183 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9rc9\" (UniqueName: \"kubernetes.io/projected/0a9f81fc-067d-404d-b104-bba333d3911a-kube-api-access-x9rc9\") pod \"cinder-db-sync-vz5pd\" (UID: \"0a9f81fc-067d-404d-b104-bba333d3911a\") " pod="openstack/cinder-db-sync-vz5pd" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.349230 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zghcz\" (UniqueName: \"kubernetes.io/projected/da695d3c-0710-4113-ad5c-6168aa3bbe2b-kube-api-access-zghcz\") pod \"horizon-67b8c5bf6f-wsqfv\" (UID: \"da695d3c-0710-4113-ad5c-6168aa3bbe2b\") " pod="openstack/horizon-67b8c5bf6f-wsqfv" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.349312 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0a9f81fc-067d-404d-b104-bba333d3911a-db-sync-config-data\") pod \"cinder-db-sync-vz5pd\" (UID: \"0a9f81fc-067d-404d-b104-bba333d3911a\") " pod="openstack/cinder-db-sync-vz5pd" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.349510 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da695d3c-0710-4113-ad5c-6168aa3bbe2b-logs\") pod \"horizon-67b8c5bf6f-wsqfv\" (UID: \"da695d3c-0710-4113-ad5c-6168aa3bbe2b\") " pod="openstack/horizon-67b8c5bf6f-wsqfv" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.349559 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47vzs\" (UniqueName: \"kubernetes.io/projected/a26e4dbc-6f44-4723-a81b-7bd05ca1283b-kube-api-access-47vzs\") pod \"neutron-db-sync-dr79l\" (UID: \"a26e4dbc-6f44-4723-a81b-7bd05ca1283b\") " pod="openstack/neutron-db-sync-dr79l" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.349586 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a9f81fc-067d-404d-b104-bba333d3911a-config-data\") pod \"cinder-db-sync-vz5pd\" (UID: \"0a9f81fc-067d-404d-b104-bba333d3911a\") " pod="openstack/cinder-db-sync-vz5pd" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.349611 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/da695d3c-0710-4113-ad5c-6168aa3bbe2b-horizon-secret-key\") pod \"horizon-67b8c5bf6f-wsqfv\" (UID: \"da695d3c-0710-4113-ad5c-6168aa3bbe2b\") " pod="openstack/horizon-67b8c5bf6f-wsqfv" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.350504 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fdbfbc95f-6fk5j"] Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.352423 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0a9f81fc-067d-404d-b104-bba333d3911a-etc-machine-id\") pod \"cinder-db-sync-vz5pd\" (UID: \"0a9f81fc-067d-404d-b104-bba333d3911a\") " pod="openstack/cinder-db-sync-vz5pd" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.353601 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/da695d3c-0710-4113-ad5c-6168aa3bbe2b-logs\") pod \"horizon-67b8c5bf6f-wsqfv\" (UID: \"da695d3c-0710-4113-ad5c-6168aa3bbe2b\") " pod="openstack/horizon-67b8c5bf6f-wsqfv" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.356034 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/da695d3c-0710-4113-ad5c-6168aa3bbe2b-scripts\") pod \"horizon-67b8c5bf6f-wsqfv\" (UID: \"da695d3c-0710-4113-ad5c-6168aa3bbe2b\") " pod="openstack/horizon-67b8c5bf6f-wsqfv" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.358778 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/da695d3c-0710-4113-ad5c-6168aa3bbe2b-config-data\") pod \"horizon-67b8c5bf6f-wsqfv\" (UID: \"da695d3c-0710-4113-ad5c-6168aa3bbe2b\") " pod="openstack/horizon-67b8c5bf6f-wsqfv" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.359427 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fdbfbc95f-6fk5j" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.360724 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a9f81fc-067d-404d-b104-bba333d3911a-combined-ca-bundle\") pod \"cinder-db-sync-vz5pd\" (UID: \"0a9f81fc-067d-404d-b104-bba333d3911a\") " pod="openstack/cinder-db-sync-vz5pd" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.378360 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0a9f81fc-067d-404d-b104-bba333d3911a-db-sync-config-data\") pod \"cinder-db-sync-vz5pd\" (UID: \"0a9f81fc-067d-404d-b104-bba333d3911a\") " pod="openstack/cinder-db-sync-vz5pd" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.388867 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a9f81fc-067d-404d-b104-bba333d3911a-scripts\") pod \"cinder-db-sync-vz5pd\" (UID: \"0a9f81fc-067d-404d-b104-bba333d3911a\") " pod="openstack/cinder-db-sync-vz5pd" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.389963 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a9f81fc-067d-404d-b104-bba333d3911a-config-data\") pod \"cinder-db-sync-vz5pd\" (UID: \"0a9f81fc-067d-404d-b104-bba333d3911a\") " pod="openstack/cinder-db-sync-vz5pd" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.410795 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/da695d3c-0710-4113-ad5c-6168aa3bbe2b-horizon-secret-key\") pod \"horizon-67b8c5bf6f-wsqfv\" (UID: \"da695d3c-0710-4113-ad5c-6168aa3bbe2b\") " pod="openstack/horizon-67b8c5bf6f-wsqfv" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.415525 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-425dh"] Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.419806 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zghcz\" (UniqueName: \"kubernetes.io/projected/da695d3c-0710-4113-ad5c-6168aa3bbe2b-kube-api-access-zghcz\") pod \"horizon-67b8c5bf6f-wsqfv\" (UID: \"da695d3c-0710-4113-ad5c-6168aa3bbe2b\") " pod="openstack/horizon-67b8c5bf6f-wsqfv" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.421507 4919 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-425dh" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.427183 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-hvfgk" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.434473 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9rc9\" (UniqueName: \"kubernetes.io/projected/0a9f81fc-067d-404d-b104-bba333d3911a-kube-api-access-x9rc9\") pod \"cinder-db-sync-vz5pd\" (UID: \"0a9f81fc-067d-404d-b104-bba333d3911a\") " pod="openstack/cinder-db-sync-vz5pd" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.434525 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.434810 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.467980 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-425dh"] Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.468531 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47vzs\" (UniqueName: \"kubernetes.io/projected/a26e4dbc-6f44-4723-a81b-7bd05ca1283b-kube-api-access-47vzs\") pod \"neutron-db-sync-dr79l\" (UID: \"a26e4dbc-6f44-4723-a81b-7bd05ca1283b\") " pod="openstack/neutron-db-sync-dr79l" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.468895 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a26e4dbc-6f44-4723-a81b-7bd05ca1283b-combined-ca-bundle\") pod \"neutron-db-sync-dr79l\" (UID: \"a26e4dbc-6f44-4723-a81b-7bd05ca1283b\") " pod="openstack/neutron-db-sync-dr79l" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.469329 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a26e4dbc-6f44-4723-a81b-7bd05ca1283b-config\") pod \"neutron-db-sync-dr79l\" (UID: \"a26e4dbc-6f44-4723-a81b-7bd05ca1283b\") " pod="openstack/neutron-db-sync-dr79l" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.472580 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a26e4dbc-6f44-4723-a81b-7bd05ca1283b-config\") pod \"neutron-db-sync-dr79l\" (UID: \"a26e4dbc-6f44-4723-a81b-7bd05ca1283b\") " pod="openstack/neutron-db-sync-dr79l" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.473225 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a26e4dbc-6f44-4723-a81b-7bd05ca1283b-combined-ca-bundle\") pod \"neutron-db-sync-dr79l\" (UID: \"a26e4dbc-6f44-4723-a81b-7bd05ca1283b\") " pod="openstack/neutron-db-sync-dr79l" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.495493 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-cmf6h"] Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.506698 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-9sb8m"] Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.508064 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-9sb8m" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.509591 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47vzs\" (UniqueName: \"kubernetes.io/projected/a26e4dbc-6f44-4723-a81b-7bd05ca1283b-kube-api-access-47vzs\") pod \"neutron-db-sync-dr79l\" (UID: \"a26e4dbc-6f44-4723-a81b-7bd05ca1283b\") " pod="openstack/neutron-db-sync-dr79l" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.509699 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6f8cb849-cmf6h" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.510912 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.511021 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-6bpwk" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.517394 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-cmf6h"] Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.527411 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-75f5cb997-6q6lj"] Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.530013 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-75f5cb997-6q6lj" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.539609 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-9sb8m"] Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.548778 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-75f5cb997-6q6lj"] Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.553045 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-67b8c5bf6f-wsqfv" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.562270 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.564413 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.575404 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.575860 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.581706 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93e28fcf-1c97-40cf-bcdc-d63d2af19499-config-data\") pod \"placement-db-sync-425dh\" (UID: \"93e28fcf-1c97-40cf-bcdc-d63d2af19499\") " pod="openstack/placement-db-sync-425dh" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.581755 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njwpz\" (UniqueName: \"kubernetes.io/projected/93e28fcf-1c97-40cf-bcdc-d63d2af19499-kube-api-access-njwpz\") pod \"placement-db-sync-425dh\" (UID: \"93e28fcf-1c97-40cf-bcdc-d63d2af19499\") " pod="openstack/placement-db-sync-425dh" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.581951 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93e28fcf-1c97-40cf-bcdc-d63d2af19499-logs\") pod \"placement-db-sync-425dh\" (UID: \"93e28fcf-1c97-40cf-bcdc-d63d2af19499\") " pod="openstack/placement-db-sync-425dh" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.581997 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93e28fcf-1c97-40cf-bcdc-d63d2af19499-combined-ca-bundle\") pod \"placement-db-sync-425dh\" (UID: \"93e28fcf-1c97-40cf-bcdc-d63d2af19499\") " pod="openstack/placement-db-sync-425dh" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.582011 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93e28fcf-1c97-40cf-bcdc-d63d2af19499-scripts\") pod \"placement-db-sync-425dh\" (UID: \"93e28fcf-1c97-40cf-bcdc-d63d2af19499\") " pod="openstack/placement-db-sync-425dh" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.617724 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.644164 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.646046 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.650649 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.650649 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.650825 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.653523 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-qpdkt" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.656037 4919 generic.go:334] "Generic (PLEG): container finished" podID="de460845-14e7-4fa3-bc01-4bc4a40b18df" containerID="c54e9cdf147699bcc38281c21b4df4d9d5dc8f768de4c43a93e94a1a1e5c15a7" exitCode=0 Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.656093 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dfc89d77-cdrjp" event={"ID":"de460845-14e7-4fa3-bc01-4bc4a40b18df","Type":"ContainerDied","Data":"c54e9cdf147699bcc38281c21b4df4d9d5dc8f768de4c43a93e94a1a1e5c15a7"} Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.667150 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.687659 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93e28fcf-1c97-40cf-bcdc-d63d2af19499-logs\") pod \"placement-db-sync-425dh\" (UID: \"93e28fcf-1c97-40cf-bcdc-d63d2af19499\") " pod="openstack/placement-db-sync-425dh" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.687714 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/94e437ba-f67c-41cb-887b-a1d977b041f8-scripts\") pod \"horizon-75f5cb997-6q6lj\" (UID: \"94e437ba-f67c-41cb-887b-a1d977b041f8\") " pod="openstack/horizon-75f5cb997-6q6lj" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.687745 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82036277-9b0e-4efd-8da5-9463b9998096-config\") pod \"dnsmasq-dns-6f6f8cb849-cmf6h\" (UID: \"82036277-9b0e-4efd-8da5-9463b9998096\") " pod="openstack/dnsmasq-dns-6f6f8cb849-cmf6h" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.687769 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2-log-httpd\") pod \"ceilometer-0\" (UID: \"ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2\") " pod="openstack/ceilometer-0" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.687788 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2-run-httpd\") pod \"ceilometer-0\" (UID: \"ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2\") " pod="openstack/ceilometer-0" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.687853 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2-scripts\") pod \"ceilometer-0\" (UID: \"ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2\") " pod="openstack/ceilometer-0" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.687909 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93e28fcf-1c97-40cf-bcdc-d63d2af19499-combined-ca-bundle\") pod \"placement-db-sync-425dh\" (UID: \"93e28fcf-1c97-40cf-bcdc-d63d2af19499\") " pod="openstack/placement-db-sync-425dh" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.687929 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93e28fcf-1c97-40cf-bcdc-d63d2af19499-scripts\") pod \"placement-db-sync-425dh\" (UID: \"93e28fcf-1c97-40cf-bcdc-d63d2af19499\") " pod="openstack/placement-db-sync-425dh" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.687962 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bec76c49-6c38-4168-ac7b-087460106d25-db-sync-config-data\") pod \"barbican-db-sync-9sb8m\" (UID: \"bec76c49-6c38-4168-ac7b-087460106d25\") " pod="openstack/barbican-db-sync-9sb8m" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.687990 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktdqj\" (UniqueName: \"kubernetes.io/projected/94e437ba-f67c-41cb-887b-a1d977b041f8-kube-api-access-ktdqj\") pod \"horizon-75f5cb997-6q6lj\" (UID: \"94e437ba-f67c-41cb-887b-a1d977b041f8\") " pod="openstack/horizon-75f5cb997-6q6lj" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.688022 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bec76c49-6c38-4168-ac7b-087460106d25-combined-ca-bundle\") pod \"barbican-db-sync-9sb8m\" (UID: \"bec76c49-6c38-4168-ac7b-087460106d25\") " pod="openstack/barbican-db-sync-9sb8m" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.688050 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/94e437ba-f67c-41cb-887b-a1d977b041f8-horizon-secret-key\") pod \"horizon-75f5cb997-6q6lj\" (UID: \"94e437ba-f67c-41cb-887b-a1d977b041f8\") " pod="openstack/horizon-75f5cb997-6q6lj" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.688071 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2-config-data\") pod \"ceilometer-0\" (UID: \"ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2\") " pod="openstack/ceilometer-0" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.688100 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-982gj\" (UniqueName: \"kubernetes.io/projected/bec76c49-6c38-4168-ac7b-087460106d25-kube-api-access-982gj\") pod \"barbican-db-sync-9sb8m\" (UID: \"bec76c49-6c38-4168-ac7b-087460106d25\") " pod="openstack/barbican-db-sync-9sb8m" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.688125 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwwhf\" (UniqueName: 
\"kubernetes.io/projected/82036277-9b0e-4efd-8da5-9463b9998096-kube-api-access-gwwhf\") pod \"dnsmasq-dns-6f6f8cb849-cmf6h\" (UID: \"82036277-9b0e-4efd-8da5-9463b9998096\") " pod="openstack/dnsmasq-dns-6f6f8cb849-cmf6h" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.688147 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82036277-9b0e-4efd-8da5-9463b9998096-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6f8cb849-cmf6h\" (UID: \"82036277-9b0e-4efd-8da5-9463b9998096\") " pod="openstack/dnsmasq-dns-6f6f8cb849-cmf6h" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.688176 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82036277-9b0e-4efd-8da5-9463b9998096-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6f8cb849-cmf6h\" (UID: \"82036277-9b0e-4efd-8da5-9463b9998096\") " pod="openstack/dnsmasq-dns-6f6f8cb849-cmf6h" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.688229 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93e28fcf-1c97-40cf-bcdc-d63d2af19499-config-data\") pod \"placement-db-sync-425dh\" (UID: \"93e28fcf-1c97-40cf-bcdc-d63d2af19499\") " pod="openstack/placement-db-sync-425dh" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.688258 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2\") " pod="openstack/ceilometer-0" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.688288 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njwpz\" (UniqueName: \"kubernetes.io/projected/93e28fcf-1c97-40cf-bcdc-d63d2af19499-kube-api-access-njwpz\") pod \"placement-db-sync-425dh\" (UID: \"93e28fcf-1c97-40cf-bcdc-d63d2af19499\") " pod="openstack/placement-db-sync-425dh" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.688314 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj4qx\" (UniqueName: \"kubernetes.io/projected/ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2-kube-api-access-qj4qx\") pod \"ceilometer-0\" (UID: \"ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2\") " pod="openstack/ceilometer-0" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.688349 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82036277-9b0e-4efd-8da5-9463b9998096-dns-svc\") pod \"dnsmasq-dns-6f6f8cb849-cmf6h\" (UID: \"82036277-9b0e-4efd-8da5-9463b9998096\") " pod="openstack/dnsmasq-dns-6f6f8cb849-cmf6h" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.688371 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82036277-9b0e-4efd-8da5-9463b9998096-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6f8cb849-cmf6h\" (UID: \"82036277-9b0e-4efd-8da5-9463b9998096\") " pod="openstack/dnsmasq-dns-6f6f8cb849-cmf6h" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.688420 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/94e437ba-f67c-41cb-887b-a1d977b041f8-config-data\") pod \"horizon-75f5cb997-6q6lj\" (UID: \"94e437ba-f67c-41cb-887b-a1d977b041f8\") " pod="openstack/horizon-75f5cb997-6q6lj" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.688437 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94e437ba-f67c-41cb-887b-a1d977b041f8-logs\") pod \"horizon-75f5cb997-6q6lj\" (UID: \"94e437ba-f67c-41cb-887b-a1d977b041f8\") " pod="openstack/horizon-75f5cb997-6q6lj" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.688473 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2\") " pod="openstack/ceilometer-0" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.688949 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93e28fcf-1c97-40cf-bcdc-d63d2af19499-logs\") pod \"placement-db-sync-425dh\" (UID: \"93e28fcf-1c97-40cf-bcdc-d63d2af19499\") " pod="openstack/placement-db-sync-425dh" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.694435 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93e28fcf-1c97-40cf-bcdc-d63d2af19499-combined-ca-bundle\") pod \"placement-db-sync-425dh\" (UID: \"93e28fcf-1c97-40cf-bcdc-d63d2af19499\") " pod="openstack/placement-db-sync-425dh" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.698365 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93e28fcf-1c97-40cf-bcdc-d63d2af19499-scripts\") pod \"placement-db-sync-425dh\" (UID: \"93e28fcf-1c97-40cf-bcdc-d63d2af19499\") " pod="openstack/placement-db-sync-425dh" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.716569 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93e28fcf-1c97-40cf-bcdc-d63d2af19499-config-data\") pod \"placement-db-sync-425dh\" (UID: \"93e28fcf-1c97-40cf-bcdc-d63d2af19499\") " pod="openstack/placement-db-sync-425dh" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.728745 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-dr79l" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.729330 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-vz5pd" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.736157 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njwpz\" (UniqueName: \"kubernetes.io/projected/93e28fcf-1c97-40cf-bcdc-d63d2af19499-kube-api-access-njwpz\") pod \"placement-db-sync-425dh\" (UID: \"93e28fcf-1c97-40cf-bcdc-d63d2af19499\") " pod="openstack/placement-db-sync-425dh" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.774900 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-425dh" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.789757 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm9gx\" (UniqueName: \"kubernetes.io/projected/938c9f0b-c5af-49f8-9cc2-5e87f688775b-kube-api-access-zm9gx\") pod \"glance-default-external-api-0\" (UID: \"938c9f0b-c5af-49f8-9cc2-5e87f688775b\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.789825 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/94e437ba-f67c-41cb-887b-a1d977b041f8-scripts\") pod \"horizon-75f5cb997-6q6lj\" (UID: \"94e437ba-f67c-41cb-887b-a1d977b041f8\") " pod="openstack/horizon-75f5cb997-6q6lj" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.789861 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2-log-httpd\") pod \"ceilometer-0\" (UID: \"ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2\") " pod="openstack/ceilometer-0" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.789891 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82036277-9b0e-4efd-8da5-9463b9998096-config\") pod \"dnsmasq-dns-6f6f8cb849-cmf6h\" (UID: \"82036277-9b0e-4efd-8da5-9463b9998096\") " pod="openstack/dnsmasq-dns-6f6f8cb849-cmf6h" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.789922 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2-run-httpd\") pod \"ceilometer-0\" (UID: \"ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2\") " pod="openstack/ceilometer-0" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.789945 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2-scripts\") pod \"ceilometer-0\" (UID: \"ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2\") " pod="openstack/ceilometer-0" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.789990 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bec76c49-6c38-4168-ac7b-087460106d25-db-sync-config-data\") pod \"barbican-db-sync-9sb8m\" (UID: \"bec76c49-6c38-4168-ac7b-087460106d25\") " pod="openstack/barbican-db-sync-9sb8m" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.790025 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktdqj\" (UniqueName: \"kubernetes.io/projected/94e437ba-f67c-41cb-887b-a1d977b041f8-kube-api-access-ktdqj\") pod \"horizon-75f5cb997-6q6lj\" (UID: \"94e437ba-f67c-41cb-887b-a1d977b041f8\") " pod="openstack/horizon-75f5cb997-6q6lj" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.790051 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"938c9f0b-c5af-49f8-9cc2-5e87f688775b\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.790077 4919 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bec76c49-6c38-4168-ac7b-087460106d25-combined-ca-bundle\") pod \"barbican-db-sync-9sb8m\" (UID: \"bec76c49-6c38-4168-ac7b-087460106d25\") " pod="openstack/barbican-db-sync-9sb8m" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.790094 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/938c9f0b-c5af-49f8-9cc2-5e87f688775b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"938c9f0b-c5af-49f8-9cc2-5e87f688775b\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.790124 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/94e437ba-f67c-41cb-887b-a1d977b041f8-horizon-secret-key\") pod \"horizon-75f5cb997-6q6lj\" (UID: \"94e437ba-f67c-41cb-887b-a1d977b041f8\") " pod="openstack/horizon-75f5cb997-6q6lj" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.790149 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2-config-data\") pod \"ceilometer-0\" (UID: \"ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2\") " pod="openstack/ceilometer-0" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.790194 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-982gj\" (UniqueName: \"kubernetes.io/projected/bec76c49-6c38-4168-ac7b-087460106d25-kube-api-access-982gj\") pod \"barbican-db-sync-9sb8m\" (UID: \"bec76c49-6c38-4168-ac7b-087460106d25\") " pod="openstack/barbican-db-sync-9sb8m" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.790295 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwwhf\" (UniqueName: \"kubernetes.io/projected/82036277-9b0e-4efd-8da5-9463b9998096-kube-api-access-gwwhf\") pod \"dnsmasq-dns-6f6f8cb849-cmf6h\" (UID: \"82036277-9b0e-4efd-8da5-9463b9998096\") " pod="openstack/dnsmasq-dns-6f6f8cb849-cmf6h" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.790327 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82036277-9b0e-4efd-8da5-9463b9998096-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6f8cb849-cmf6h\" (UID: \"82036277-9b0e-4efd-8da5-9463b9998096\") " pod="openstack/dnsmasq-dns-6f6f8cb849-cmf6h" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.790353 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82036277-9b0e-4efd-8da5-9463b9998096-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6f8cb849-cmf6h\" (UID: \"82036277-9b0e-4efd-8da5-9463b9998096\") " pod="openstack/dnsmasq-dns-6f6f8cb849-cmf6h" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.790376 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2\") " pod="openstack/ceilometer-0" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.790392 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qj4qx\" (UniqueName: 
\"kubernetes.io/projected/ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2-kube-api-access-qj4qx\") pod \"ceilometer-0\" (UID: \"ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2\") " pod="openstack/ceilometer-0" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.790425 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/938c9f0b-c5af-49f8-9cc2-5e87f688775b-logs\") pod \"glance-default-external-api-0\" (UID: \"938c9f0b-c5af-49f8-9cc2-5e87f688775b\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.790452 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82036277-9b0e-4efd-8da5-9463b9998096-dns-svc\") pod \"dnsmasq-dns-6f6f8cb849-cmf6h\" (UID: \"82036277-9b0e-4efd-8da5-9463b9998096\") " pod="openstack/dnsmasq-dns-6f6f8cb849-cmf6h" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.790478 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82036277-9b0e-4efd-8da5-9463b9998096-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6f8cb849-cmf6h\" (UID: \"82036277-9b0e-4efd-8da5-9463b9998096\") " pod="openstack/dnsmasq-dns-6f6f8cb849-cmf6h" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.790526 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/938c9f0b-c5af-49f8-9cc2-5e87f688775b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"938c9f0b-c5af-49f8-9cc2-5e87f688775b\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.790584 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/94e437ba-f67c-41cb-887b-a1d977b041f8-config-data\") pod \"horizon-75f5cb997-6q6lj\" (UID: \"94e437ba-f67c-41cb-887b-a1d977b041f8\") " pod="openstack/horizon-75f5cb997-6q6lj" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.790612 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94e437ba-f67c-41cb-887b-a1d977b041f8-logs\") pod \"horizon-75f5cb997-6q6lj\" (UID: \"94e437ba-f67c-41cb-887b-a1d977b041f8\") " pod="openstack/horizon-75f5cb997-6q6lj" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.790643 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/938c9f0b-c5af-49f8-9cc2-5e87f688775b-config-data\") pod \"glance-default-external-api-0\" (UID: \"938c9f0b-c5af-49f8-9cc2-5e87f688775b\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.790670 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/938c9f0b-c5af-49f8-9cc2-5e87f688775b-scripts\") pod \"glance-default-external-api-0\" (UID: \"938c9f0b-c5af-49f8-9cc2-5e87f688775b\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.790698 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2-sg-core-conf-yaml\") pod 
\"ceilometer-0\" (UID: \"ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2\") " pod="openstack/ceilometer-0" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.790863 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/938c9f0b-c5af-49f8-9cc2-5e87f688775b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"938c9f0b-c5af-49f8-9cc2-5e87f688775b\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.792131 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/94e437ba-f67c-41cb-887b-a1d977b041f8-scripts\") pod \"horizon-75f5cb997-6q6lj\" (UID: \"94e437ba-f67c-41cb-887b-a1d977b041f8\") " pod="openstack/horizon-75f5cb997-6q6lj" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.806095 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82036277-9b0e-4efd-8da5-9463b9998096-config\") pod \"dnsmasq-dns-6f6f8cb849-cmf6h\" (UID: \"82036277-9b0e-4efd-8da5-9463b9998096\") " pod="openstack/dnsmasq-dns-6f6f8cb849-cmf6h" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.806399 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2-run-httpd\") pod \"ceilometer-0\" (UID: \"ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2\") " pod="openstack/ceilometer-0" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.810013 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82036277-9b0e-4efd-8da5-9463b9998096-dns-svc\") pod \"dnsmasq-dns-6f6f8cb849-cmf6h\" (UID: \"82036277-9b0e-4efd-8da5-9463b9998096\") " pod="openstack/dnsmasq-dns-6f6f8cb849-cmf6h" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.810114 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2-log-httpd\") pod \"ceilometer-0\" (UID: \"ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2\") " pod="openstack/ceilometer-0" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.810357 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82036277-9b0e-4efd-8da5-9463b9998096-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6f8cb849-cmf6h\" (UID: \"82036277-9b0e-4efd-8da5-9463b9998096\") " pod="openstack/dnsmasq-dns-6f6f8cb849-cmf6h" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.810483 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/94e437ba-f67c-41cb-887b-a1d977b041f8-config-data\") pod \"horizon-75f5cb997-6q6lj\" (UID: \"94e437ba-f67c-41cb-887b-a1d977b041f8\") " pod="openstack/horizon-75f5cb997-6q6lj" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.810504 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94e437ba-f67c-41cb-887b-a1d977b041f8-logs\") pod \"horizon-75f5cb997-6q6lj\" (UID: \"94e437ba-f67c-41cb-887b-a1d977b041f8\") " pod="openstack/horizon-75f5cb997-6q6lj" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.811263 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2-scripts\") pod \"ceilometer-0\" (UID: \"ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2\") " pod="openstack/ceilometer-0" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.811592 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82036277-9b0e-4efd-8da5-9463b9998096-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6f8cb849-cmf6h\" (UID: \"82036277-9b0e-4efd-8da5-9463b9998096\") " pod="openstack/dnsmasq-dns-6f6f8cb849-cmf6h" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.814477 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82036277-9b0e-4efd-8da5-9463b9998096-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6f8cb849-cmf6h\" (UID: \"82036277-9b0e-4efd-8da5-9463b9998096\") " pod="openstack/dnsmasq-dns-6f6f8cb849-cmf6h" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.831993 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bec76c49-6c38-4168-ac7b-087460106d25-combined-ca-bundle\") pod \"barbican-db-sync-9sb8m\" (UID: \"bec76c49-6c38-4168-ac7b-087460106d25\") " pod="openstack/barbican-db-sync-9sb8m" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.833265 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bec76c49-6c38-4168-ac7b-087460106d25-db-sync-config-data\") pod \"barbican-db-sync-9sb8m\" (UID: \"bec76c49-6c38-4168-ac7b-087460106d25\") " pod="openstack/barbican-db-sync-9sb8m" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.837565 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/94e437ba-f67c-41cb-887b-a1d977b041f8-horizon-secret-key\") pod \"horizon-75f5cb997-6q6lj\" (UID: \"94e437ba-f67c-41cb-887b-a1d977b041f8\") " pod="openstack/horizon-75f5cb997-6q6lj" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.837786 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2\") " pod="openstack/ceilometer-0" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.838351 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-982gj\" (UniqueName: \"kubernetes.io/projected/bec76c49-6c38-4168-ac7b-087460106d25-kube-api-access-982gj\") pod \"barbican-db-sync-9sb8m\" (UID: \"bec76c49-6c38-4168-ac7b-087460106d25\") " pod="openstack/barbican-db-sync-9sb8m" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.839119 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwwhf\" (UniqueName: \"kubernetes.io/projected/82036277-9b0e-4efd-8da5-9463b9998096-kube-api-access-gwwhf\") pod \"dnsmasq-dns-6f6f8cb849-cmf6h\" (UID: \"82036277-9b0e-4efd-8da5-9463b9998096\") " pod="openstack/dnsmasq-dns-6f6f8cb849-cmf6h" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.840508 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qj4qx\" (UniqueName: \"kubernetes.io/projected/ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2-kube-api-access-qj4qx\") pod \"ceilometer-0\" (UID: \"ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2\") " 
pod="openstack/ceilometer-0" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.843554 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktdqj\" (UniqueName: \"kubernetes.io/projected/94e437ba-f67c-41cb-887b-a1d977b041f8-kube-api-access-ktdqj\") pod \"horizon-75f5cb997-6q6lj\" (UID: \"94e437ba-f67c-41cb-887b-a1d977b041f8\") " pod="openstack/horizon-75f5cb997-6q6lj" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.858270 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2\") " pod="openstack/ceilometer-0" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.876990 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2-config-data\") pod \"ceilometer-0\" (UID: \"ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2\") " pod="openstack/ceilometer-0" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.882612 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-9sb8m" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.892577 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/938c9f0b-c5af-49f8-9cc2-5e87f688775b-logs\") pod \"glance-default-external-api-0\" (UID: \"938c9f0b-c5af-49f8-9cc2-5e87f688775b\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.892634 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/938c9f0b-c5af-49f8-9cc2-5e87f688775b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"938c9f0b-c5af-49f8-9cc2-5e87f688775b\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.892720 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/938c9f0b-c5af-49f8-9cc2-5e87f688775b-config-data\") pod \"glance-default-external-api-0\" (UID: \"938c9f0b-c5af-49f8-9cc2-5e87f688775b\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.892753 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/938c9f0b-c5af-49f8-9cc2-5e87f688775b-scripts\") pod \"glance-default-external-api-0\" (UID: \"938c9f0b-c5af-49f8-9cc2-5e87f688775b\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.892799 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/938c9f0b-c5af-49f8-9cc2-5e87f688775b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"938c9f0b-c5af-49f8-9cc2-5e87f688775b\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.892836 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm9gx\" (UniqueName: \"kubernetes.io/projected/938c9f0b-c5af-49f8-9cc2-5e87f688775b-kube-api-access-zm9gx\") pod \"glance-default-external-api-0\" (UID: 
\"938c9f0b-c5af-49f8-9cc2-5e87f688775b\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.892891 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"938c9f0b-c5af-49f8-9cc2-5e87f688775b\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.892935 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/938c9f0b-c5af-49f8-9cc2-5e87f688775b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"938c9f0b-c5af-49f8-9cc2-5e87f688775b\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.893526 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/938c9f0b-c5af-49f8-9cc2-5e87f688775b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"938c9f0b-c5af-49f8-9cc2-5e87f688775b\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.894603 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/938c9f0b-c5af-49f8-9cc2-5e87f688775b-logs\") pod \"glance-default-external-api-0\" (UID: \"938c9f0b-c5af-49f8-9cc2-5e87f688775b\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.896156 4919 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"938c9f0b-c5af-49f8-9cc2-5e87f688775b\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.908269 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/938c9f0b-c5af-49f8-9cc2-5e87f688775b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"938c9f0b-c5af-49f8-9cc2-5e87f688775b\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.908785 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f6f8cb849-cmf6h" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.907200 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/938c9f0b-c5af-49f8-9cc2-5e87f688775b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"938c9f0b-c5af-49f8-9cc2-5e87f688775b\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.909698 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/938c9f0b-c5af-49f8-9cc2-5e87f688775b-config-data\") pod \"glance-default-external-api-0\" (UID: \"938c9f0b-c5af-49f8-9cc2-5e87f688775b\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.912310 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm9gx\" (UniqueName: \"kubernetes.io/projected/938c9f0b-c5af-49f8-9cc2-5e87f688775b-kube-api-access-zm9gx\") pod \"glance-default-external-api-0\" (UID: \"938c9f0b-c5af-49f8-9cc2-5e87f688775b\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.916586 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/938c9f0b-c5af-49f8-9cc2-5e87f688775b-scripts\") pod \"glance-default-external-api-0\" (UID: \"938c9f0b-c5af-49f8-9cc2-5e87f688775b\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.923500 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-75f5cb997-6q6lj" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.953616 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"938c9f0b-c5af-49f8-9cc2-5e87f688775b\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.953832 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 09 13:50:05 crc kubenswrapper[4919]: I0109 13:50:05.976998 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 09 13:50:06 crc kubenswrapper[4919]: I0109 13:50:06.242306 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 13:50:06 crc kubenswrapper[4919]: I0109 13:50:06.244361 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 09 13:50:06 crc kubenswrapper[4919]: I0109 13:50:06.249769 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-8gzn7"] Jan 09 13:50:06 crc kubenswrapper[4919]: I0109 13:50:06.256117 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 09 13:50:06 crc kubenswrapper[4919]: I0109 13:50:06.256352 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 09 13:50:06 crc kubenswrapper[4919]: I0109 13:50:06.262277 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 13:50:06 crc kubenswrapper[4919]: I0109 13:50:06.346023 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fce1b4cf-5ae0-4131-87c1-d9d55571dae5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fce1b4cf-5ae0-4131-87c1-d9d55571dae5\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:06 crc kubenswrapper[4919]: I0109 13:50:06.346071 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fce1b4cf-5ae0-4131-87c1-d9d55571dae5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fce1b4cf-5ae0-4131-87c1-d9d55571dae5\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:06 crc kubenswrapper[4919]: I0109 13:50:06.346101 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"fce1b4cf-5ae0-4131-87c1-d9d55571dae5\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:06 crc kubenswrapper[4919]: I0109 13:50:06.346134 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fce1b4cf-5ae0-4131-87c1-d9d55571dae5-logs\") pod \"glance-default-internal-api-0\" (UID: \"fce1b4cf-5ae0-4131-87c1-d9d55571dae5\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:06 crc kubenswrapper[4919]: I0109 13:50:06.346155 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hj5f\" (UniqueName: \"kubernetes.io/projected/fce1b4cf-5ae0-4131-87c1-d9d55571dae5-kube-api-access-2hj5f\") pod \"glance-default-internal-api-0\" (UID: \"fce1b4cf-5ae0-4131-87c1-d9d55571dae5\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:06 crc kubenswrapper[4919]: I0109 13:50:06.346196 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fce1b4cf-5ae0-4131-87c1-d9d55571dae5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fce1b4cf-5ae0-4131-87c1-d9d55571dae5\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:06 crc kubenswrapper[4919]: I0109 13:50:06.346270 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fce1b4cf-5ae0-4131-87c1-d9d55571dae5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fce1b4cf-5ae0-4131-87c1-d9d55571dae5\") " 
pod="openstack/glance-default-internal-api-0" Jan 09 13:50:06 crc kubenswrapper[4919]: I0109 13:50:06.346287 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fce1b4cf-5ae0-4131-87c1-d9d55571dae5-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"fce1b4cf-5ae0-4131-87c1-d9d55571dae5\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:06 crc kubenswrapper[4919]: I0109 13:50:06.451735 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fce1b4cf-5ae0-4131-87c1-d9d55571dae5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fce1b4cf-5ae0-4131-87c1-d9d55571dae5\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:06 crc kubenswrapper[4919]: I0109 13:50:06.452160 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fce1b4cf-5ae0-4131-87c1-d9d55571dae5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fce1b4cf-5ae0-4131-87c1-d9d55571dae5\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:06 crc kubenswrapper[4919]: I0109 13:50:06.452188 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"fce1b4cf-5ae0-4131-87c1-d9d55571dae5\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:06 crc kubenswrapper[4919]: I0109 13:50:06.452258 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fce1b4cf-5ae0-4131-87c1-d9d55571dae5-logs\") pod \"glance-default-internal-api-0\" (UID: \"fce1b4cf-5ae0-4131-87c1-d9d55571dae5\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:06 crc kubenswrapper[4919]: I0109 13:50:06.452299 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hj5f\" (UniqueName: \"kubernetes.io/projected/fce1b4cf-5ae0-4131-87c1-d9d55571dae5-kube-api-access-2hj5f\") pod \"glance-default-internal-api-0\" (UID: \"fce1b4cf-5ae0-4131-87c1-d9d55571dae5\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:06 crc kubenswrapper[4919]: I0109 13:50:06.452373 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fce1b4cf-5ae0-4131-87c1-d9d55571dae5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fce1b4cf-5ae0-4131-87c1-d9d55571dae5\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:06 crc kubenswrapper[4919]: I0109 13:50:06.452493 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fce1b4cf-5ae0-4131-87c1-d9d55571dae5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fce1b4cf-5ae0-4131-87c1-d9d55571dae5\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:06 crc kubenswrapper[4919]: I0109 13:50:06.452531 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fce1b4cf-5ae0-4131-87c1-d9d55571dae5-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"fce1b4cf-5ae0-4131-87c1-d9d55571dae5\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:06 crc 
kubenswrapper[4919]: I0109 13:50:06.453913 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fce1b4cf-5ae0-4131-87c1-d9d55571dae5-logs\") pod \"glance-default-internal-api-0\" (UID: \"fce1b4cf-5ae0-4131-87c1-d9d55571dae5\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:06 crc kubenswrapper[4919]: I0109 13:50:06.457420 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fce1b4cf-5ae0-4131-87c1-d9d55571dae5-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"fce1b4cf-5ae0-4131-87c1-d9d55571dae5\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:06 crc kubenswrapper[4919]: I0109 13:50:06.459395 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fce1b4cf-5ae0-4131-87c1-d9d55571dae5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fce1b4cf-5ae0-4131-87c1-d9d55571dae5\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:06 crc kubenswrapper[4919]: I0109 13:50:06.462499 4919 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"fce1b4cf-5ae0-4131-87c1-d9d55571dae5\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Jan 09 13:50:06 crc kubenswrapper[4919]: I0109 13:50:06.488925 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fce1b4cf-5ae0-4131-87c1-d9d55571dae5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fce1b4cf-5ae0-4131-87c1-d9d55571dae5\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:06 crc kubenswrapper[4919]: I0109 13:50:06.496422 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fce1b4cf-5ae0-4131-87c1-d9d55571dae5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fce1b4cf-5ae0-4131-87c1-d9d55571dae5\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:06 crc kubenswrapper[4919]: I0109 13:50:06.507535 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hj5f\" (UniqueName: \"kubernetes.io/projected/fce1b4cf-5ae0-4131-87c1-d9d55571dae5-kube-api-access-2hj5f\") pod \"glance-default-internal-api-0\" (UID: \"fce1b4cf-5ae0-4131-87c1-d9d55571dae5\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:06 crc kubenswrapper[4919]: I0109 13:50:06.511812 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fce1b4cf-5ae0-4131-87c1-d9d55571dae5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fce1b4cf-5ae0-4131-87c1-d9d55571dae5\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:06 crc kubenswrapper[4919]: I0109 13:50:06.591310 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"fce1b4cf-5ae0-4131-87c1-d9d55571dae5\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:06 crc kubenswrapper[4919]: I0109 13:50:06.672345 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8gzn7" 
event={"ID":"6a6c75da-52dc-426a-95e8-e7d0a0ff3910","Type":"ContainerStarted","Data":"7e643b6da4ef32701befc9e24432fa3f7e028ae9c9cd5891e1c2ac67fe07bd61"} Jan 09 13:50:06 crc kubenswrapper[4919]: I0109 13:50:06.688114 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.038864 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dfc89d77-cdrjp" Jan 09 13:50:07 crc kubenswrapper[4919]: W0109 13:50:07.143410 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a9f81fc_067d_404d_b104_bba333d3911a.slice/crio-c6b0b226e71e563793d5d8ba1a5e2b23666bf61285a03da27e1c18aa9190a0eb WatchSource:0}: Error finding container c6b0b226e71e563793d5d8ba1a5e2b23666bf61285a03da27e1c18aa9190a0eb: Status 404 returned error can't find the container with id c6b0b226e71e563793d5d8ba1a5e2b23666bf61285a03da27e1c18aa9190a0eb Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.149717 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-vz5pd"] Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.175878 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/de460845-14e7-4fa3-bc01-4bc4a40b18df-dns-swift-storage-0\") pod \"de460845-14e7-4fa3-bc01-4bc4a40b18df\" (UID: \"de460845-14e7-4fa3-bc01-4bc4a40b18df\") " Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.176356 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de460845-14e7-4fa3-bc01-4bc4a40b18df-ovsdbserver-sb\") pod \"de460845-14e7-4fa3-bc01-4bc4a40b18df\" (UID: \"de460845-14e7-4fa3-bc01-4bc4a40b18df\") " Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.176439 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzkwk\" (UniqueName: \"kubernetes.io/projected/de460845-14e7-4fa3-bc01-4bc4a40b18df-kube-api-access-dzkwk\") pod \"de460845-14e7-4fa3-bc01-4bc4a40b18df\" (UID: \"de460845-14e7-4fa3-bc01-4bc4a40b18df\") " Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.176525 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de460845-14e7-4fa3-bc01-4bc4a40b18df-config\") pod \"de460845-14e7-4fa3-bc01-4bc4a40b18df\" (UID: \"de460845-14e7-4fa3-bc01-4bc4a40b18df\") " Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.176602 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de460845-14e7-4fa3-bc01-4bc4a40b18df-ovsdbserver-nb\") pod \"de460845-14e7-4fa3-bc01-4bc4a40b18df\" (UID: \"de460845-14e7-4fa3-bc01-4bc4a40b18df\") " Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.176637 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de460845-14e7-4fa3-bc01-4bc4a40b18df-dns-svc\") pod \"de460845-14e7-4fa3-bc01-4bc4a40b18df\" (UID: \"de460845-14e7-4fa3-bc01-4bc4a40b18df\") " Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.193053 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/de460845-14e7-4fa3-bc01-4bc4a40b18df-kube-api-access-dzkwk" (OuterVolumeSpecName: "kube-api-access-dzkwk") pod "de460845-14e7-4fa3-bc01-4bc4a40b18df" (UID: "de460845-14e7-4fa3-bc01-4bc4a40b18df"). InnerVolumeSpecName "kube-api-access-dzkwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.254991 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-67b8c5bf6f-wsqfv"] Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.288996 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzkwk\" (UniqueName: \"kubernetes.io/projected/de460845-14e7-4fa3-bc01-4bc4a40b18df-kube-api-access-dzkwk\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.289191 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-9sb8m"] Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.356909 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.419364 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-dr79l"] Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.432141 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de460845-14e7-4fa3-bc01-4bc4a40b18df-config" (OuterVolumeSpecName: "config") pod "de460845-14e7-4fa3-bc01-4bc4a40b18df" (UID: "de460845-14e7-4fa3-bc01-4bc4a40b18df"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.434405 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fdbfbc95f-6fk5j"] Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.467135 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-67b8c5bf6f-wsqfv"] Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.478259 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de460845-14e7-4fa3-bc01-4bc4a40b18df-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "de460845-14e7-4fa3-bc01-4bc4a40b18df" (UID: "de460845-14e7-4fa3-bc01-4bc4a40b18df"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.507552 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6965595b5-x8vc9"] Jan 09 13:50:07 crc kubenswrapper[4919]: E0109 13:50:07.508049 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de460845-14e7-4fa3-bc01-4bc4a40b18df" containerName="dnsmasq-dns" Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.508065 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="de460845-14e7-4fa3-bc01-4bc4a40b18df" containerName="dnsmasq-dns" Jan 09 13:50:07 crc kubenswrapper[4919]: E0109 13:50:07.508089 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de460845-14e7-4fa3-bc01-4bc4a40b18df" containerName="init" Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.508096 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="de460845-14e7-4fa3-bc01-4bc4a40b18df" containerName="init" Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.509018 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="de460845-14e7-4fa3-bc01-4bc4a40b18df" containerName="dnsmasq-dns" Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.510140 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6965595b5-x8vc9" Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.514718 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81007298-01d9-43a2-8e26-33448a1d17e0-scripts\") pod \"horizon-6965595b5-x8vc9\" (UID: \"81007298-01d9-43a2-8e26-33448a1d17e0\") " pod="openstack/horizon-6965595b5-x8vc9" Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.529920 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/81007298-01d9-43a2-8e26-33448a1d17e0-horizon-secret-key\") pod \"horizon-6965595b5-x8vc9\" (UID: \"81007298-01d9-43a2-8e26-33448a1d17e0\") " pod="openstack/horizon-6965595b5-x8vc9" Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.530022 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcjkw\" (UniqueName: \"kubernetes.io/projected/81007298-01d9-43a2-8e26-33448a1d17e0-kube-api-access-vcjkw\") pod \"horizon-6965595b5-x8vc9\" (UID: \"81007298-01d9-43a2-8e26-33448a1d17e0\") " pod="openstack/horizon-6965595b5-x8vc9" Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.530090 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81007298-01d9-43a2-8e26-33448a1d17e0-logs\") pod \"horizon-6965595b5-x8vc9\" (UID: \"81007298-01d9-43a2-8e26-33448a1d17e0\") " pod="openstack/horizon-6965595b5-x8vc9" Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.530369 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/81007298-01d9-43a2-8e26-33448a1d17e0-config-data\") pod \"horizon-6965595b5-x8vc9\" (UID: \"81007298-01d9-43a2-8e26-33448a1d17e0\") " pod="openstack/horizon-6965595b5-x8vc9" Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.530490 4919 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de460845-14e7-4fa3-bc01-4bc4a40b18df-config\") on node 
\"crc\" DevicePath \"\"" Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.530521 4919 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de460845-14e7-4fa3-bc01-4bc4a40b18df-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.537736 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.565380 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6965595b5-x8vc9"] Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.615995 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.620019 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de460845-14e7-4fa3-bc01-4bc4a40b18df-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "de460845-14e7-4fa3-bc01-4bc4a40b18df" (UID: "de460845-14e7-4fa3-bc01-4bc4a40b18df"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.632323 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/81007298-01d9-43a2-8e26-33448a1d17e0-config-data\") pod \"horizon-6965595b5-x8vc9\" (UID: \"81007298-01d9-43a2-8e26-33448a1d17e0\") " pod="openstack/horizon-6965595b5-x8vc9" Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.632403 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81007298-01d9-43a2-8e26-33448a1d17e0-scripts\") pod \"horizon-6965595b5-x8vc9\" (UID: \"81007298-01d9-43a2-8e26-33448a1d17e0\") " pod="openstack/horizon-6965595b5-x8vc9" Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.632475 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/81007298-01d9-43a2-8e26-33448a1d17e0-horizon-secret-key\") pod \"horizon-6965595b5-x8vc9\" (UID: \"81007298-01d9-43a2-8e26-33448a1d17e0\") " pod="openstack/horizon-6965595b5-x8vc9" Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.632501 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcjkw\" (UniqueName: \"kubernetes.io/projected/81007298-01d9-43a2-8e26-33448a1d17e0-kube-api-access-vcjkw\") pod \"horizon-6965595b5-x8vc9\" (UID: \"81007298-01d9-43a2-8e26-33448a1d17e0\") " pod="openstack/horizon-6965595b5-x8vc9" Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.632530 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81007298-01d9-43a2-8e26-33448a1d17e0-logs\") pod \"horizon-6965595b5-x8vc9\" (UID: \"81007298-01d9-43a2-8e26-33448a1d17e0\") " pod="openstack/horizon-6965595b5-x8vc9" Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.633286 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81007298-01d9-43a2-8e26-33448a1d17e0-scripts\") pod \"horizon-6965595b5-x8vc9\" (UID: \"81007298-01d9-43a2-8e26-33448a1d17e0\") " pod="openstack/horizon-6965595b5-x8vc9" Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.633980 4919 reconciler_common.go:293] "Volume 
detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/de460845-14e7-4fa3-bc01-4bc4a40b18df-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.634015 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81007298-01d9-43a2-8e26-33448a1d17e0-logs\") pod \"horizon-6965595b5-x8vc9\" (UID: \"81007298-01d9-43a2-8e26-33448a1d17e0\") " pod="openstack/horizon-6965595b5-x8vc9" Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.635566 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/81007298-01d9-43a2-8e26-33448a1d17e0-config-data\") pod \"horizon-6965595b5-x8vc9\" (UID: \"81007298-01d9-43a2-8e26-33448a1d17e0\") " pod="openstack/horizon-6965595b5-x8vc9" Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.643656 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/81007298-01d9-43a2-8e26-33448a1d17e0-horizon-secret-key\") pod \"horizon-6965595b5-x8vc9\" (UID: \"81007298-01d9-43a2-8e26-33448a1d17e0\") " pod="openstack/horizon-6965595b5-x8vc9" Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.653905 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-cmf6h"] Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.676889 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcjkw\" (UniqueName: \"kubernetes.io/projected/81007298-01d9-43a2-8e26-33448a1d17e0-kube-api-access-vcjkw\") pod \"horizon-6965595b5-x8vc9\" (UID: \"81007298-01d9-43a2-8e26-33448a1d17e0\") " pod="openstack/horizon-6965595b5-x8vc9" Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.690549 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-425dh"] Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.707014 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.710648 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de460845-14e7-4fa3-bc01-4bc4a40b18df-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "de460845-14e7-4fa3-bc01-4bc4a40b18df" (UID: "de460845-14e7-4fa3-bc01-4bc4a40b18df"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.718379 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-75f5cb997-6q6lj"] Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.728161 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f8cb849-cmf6h" event={"ID":"82036277-9b0e-4efd-8da5-9463b9998096","Type":"ContainerStarted","Data":"5da999fc322046caae7652bca8503ae93d9ba6625daa09989c69bda0cfd6eb9e"} Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.735521 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75f5cb997-6q6lj" event={"ID":"94e437ba-f67c-41cb-887b-a1d977b041f8","Type":"ContainerStarted","Data":"c801586b8625963af6cc0c75bf1cd0d8e350ba90afc915d952b22e171ebc2814"} Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.737456 4919 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de460845-14e7-4fa3-bc01-4bc4a40b18df-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.742914 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dfc89d77-cdrjp" Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.742913 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dfc89d77-cdrjp" event={"ID":"de460845-14e7-4fa3-bc01-4bc4a40b18df","Type":"ContainerDied","Data":"89999dbda78aef1e5aa9bd36d0bb202027bd640e4c4a2d6349646e9c3e66aedc"} Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.743285 4919 scope.go:117] "RemoveContainer" containerID="c54e9cdf147699bcc38281c21b4df4d9d5dc8f768de4c43a93e94a1a1e5c15a7" Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.752516 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-vz5pd" event={"ID":"0a9f81fc-067d-404d-b104-bba333d3911a","Type":"ContainerStarted","Data":"c6b0b226e71e563793d5d8ba1a5e2b23666bf61285a03da27e1c18aa9190a0eb"} Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.758991 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-9sb8m" event={"ID":"bec76c49-6c38-4168-ac7b-087460106d25","Type":"ContainerStarted","Data":"25ec9a1e6e956b733a250dbac76c6dbdf768a99a2ad7c21ebaa4d35e2e7d3c3b"} Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.761173 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.769189 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de460845-14e7-4fa3-bc01-4bc4a40b18df-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "de460845-14e7-4fa3-bc01-4bc4a40b18df" (UID: "de460845-14e7-4fa3-bc01-4bc4a40b18df"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.769631 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67b8c5bf6f-wsqfv" event={"ID":"da695d3c-0710-4113-ad5c-6168aa3bbe2b","Type":"ContainerStarted","Data":"55e704a6767373a0bdbe3a2d11e2adb20d59a3bc76fd1fab4f2be9a1c1e31895"} Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.771675 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-dr79l" event={"ID":"a26e4dbc-6f44-4723-a81b-7bd05ca1283b","Type":"ContainerStarted","Data":"e5f13f53cccc56b9ae0aa631d93ee9417ee43d28fa55ba3aa2fc3e535ad82c29"} Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.776526 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2","Type":"ContainerStarted","Data":"ffc03854ca52909d1d132a502860c7561dde167bfec904951f573369ff08f806"} Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.779745 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fdbfbc95f-6fk5j" event={"ID":"c88a69c9-d9d7-4da7-8f1f-e7e446c8655d","Type":"ContainerStarted","Data":"7085b473fe6b6bd18b8a493b2a037609e6e2f7ee2bdb13553333aad9a1d82a95"} Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.794490 4919 scope.go:117] "RemoveContainer" containerID="8aefffbf1a3218e5dd1cbaac0070dccc19f36bda8e1f8d5ae8c2d6e1b5d95c13" Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.841923 4919 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de460845-14e7-4fa3-bc01-4bc4a40b18df-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.859853 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6965595b5-x8vc9" Jan 09 13:50:07 crc kubenswrapper[4919]: I0109 13:50:07.952288 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 13:50:08 crc kubenswrapper[4919]: I0109 13:50:08.179977 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dfc89d77-cdrjp"] Jan 09 13:50:08 crc kubenswrapper[4919]: I0109 13:50:08.190995 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74dfc89d77-cdrjp"] Jan 09 13:50:08 crc kubenswrapper[4919]: I0109 13:50:08.366277 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6965595b5-x8vc9"] Jan 09 13:50:08 crc kubenswrapper[4919]: I0109 13:50:08.776322 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de460845-14e7-4fa3-bc01-4bc4a40b18df" path="/var/lib/kubelet/pods/de460845-14e7-4fa3-bc01-4bc4a40b18df/volumes" Jan 09 13:50:08 crc kubenswrapper[4919]: I0109 13:50:08.794223 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fce1b4cf-5ae0-4131-87c1-d9d55571dae5","Type":"ContainerStarted","Data":"a3d3bb81f443698b53ee66cc4b1ea0259bfcf2a405f1dbe7ef7853c2a010ab15"} Jan 09 13:50:08 crc kubenswrapper[4919]: I0109 13:50:08.797592 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6965595b5-x8vc9" event={"ID":"81007298-01d9-43a2-8e26-33448a1d17e0","Type":"ContainerStarted","Data":"66d1c24940a0833ffb4306467f957bcbc4ad6ff895710c9816e1b628f732334b"} Jan 09 13:50:08 crc kubenswrapper[4919]: I0109 13:50:08.799033 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"938c9f0b-c5af-49f8-9cc2-5e87f688775b","Type":"ContainerStarted","Data":"edccb6c845953c53553bf3a77be58effadc7875518b359ce13dd95194a2f8416"} Jan 09 13:50:08 crc kubenswrapper[4919]: I0109 13:50:08.800060 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-425dh" event={"ID":"93e28fcf-1c97-40cf-bcdc-d63d2af19499","Type":"ContainerStarted","Data":"12e5aaa99e0efa533426908dca731dc7c2f6d465f542b0b2df3509f66e0bccba"} Jan 09 13:50:09 crc kubenswrapper[4919]: I0109 13:50:09.824582 4919 generic.go:334] "Generic (PLEG): container finished" podID="82036277-9b0e-4efd-8da5-9463b9998096" containerID="ee65f0ab8fb0898ae4212abe01ee302241ee9600a1c79e422a7250dada6c5296" exitCode=0 Jan 09 13:50:09 crc kubenswrapper[4919]: I0109 13:50:09.824704 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f8cb849-cmf6h" event={"ID":"82036277-9b0e-4efd-8da5-9463b9998096","Type":"ContainerDied","Data":"ee65f0ab8fb0898ae4212abe01ee302241ee9600a1c79e422a7250dada6c5296"} Jan 09 13:50:09 crc kubenswrapper[4919]: I0109 13:50:09.843491 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fce1b4cf-5ae0-4131-87c1-d9d55571dae5","Type":"ContainerStarted","Data":"065d6c3c24be8e38b4c474202f296b29dfec49756b8caa98296e2a62c4be5f44"} Jan 09 13:50:09 crc kubenswrapper[4919]: I0109 13:50:09.849801 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8gzn7" event={"ID":"6a6c75da-52dc-426a-95e8-e7d0a0ff3910","Type":"ContainerStarted","Data":"a974e1b0b8b35d347a49d6613ab474f56e61bc73e57bd8b5aa30ed40ee5a2991"} Jan 09 13:50:09 crc kubenswrapper[4919]: I0109 13:50:09.857767 4919 generic.go:334] "Generic (PLEG): container finished" 
podID="c88a69c9-d9d7-4da7-8f1f-e7e446c8655d" containerID="0ab915abe67601ab0aaeedddc45b829150401a74f13926f91671d416cef7e82d" exitCode=0 Jan 09 13:50:09 crc kubenswrapper[4919]: I0109 13:50:09.857841 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fdbfbc95f-6fk5j" event={"ID":"c88a69c9-d9d7-4da7-8f1f-e7e446c8655d","Type":"ContainerDied","Data":"0ab915abe67601ab0aaeedddc45b829150401a74f13926f91671d416cef7e82d"} Jan 09 13:50:09 crc kubenswrapper[4919]: I0109 13:50:09.861051 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-dr79l" event={"ID":"a26e4dbc-6f44-4723-a81b-7bd05ca1283b","Type":"ContainerStarted","Data":"d95ed877aa23f256c846188c9ee6793f1e6c3399af3395a333cac7b29cc5e94a"} Jan 09 13:50:09 crc kubenswrapper[4919]: I0109 13:50:09.865303 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"938c9f0b-c5af-49f8-9cc2-5e87f688775b","Type":"ContainerStarted","Data":"4bd9903db9184b0aa44612eb431f89595835a5ce7c84630f9e4d0dcb76248567"} Jan 09 13:50:09 crc kubenswrapper[4919]: I0109 13:50:09.912204 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-8gzn7" podStartSLOduration=5.912185344 podStartE2EDuration="5.912185344s" podCreationTimestamp="2026-01-09 13:50:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:50:09.865069393 +0000 UTC m=+1189.412908853" watchObservedRunningTime="2026-01-09 13:50:09.912185344 +0000 UTC m=+1189.460024794" Jan 09 13:50:09 crc kubenswrapper[4919]: I0109 13:50:09.932860 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-dr79l" podStartSLOduration=4.932845978 podStartE2EDuration="4.932845978s" podCreationTimestamp="2026-01-09 13:50:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:50:09.91041245 +0000 UTC m=+1189.458251900" watchObservedRunningTime="2026-01-09 13:50:09.932845978 +0000 UTC m=+1189.480685428" Jan 09 13:50:10 crc kubenswrapper[4919]: I0109 13:50:10.257939 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fdbfbc95f-6fk5j" Jan 09 13:50:10 crc kubenswrapper[4919]: I0109 13:50:10.331310 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c88a69c9-d9d7-4da7-8f1f-e7e446c8655d-config\") pod \"c88a69c9-d9d7-4da7-8f1f-e7e446c8655d\" (UID: \"c88a69c9-d9d7-4da7-8f1f-e7e446c8655d\") " Jan 09 13:50:10 crc kubenswrapper[4919]: I0109 13:50:10.332707 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c88a69c9-d9d7-4da7-8f1f-e7e446c8655d-ovsdbserver-nb\") pod \"c88a69c9-d9d7-4da7-8f1f-e7e446c8655d\" (UID: \"c88a69c9-d9d7-4da7-8f1f-e7e446c8655d\") " Jan 09 13:50:10 crc kubenswrapper[4919]: I0109 13:50:10.333363 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c88a69c9-d9d7-4da7-8f1f-e7e446c8655d-dns-svc\") pod \"c88a69c9-d9d7-4da7-8f1f-e7e446c8655d\" (UID: \"c88a69c9-d9d7-4da7-8f1f-e7e446c8655d\") " Jan 09 13:50:10 crc kubenswrapper[4919]: I0109 13:50:10.334644 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9znp\" (UniqueName: \"kubernetes.io/projected/c88a69c9-d9d7-4da7-8f1f-e7e446c8655d-kube-api-access-v9znp\") pod \"c88a69c9-d9d7-4da7-8f1f-e7e446c8655d\" (UID: \"c88a69c9-d9d7-4da7-8f1f-e7e446c8655d\") " Jan 09 13:50:10 crc kubenswrapper[4919]: I0109 13:50:10.334735 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c88a69c9-d9d7-4da7-8f1f-e7e446c8655d-ovsdbserver-sb\") pod \"c88a69c9-d9d7-4da7-8f1f-e7e446c8655d\" (UID: \"c88a69c9-d9d7-4da7-8f1f-e7e446c8655d\") " Jan 09 13:50:10 crc kubenswrapper[4919]: I0109 13:50:10.334834 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c88a69c9-d9d7-4da7-8f1f-e7e446c8655d-dns-swift-storage-0\") pod \"c88a69c9-d9d7-4da7-8f1f-e7e446c8655d\" (UID: \"c88a69c9-d9d7-4da7-8f1f-e7e446c8655d\") " Jan 09 13:50:10 crc kubenswrapper[4919]: I0109 13:50:10.362684 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c88a69c9-d9d7-4da7-8f1f-e7e446c8655d-config" (OuterVolumeSpecName: "config") pod "c88a69c9-d9d7-4da7-8f1f-e7e446c8655d" (UID: "c88a69c9-d9d7-4da7-8f1f-e7e446c8655d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:50:10 crc kubenswrapper[4919]: I0109 13:50:10.386182 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c88a69c9-d9d7-4da7-8f1f-e7e446c8655d-kube-api-access-v9znp" (OuterVolumeSpecName: "kube-api-access-v9znp") pod "c88a69c9-d9d7-4da7-8f1f-e7e446c8655d" (UID: "c88a69c9-d9d7-4da7-8f1f-e7e446c8655d"). InnerVolumeSpecName "kube-api-access-v9znp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:50:10 crc kubenswrapper[4919]: I0109 13:50:10.394016 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c88a69c9-d9d7-4da7-8f1f-e7e446c8655d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c88a69c9-d9d7-4da7-8f1f-e7e446c8655d" (UID: "c88a69c9-d9d7-4da7-8f1f-e7e446c8655d"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:50:10 crc kubenswrapper[4919]: I0109 13:50:10.396918 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c88a69c9-d9d7-4da7-8f1f-e7e446c8655d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c88a69c9-d9d7-4da7-8f1f-e7e446c8655d" (UID: "c88a69c9-d9d7-4da7-8f1f-e7e446c8655d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:50:10 crc kubenswrapper[4919]: I0109 13:50:10.416493 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c88a69c9-d9d7-4da7-8f1f-e7e446c8655d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c88a69c9-d9d7-4da7-8f1f-e7e446c8655d" (UID: "c88a69c9-d9d7-4da7-8f1f-e7e446c8655d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:50:10 crc kubenswrapper[4919]: I0109 13:50:10.416995 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c88a69c9-d9d7-4da7-8f1f-e7e446c8655d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c88a69c9-d9d7-4da7-8f1f-e7e446c8655d" (UID: "c88a69c9-d9d7-4da7-8f1f-e7e446c8655d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:50:10 crc kubenswrapper[4919]: I0109 13:50:10.437026 4919 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c88a69c9-d9d7-4da7-8f1f-e7e446c8655d-config\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:10 crc kubenswrapper[4919]: I0109 13:50:10.437065 4919 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c88a69c9-d9d7-4da7-8f1f-e7e446c8655d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:10 crc kubenswrapper[4919]: I0109 13:50:10.437079 4919 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c88a69c9-d9d7-4da7-8f1f-e7e446c8655d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:10 crc kubenswrapper[4919]: I0109 13:50:10.437091 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9znp\" (UniqueName: \"kubernetes.io/projected/c88a69c9-d9d7-4da7-8f1f-e7e446c8655d-kube-api-access-v9znp\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:10 crc kubenswrapper[4919]: I0109 13:50:10.437102 4919 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c88a69c9-d9d7-4da7-8f1f-e7e446c8655d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:10 crc kubenswrapper[4919]: I0109 13:50:10.437112 4919 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c88a69c9-d9d7-4da7-8f1f-e7e446c8655d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:10 crc kubenswrapper[4919]: I0109 13:50:10.891814 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"938c9f0b-c5af-49f8-9cc2-5e87f688775b","Type":"ContainerStarted","Data":"b27a720c335c59b0a9fc95f2501b735a7c47ce89f39395913b6ebd4341e715e2"} Jan 09 13:50:10 crc kubenswrapper[4919]: I0109 13:50:10.891954 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="938c9f0b-c5af-49f8-9cc2-5e87f688775b" containerName="glance-log" 
containerID="cri-o://4bd9903db9184b0aa44612eb431f89595835a5ce7c84630f9e4d0dcb76248567" gracePeriod=30 Jan 09 13:50:10 crc kubenswrapper[4919]: I0109 13:50:10.891997 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="938c9f0b-c5af-49f8-9cc2-5e87f688775b" containerName="glance-httpd" containerID="cri-o://b27a720c335c59b0a9fc95f2501b735a7c47ce89f39395913b6ebd4341e715e2" gracePeriod=30 Jan 09 13:50:10 crc kubenswrapper[4919]: I0109 13:50:10.898514 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f8cb849-cmf6h" event={"ID":"82036277-9b0e-4efd-8da5-9463b9998096","Type":"ContainerStarted","Data":"51e6c8a8c3e973d2fb93f48fb04960f0d0478dc40a06c0f4a87fe90de4097611"} Jan 09 13:50:10 crc kubenswrapper[4919]: I0109 13:50:10.908447 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fdbfbc95f-6fk5j" event={"ID":"c88a69c9-d9d7-4da7-8f1f-e7e446c8655d","Type":"ContainerDied","Data":"7085b473fe6b6bd18b8a493b2a037609e6e2f7ee2bdb13553333aad9a1d82a95"} Jan 09 13:50:10 crc kubenswrapper[4919]: I0109 13:50:10.908522 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fdbfbc95f-6fk5j" Jan 09 13:50:10 crc kubenswrapper[4919]: I0109 13:50:10.908561 4919 scope.go:117] "RemoveContainer" containerID="0ab915abe67601ab0aaeedddc45b829150401a74f13926f91671d416cef7e82d" Jan 09 13:50:11 crc kubenswrapper[4919]: I0109 13:50:11.117994 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fdbfbc95f-6fk5j"] Jan 09 13:50:11 crc kubenswrapper[4919]: I0109 13:50:11.127307 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5fdbfbc95f-6fk5j"] Jan 09 13:50:11 crc kubenswrapper[4919]: I0109 13:50:11.137968 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.137943154 podStartE2EDuration="6.137943154s" podCreationTimestamp="2026-01-09 13:50:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:50:11.121067454 +0000 UTC m=+1190.668906904" watchObservedRunningTime="2026-01-09 13:50:11.137943154 +0000 UTC m=+1190.685782604" Jan 09 13:50:11 crc kubenswrapper[4919]: I0109 13:50:11.897168 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 09 13:50:11 crc kubenswrapper[4919]: I0109 13:50:11.926853 4919 generic.go:334] "Generic (PLEG): container finished" podID="938c9f0b-c5af-49f8-9cc2-5e87f688775b" containerID="b27a720c335c59b0a9fc95f2501b735a7c47ce89f39395913b6ebd4341e715e2" exitCode=143 Jan 09 13:50:11 crc kubenswrapper[4919]: I0109 13:50:11.926892 4919 generic.go:334] "Generic (PLEG): container finished" podID="938c9f0b-c5af-49f8-9cc2-5e87f688775b" containerID="4bd9903db9184b0aa44612eb431f89595835a5ce7c84630f9e4d0dcb76248567" exitCode=143 Jan 09 13:50:11 crc kubenswrapper[4919]: I0109 13:50:11.927027 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 09 13:50:11 crc kubenswrapper[4919]: I0109 13:50:11.927073 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"938c9f0b-c5af-49f8-9cc2-5e87f688775b","Type":"ContainerDied","Data":"b27a720c335c59b0a9fc95f2501b735a7c47ce89f39395913b6ebd4341e715e2"} Jan 09 13:50:11 crc kubenswrapper[4919]: I0109 13:50:11.927135 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"938c9f0b-c5af-49f8-9cc2-5e87f688775b","Type":"ContainerDied","Data":"4bd9903db9184b0aa44612eb431f89595835a5ce7c84630f9e4d0dcb76248567"} Jan 09 13:50:11 crc kubenswrapper[4919]: I0109 13:50:11.927149 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"938c9f0b-c5af-49f8-9cc2-5e87f688775b","Type":"ContainerDied","Data":"edccb6c845953c53553bf3a77be58effadc7875518b359ce13dd95194a2f8416"} Jan 09 13:50:11 crc kubenswrapper[4919]: I0109 13:50:11.927162 4919 scope.go:117] "RemoveContainer" containerID="b27a720c335c59b0a9fc95f2501b735a7c47ce89f39395913b6ebd4341e715e2" Jan 09 13:50:11 crc kubenswrapper[4919]: I0109 13:50:11.934737 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="fce1b4cf-5ae0-4131-87c1-d9d55571dae5" containerName="glance-log" containerID="cri-o://065d6c3c24be8e38b4c474202f296b29dfec49756b8caa98296e2a62c4be5f44" gracePeriod=30 Jan 09 13:50:11 crc kubenswrapper[4919]: I0109 13:50:11.935262 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="fce1b4cf-5ae0-4131-87c1-d9d55571dae5" containerName="glance-httpd" containerID="cri-o://f1d9113e983de8602253318be0b36bb920d24c0c459d97ce17ebc15bbf75bc20" gracePeriod=30 Jan 09 13:50:11 crc kubenswrapper[4919]: I0109 13:50:11.935538 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fce1b4cf-5ae0-4131-87c1-d9d55571dae5","Type":"ContainerStarted","Data":"f1d9113e983de8602253318be0b36bb920d24c0c459d97ce17ebc15bbf75bc20"} Jan 09 13:50:11 crc kubenswrapper[4919]: I0109 13:50:11.944691 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f6f8cb849-cmf6h" Jan 09 13:50:11 crc kubenswrapper[4919]: I0109 13:50:11.965049 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.965023734 podStartE2EDuration="6.965023734s" podCreationTimestamp="2026-01-09 13:50:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:50:11.953727883 +0000 UTC m=+1191.501567333" watchObservedRunningTime="2026-01-09 13:50:11.965023734 +0000 UTC m=+1191.512863204" Jan 09 13:50:11 crc kubenswrapper[4919]: I0109 13:50:11.969651 4919 scope.go:117] "RemoveContainer" containerID="4bd9903db9184b0aa44612eb431f89595835a5ce7c84630f9e4d0dcb76248567" Jan 09 13:50:11 crc kubenswrapper[4919]: I0109 13:50:11.980412 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f6f8cb849-cmf6h" podStartSLOduration=6.980396456 podStartE2EDuration="6.980396456s" podCreationTimestamp="2026-01-09 13:50:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-01-09 13:50:11.979162606 +0000 UTC m=+1191.527002046" watchObservedRunningTime="2026-01-09 13:50:11.980396456 +0000 UTC m=+1191.528235896" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.001109 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/938c9f0b-c5af-49f8-9cc2-5e87f688775b-httpd-run\") pod \"938c9f0b-c5af-49f8-9cc2-5e87f688775b\" (UID: \"938c9f0b-c5af-49f8-9cc2-5e87f688775b\") " Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.001161 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"938c9f0b-c5af-49f8-9cc2-5e87f688775b\" (UID: \"938c9f0b-c5af-49f8-9cc2-5e87f688775b\") " Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.001220 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/938c9f0b-c5af-49f8-9cc2-5e87f688775b-public-tls-certs\") pod \"938c9f0b-c5af-49f8-9cc2-5e87f688775b\" (UID: \"938c9f0b-c5af-49f8-9cc2-5e87f688775b\") " Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.001305 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/938c9f0b-c5af-49f8-9cc2-5e87f688775b-logs\") pod \"938c9f0b-c5af-49f8-9cc2-5e87f688775b\" (UID: \"938c9f0b-c5af-49f8-9cc2-5e87f688775b\") " Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.001416 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/938c9f0b-c5af-49f8-9cc2-5e87f688775b-config-data\") pod \"938c9f0b-c5af-49f8-9cc2-5e87f688775b\" (UID: \"938c9f0b-c5af-49f8-9cc2-5e87f688775b\") " Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.001432 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/938c9f0b-c5af-49f8-9cc2-5e87f688775b-scripts\") pod \"938c9f0b-c5af-49f8-9cc2-5e87f688775b\" (UID: \"938c9f0b-c5af-49f8-9cc2-5e87f688775b\") " Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.001456 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/938c9f0b-c5af-49f8-9cc2-5e87f688775b-combined-ca-bundle\") pod \"938c9f0b-c5af-49f8-9cc2-5e87f688775b\" (UID: \"938c9f0b-c5af-49f8-9cc2-5e87f688775b\") " Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.001474 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zm9gx\" (UniqueName: \"kubernetes.io/projected/938c9f0b-c5af-49f8-9cc2-5e87f688775b-kube-api-access-zm9gx\") pod \"938c9f0b-c5af-49f8-9cc2-5e87f688775b\" (UID: \"938c9f0b-c5af-49f8-9cc2-5e87f688775b\") " Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.002689 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/938c9f0b-c5af-49f8-9cc2-5e87f688775b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "938c9f0b-c5af-49f8-9cc2-5e87f688775b" (UID: "938c9f0b-c5af-49f8-9cc2-5e87f688775b"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.007158 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/938c9f0b-c5af-49f8-9cc2-5e87f688775b-logs" (OuterVolumeSpecName: "logs") pod "938c9f0b-c5af-49f8-9cc2-5e87f688775b" (UID: "938c9f0b-c5af-49f8-9cc2-5e87f688775b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.011787 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "938c9f0b-c5af-49f8-9cc2-5e87f688775b" (UID: "938c9f0b-c5af-49f8-9cc2-5e87f688775b"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.011938 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/938c9f0b-c5af-49f8-9cc2-5e87f688775b-kube-api-access-zm9gx" (OuterVolumeSpecName: "kube-api-access-zm9gx") pod "938c9f0b-c5af-49f8-9cc2-5e87f688775b" (UID: "938c9f0b-c5af-49f8-9cc2-5e87f688775b"). InnerVolumeSpecName "kube-api-access-zm9gx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.028261 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/938c9f0b-c5af-49f8-9cc2-5e87f688775b-scripts" (OuterVolumeSpecName: "scripts") pod "938c9f0b-c5af-49f8-9cc2-5e87f688775b" (UID: "938c9f0b-c5af-49f8-9cc2-5e87f688775b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.030828 4919 scope.go:117] "RemoveContainer" containerID="b27a720c335c59b0a9fc95f2501b735a7c47ce89f39395913b6ebd4341e715e2" Jan 09 13:50:12 crc kubenswrapper[4919]: E0109 13:50:12.031827 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b27a720c335c59b0a9fc95f2501b735a7c47ce89f39395913b6ebd4341e715e2\": container with ID starting with b27a720c335c59b0a9fc95f2501b735a7c47ce89f39395913b6ebd4341e715e2 not found: ID does not exist" containerID="b27a720c335c59b0a9fc95f2501b735a7c47ce89f39395913b6ebd4341e715e2" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.031865 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b27a720c335c59b0a9fc95f2501b735a7c47ce89f39395913b6ebd4341e715e2"} err="failed to get container status \"b27a720c335c59b0a9fc95f2501b735a7c47ce89f39395913b6ebd4341e715e2\": rpc error: code = NotFound desc = could not find container \"b27a720c335c59b0a9fc95f2501b735a7c47ce89f39395913b6ebd4341e715e2\": container with ID starting with b27a720c335c59b0a9fc95f2501b735a7c47ce89f39395913b6ebd4341e715e2 not found: ID does not exist" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.031893 4919 scope.go:117] "RemoveContainer" containerID="4bd9903db9184b0aa44612eb431f89595835a5ce7c84630f9e4d0dcb76248567" Jan 09 13:50:12 crc kubenswrapper[4919]: E0109 13:50:12.034229 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bd9903db9184b0aa44612eb431f89595835a5ce7c84630f9e4d0dcb76248567\": container with ID starting with 4bd9903db9184b0aa44612eb431f89595835a5ce7c84630f9e4d0dcb76248567 not found: ID does not 
exist" containerID="4bd9903db9184b0aa44612eb431f89595835a5ce7c84630f9e4d0dcb76248567" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.034279 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bd9903db9184b0aa44612eb431f89595835a5ce7c84630f9e4d0dcb76248567"} err="failed to get container status \"4bd9903db9184b0aa44612eb431f89595835a5ce7c84630f9e4d0dcb76248567\": rpc error: code = NotFound desc = could not find container \"4bd9903db9184b0aa44612eb431f89595835a5ce7c84630f9e4d0dcb76248567\": container with ID starting with 4bd9903db9184b0aa44612eb431f89595835a5ce7c84630f9e4d0dcb76248567 not found: ID does not exist" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.034310 4919 scope.go:117] "RemoveContainer" containerID="b27a720c335c59b0a9fc95f2501b735a7c47ce89f39395913b6ebd4341e715e2" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.034840 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b27a720c335c59b0a9fc95f2501b735a7c47ce89f39395913b6ebd4341e715e2"} err="failed to get container status \"b27a720c335c59b0a9fc95f2501b735a7c47ce89f39395913b6ebd4341e715e2\": rpc error: code = NotFound desc = could not find container \"b27a720c335c59b0a9fc95f2501b735a7c47ce89f39395913b6ebd4341e715e2\": container with ID starting with b27a720c335c59b0a9fc95f2501b735a7c47ce89f39395913b6ebd4341e715e2 not found: ID does not exist" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.034867 4919 scope.go:117] "RemoveContainer" containerID="4bd9903db9184b0aa44612eb431f89595835a5ce7c84630f9e4d0dcb76248567" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.036398 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/938c9f0b-c5af-49f8-9cc2-5e87f688775b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "938c9f0b-c5af-49f8-9cc2-5e87f688775b" (UID: "938c9f0b-c5af-49f8-9cc2-5e87f688775b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.037856 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bd9903db9184b0aa44612eb431f89595835a5ce7c84630f9e4d0dcb76248567"} err="failed to get container status \"4bd9903db9184b0aa44612eb431f89595835a5ce7c84630f9e4d0dcb76248567\": rpc error: code = NotFound desc = could not find container \"4bd9903db9184b0aa44612eb431f89595835a5ce7c84630f9e4d0dcb76248567\": container with ID starting with 4bd9903db9184b0aa44612eb431f89595835a5ce7c84630f9e4d0dcb76248567 not found: ID does not exist" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.079501 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/938c9f0b-c5af-49f8-9cc2-5e87f688775b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "938c9f0b-c5af-49f8-9cc2-5e87f688775b" (UID: "938c9f0b-c5af-49f8-9cc2-5e87f688775b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.094618 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/938c9f0b-c5af-49f8-9cc2-5e87f688775b-config-data" (OuterVolumeSpecName: "config-data") pod "938c9f0b-c5af-49f8-9cc2-5e87f688775b" (UID: "938c9f0b-c5af-49f8-9cc2-5e87f688775b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.106640 4919 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/938c9f0b-c5af-49f8-9cc2-5e87f688775b-logs\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.106695 4919 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/938c9f0b-c5af-49f8-9cc2-5e87f688775b-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.106709 4919 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/938c9f0b-c5af-49f8-9cc2-5e87f688775b-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.106720 4919 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/938c9f0b-c5af-49f8-9cc2-5e87f688775b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.106738 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zm9gx\" (UniqueName: \"kubernetes.io/projected/938c9f0b-c5af-49f8-9cc2-5e87f688775b-kube-api-access-zm9gx\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.106749 4919 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/938c9f0b-c5af-49f8-9cc2-5e87f688775b-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.106970 4919 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.106984 4919 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/938c9f0b-c5af-49f8-9cc2-5e87f688775b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.136154 4919 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.209784 4919 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.280388 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.355317 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.402639 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 13:50:12 crc kubenswrapper[4919]: E0109 13:50:12.406754 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c88a69c9-d9d7-4da7-8f1f-e7e446c8655d" containerName="init" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.407083 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="c88a69c9-d9d7-4da7-8f1f-e7e446c8655d" containerName="init" Jan 09 13:50:12 crc kubenswrapper[4919]: E0109 13:50:12.407149 4919 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="938c9f0b-c5af-49f8-9cc2-5e87f688775b" containerName="glance-httpd" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.407156 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="938c9f0b-c5af-49f8-9cc2-5e87f688775b" containerName="glance-httpd" Jan 09 13:50:12 crc kubenswrapper[4919]: E0109 13:50:12.407178 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="938c9f0b-c5af-49f8-9cc2-5e87f688775b" containerName="glance-log" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.407184 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="938c9f0b-c5af-49f8-9cc2-5e87f688775b" containerName="glance-log" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.407400 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="c88a69c9-d9d7-4da7-8f1f-e7e446c8655d" containerName="init" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.407420 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="938c9f0b-c5af-49f8-9cc2-5e87f688775b" containerName="glance-log" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.407436 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="938c9f0b-c5af-49f8-9cc2-5e87f688775b" containerName="glance-httpd" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.408577 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.412117 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.413135 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.432290 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.525630 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25d27694-1c75-4c18-9d9d-2e766852453f-config-data\") pod \"glance-default-external-api-0\" (UID: \"25d27694-1c75-4c18-9d9d-2e766852453f\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.525705 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25d27694-1c75-4c18-9d9d-2e766852453f-scripts\") pod \"glance-default-external-api-0\" (UID: \"25d27694-1c75-4c18-9d9d-2e766852453f\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.525920 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25d27694-1c75-4c18-9d9d-2e766852453f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"25d27694-1c75-4c18-9d9d-2e766852453f\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.526075 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"25d27694-1c75-4c18-9d9d-2e766852453f\") " 
pod="openstack/glance-default-external-api-0" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.526129 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6n7s7\" (UniqueName: \"kubernetes.io/projected/25d27694-1c75-4c18-9d9d-2e766852453f-kube-api-access-6n7s7\") pod \"glance-default-external-api-0\" (UID: \"25d27694-1c75-4c18-9d9d-2e766852453f\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.526156 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25d27694-1c75-4c18-9d9d-2e766852453f-logs\") pod \"glance-default-external-api-0\" (UID: \"25d27694-1c75-4c18-9d9d-2e766852453f\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.526315 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/25d27694-1c75-4c18-9d9d-2e766852453f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"25d27694-1c75-4c18-9d9d-2e766852453f\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.526647 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25d27694-1c75-4c18-9d9d-2e766852453f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"25d27694-1c75-4c18-9d9d-2e766852453f\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.629856 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25d27694-1c75-4c18-9d9d-2e766852453f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"25d27694-1c75-4c18-9d9d-2e766852453f\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.633200 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25d27694-1c75-4c18-9d9d-2e766852453f-config-data\") pod \"glance-default-external-api-0\" (UID: \"25d27694-1c75-4c18-9d9d-2e766852453f\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.633240 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25d27694-1c75-4c18-9d9d-2e766852453f-scripts\") pod \"glance-default-external-api-0\" (UID: \"25d27694-1c75-4c18-9d9d-2e766852453f\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.633275 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25d27694-1c75-4c18-9d9d-2e766852453f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"25d27694-1c75-4c18-9d9d-2e766852453f\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.633332 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"25d27694-1c75-4c18-9d9d-2e766852453f\") " 
pod="openstack/glance-default-external-api-0" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.633352 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6n7s7\" (UniqueName: \"kubernetes.io/projected/25d27694-1c75-4c18-9d9d-2e766852453f-kube-api-access-6n7s7\") pod \"glance-default-external-api-0\" (UID: \"25d27694-1c75-4c18-9d9d-2e766852453f\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.633372 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25d27694-1c75-4c18-9d9d-2e766852453f-logs\") pod \"glance-default-external-api-0\" (UID: \"25d27694-1c75-4c18-9d9d-2e766852453f\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.633409 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/25d27694-1c75-4c18-9d9d-2e766852453f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"25d27694-1c75-4c18-9d9d-2e766852453f\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.633863 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/25d27694-1c75-4c18-9d9d-2e766852453f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"25d27694-1c75-4c18-9d9d-2e766852453f\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.634194 4919 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"25d27694-1c75-4c18-9d9d-2e766852453f\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.634873 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25d27694-1c75-4c18-9d9d-2e766852453f-logs\") pod \"glance-default-external-api-0\" (UID: \"25d27694-1c75-4c18-9d9d-2e766852453f\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.639953 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25d27694-1c75-4c18-9d9d-2e766852453f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"25d27694-1c75-4c18-9d9d-2e766852453f\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.643277 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25d27694-1c75-4c18-9d9d-2e766852453f-config-data\") pod \"glance-default-external-api-0\" (UID: \"25d27694-1c75-4c18-9d9d-2e766852453f\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.644699 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25d27694-1c75-4c18-9d9d-2e766852453f-scripts\") pod \"glance-default-external-api-0\" (UID: \"25d27694-1c75-4c18-9d9d-2e766852453f\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.649120 4919 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25d27694-1c75-4c18-9d9d-2e766852453f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"25d27694-1c75-4c18-9d9d-2e766852453f\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.652138 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6n7s7\" (UniqueName: \"kubernetes.io/projected/25d27694-1c75-4c18-9d9d-2e766852453f-kube-api-access-6n7s7\") pod \"glance-default-external-api-0\" (UID: \"25d27694-1c75-4c18-9d9d-2e766852453f\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.667754 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"25d27694-1c75-4c18-9d9d-2e766852453f\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.741913 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.792303 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="938c9f0b-c5af-49f8-9cc2-5e87f688775b" path="/var/lib/kubelet/pods/938c9f0b-c5af-49f8-9cc2-5e87f688775b/volumes" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.793092 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c88a69c9-d9d7-4da7-8f1f-e7e446c8655d" path="/var/lib/kubelet/pods/c88a69c9-d9d7-4da7-8f1f-e7e446c8655d/volumes" Jan 09 13:50:12 crc kubenswrapper[4919]: I0109 13:50:12.987023 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.011166 4919 generic.go:334] "Generic (PLEG): container finished" podID="fce1b4cf-5ae0-4131-87c1-d9d55571dae5" containerID="f1d9113e983de8602253318be0b36bb920d24c0c459d97ce17ebc15bbf75bc20" exitCode=0 Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.011198 4919 generic.go:334] "Generic (PLEG): container finished" podID="fce1b4cf-5ae0-4131-87c1-d9d55571dae5" containerID="065d6c3c24be8e38b4c474202f296b29dfec49756b8caa98296e2a62c4be5f44" exitCode=143 Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.011310 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fce1b4cf-5ae0-4131-87c1-d9d55571dae5","Type":"ContainerDied","Data":"f1d9113e983de8602253318be0b36bb920d24c0c459d97ce17ebc15bbf75bc20"} Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.011342 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.011364 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fce1b4cf-5ae0-4131-87c1-d9d55571dae5","Type":"ContainerDied","Data":"065d6c3c24be8e38b4c474202f296b29dfec49756b8caa98296e2a62c4be5f44"} Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.011379 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fce1b4cf-5ae0-4131-87c1-d9d55571dae5","Type":"ContainerDied","Data":"a3d3bb81f443698b53ee66cc4b1ea0259bfcf2a405f1dbe7ef7853c2a010ab15"} Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.011399 4919 scope.go:117] "RemoveContainer" containerID="f1d9113e983de8602253318be0b36bb920d24c0c459d97ce17ebc15bbf75bc20" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.058862 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fce1b4cf-5ae0-4131-87c1-d9d55571dae5-config-data\") pod \"fce1b4cf-5ae0-4131-87c1-d9d55571dae5\" (UID: \"fce1b4cf-5ae0-4131-87c1-d9d55571dae5\") " Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.058922 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hj5f\" (UniqueName: \"kubernetes.io/projected/fce1b4cf-5ae0-4131-87c1-d9d55571dae5-kube-api-access-2hj5f\") pod \"fce1b4cf-5ae0-4131-87c1-d9d55571dae5\" (UID: \"fce1b4cf-5ae0-4131-87c1-d9d55571dae5\") " Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.059017 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fce1b4cf-5ae0-4131-87c1-d9d55571dae5-scripts\") pod \"fce1b4cf-5ae0-4131-87c1-d9d55571dae5\" (UID: \"fce1b4cf-5ae0-4131-87c1-d9d55571dae5\") " Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.059078 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fce1b4cf-5ae0-4131-87c1-d9d55571dae5-internal-tls-certs\") pod \"fce1b4cf-5ae0-4131-87c1-d9d55571dae5\" (UID: \"fce1b4cf-5ae0-4131-87c1-d9d55571dae5\") " Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.059124 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fce1b4cf-5ae0-4131-87c1-d9d55571dae5-combined-ca-bundle\") pod \"fce1b4cf-5ae0-4131-87c1-d9d55571dae5\" (UID: \"fce1b4cf-5ae0-4131-87c1-d9d55571dae5\") " Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.059281 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fce1b4cf-5ae0-4131-87c1-d9d55571dae5-logs\") pod \"fce1b4cf-5ae0-4131-87c1-d9d55571dae5\" (UID: \"fce1b4cf-5ae0-4131-87c1-d9d55571dae5\") " Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.059349 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"fce1b4cf-5ae0-4131-87c1-d9d55571dae5\" (UID: \"fce1b4cf-5ae0-4131-87c1-d9d55571dae5\") " Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.059367 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fce1b4cf-5ae0-4131-87c1-d9d55571dae5-httpd-run\") pod 
\"fce1b4cf-5ae0-4131-87c1-d9d55571dae5\" (UID: \"fce1b4cf-5ae0-4131-87c1-d9d55571dae5\") " Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.060819 4919 scope.go:117] "RemoveContainer" containerID="065d6c3c24be8e38b4c474202f296b29dfec49756b8caa98296e2a62c4be5f44" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.061354 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fce1b4cf-5ae0-4131-87c1-d9d55571dae5-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "fce1b4cf-5ae0-4131-87c1-d9d55571dae5" (UID: "fce1b4cf-5ae0-4131-87c1-d9d55571dae5"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.061568 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fce1b4cf-5ae0-4131-87c1-d9d55571dae5-logs" (OuterVolumeSpecName: "logs") pod "fce1b4cf-5ae0-4131-87c1-d9d55571dae5" (UID: "fce1b4cf-5ae0-4131-87c1-d9d55571dae5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.067491 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "fce1b4cf-5ae0-4131-87c1-d9d55571dae5" (UID: "fce1b4cf-5ae0-4131-87c1-d9d55571dae5"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.071186 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fce1b4cf-5ae0-4131-87c1-d9d55571dae5-scripts" (OuterVolumeSpecName: "scripts") pod "fce1b4cf-5ae0-4131-87c1-d9d55571dae5" (UID: "fce1b4cf-5ae0-4131-87c1-d9d55571dae5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.082070 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fce1b4cf-5ae0-4131-87c1-d9d55571dae5-kube-api-access-2hj5f" (OuterVolumeSpecName: "kube-api-access-2hj5f") pod "fce1b4cf-5ae0-4131-87c1-d9d55571dae5" (UID: "fce1b4cf-5ae0-4131-87c1-d9d55571dae5"). InnerVolumeSpecName "kube-api-access-2hj5f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.114597 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fce1b4cf-5ae0-4131-87c1-d9d55571dae5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fce1b4cf-5ae0-4131-87c1-d9d55571dae5" (UID: "fce1b4cf-5ae0-4131-87c1-d9d55571dae5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.124859 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fce1b4cf-5ae0-4131-87c1-d9d55571dae5-config-data" (OuterVolumeSpecName: "config-data") pod "fce1b4cf-5ae0-4131-87c1-d9d55571dae5" (UID: "fce1b4cf-5ae0-4131-87c1-d9d55571dae5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.139452 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fce1b4cf-5ae0-4131-87c1-d9d55571dae5-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "fce1b4cf-5ae0-4131-87c1-d9d55571dae5" (UID: "fce1b4cf-5ae0-4131-87c1-d9d55571dae5"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.161343 4919 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fce1b4cf-5ae0-4131-87c1-d9d55571dae5-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.161373 4919 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fce1b4cf-5ae0-4131-87c1-d9d55571dae5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.161387 4919 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fce1b4cf-5ae0-4131-87c1-d9d55571dae5-logs\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.161431 4919 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.161446 4919 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fce1b4cf-5ae0-4131-87c1-d9d55571dae5-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.161457 4919 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fce1b4cf-5ae0-4131-87c1-d9d55571dae5-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.161469 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hj5f\" (UniqueName: \"kubernetes.io/projected/fce1b4cf-5ae0-4131-87c1-d9d55571dae5-kube-api-access-2hj5f\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.161477 4919 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fce1b4cf-5ae0-4131-87c1-d9d55571dae5-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.190805 4919 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.263547 4919 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.365243 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.377563 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.391813 4919 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/glance-default-internal-api-0"] Jan 09 13:50:13 crc kubenswrapper[4919]: E0109 13:50:13.392222 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fce1b4cf-5ae0-4131-87c1-d9d55571dae5" containerName="glance-log" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.392233 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="fce1b4cf-5ae0-4131-87c1-d9d55571dae5" containerName="glance-log" Jan 09 13:50:13 crc kubenswrapper[4919]: E0109 13:50:13.392269 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fce1b4cf-5ae0-4131-87c1-d9d55571dae5" containerName="glance-httpd" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.392276 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="fce1b4cf-5ae0-4131-87c1-d9d55571dae5" containerName="glance-httpd" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.392431 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="fce1b4cf-5ae0-4131-87c1-d9d55571dae5" containerName="glance-httpd" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.392439 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="fce1b4cf-5ae0-4131-87c1-d9d55571dae5" containerName="glance-log" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.393309 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.401360 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.431311 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.431570 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.472061 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a09b42c8-eca5-4951-a549-9730a79a7308-logs\") pod \"glance-default-internal-api-0\" (UID: \"a09b42c8-eca5-4951-a549-9730a79a7308\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.472142 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kvkj\" (UniqueName: \"kubernetes.io/projected/a09b42c8-eca5-4951-a549-9730a79a7308-kube-api-access-4kvkj\") pod \"glance-default-internal-api-0\" (UID: \"a09b42c8-eca5-4951-a549-9730a79a7308\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.472198 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a09b42c8-eca5-4951-a549-9730a79a7308-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a09b42c8-eca5-4951-a549-9730a79a7308\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.472236 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a09b42c8-eca5-4951-a549-9730a79a7308-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a09b42c8-eca5-4951-a549-9730a79a7308\") " 
pod="openstack/glance-default-internal-api-0" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.472272 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a09b42c8-eca5-4951-a549-9730a79a7308-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a09b42c8-eca5-4951-a549-9730a79a7308\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.472313 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a09b42c8-eca5-4951-a549-9730a79a7308-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a09b42c8-eca5-4951-a549-9730a79a7308\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.472343 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"a09b42c8-eca5-4951-a549-9730a79a7308\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.472372 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a09b42c8-eca5-4951-a549-9730a79a7308-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a09b42c8-eca5-4951-a549-9730a79a7308\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.574109 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"a09b42c8-eca5-4951-a549-9730a79a7308\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.574425 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a09b42c8-eca5-4951-a549-9730a79a7308-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a09b42c8-eca5-4951-a549-9730a79a7308\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.574454 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a09b42c8-eca5-4951-a549-9730a79a7308-logs\") pod \"glance-default-internal-api-0\" (UID: \"a09b42c8-eca5-4951-a549-9730a79a7308\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.575692 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kvkj\" (UniqueName: \"kubernetes.io/projected/a09b42c8-eca5-4951-a549-9730a79a7308-kube-api-access-4kvkj\") pod \"glance-default-internal-api-0\" (UID: \"a09b42c8-eca5-4951-a549-9730a79a7308\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.575743 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a09b42c8-eca5-4951-a549-9730a79a7308-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a09b42c8-eca5-4951-a549-9730a79a7308\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:13 
crc kubenswrapper[4919]: I0109 13:50:13.575761 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a09b42c8-eca5-4951-a549-9730a79a7308-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a09b42c8-eca5-4951-a549-9730a79a7308\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.575794 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a09b42c8-eca5-4951-a549-9730a79a7308-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a09b42c8-eca5-4951-a549-9730a79a7308\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.575835 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a09b42c8-eca5-4951-a549-9730a79a7308-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a09b42c8-eca5-4951-a549-9730a79a7308\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.578348 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a09b42c8-eca5-4951-a549-9730a79a7308-logs\") pod \"glance-default-internal-api-0\" (UID: \"a09b42c8-eca5-4951-a549-9730a79a7308\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.578639 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a09b42c8-eca5-4951-a549-9730a79a7308-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a09b42c8-eca5-4951-a549-9730a79a7308\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.578734 4919 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"a09b42c8-eca5-4951-a549-9730a79a7308\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.585522 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a09b42c8-eca5-4951-a549-9730a79a7308-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a09b42c8-eca5-4951-a549-9730a79a7308\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.585632 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a09b42c8-eca5-4951-a549-9730a79a7308-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a09b42c8-eca5-4951-a549-9730a79a7308\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.585843 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a09b42c8-eca5-4951-a549-9730a79a7308-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a09b42c8-eca5-4951-a549-9730a79a7308\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.597714 4919 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-4kvkj\" (UniqueName: \"kubernetes.io/projected/a09b42c8-eca5-4951-a549-9730a79a7308-kube-api-access-4kvkj\") pod \"glance-default-internal-api-0\" (UID: \"a09b42c8-eca5-4951-a549-9730a79a7308\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.607566 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a09b42c8-eca5-4951-a549-9730a79a7308-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a09b42c8-eca5-4951-a549-9730a79a7308\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.620305 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"a09b42c8-eca5-4951-a549-9730a79a7308\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.620762 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.760858 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.918133 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-75f5cb997-6q6lj"] Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.955788 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7bdd978ccd-tx6fx"] Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.964196 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7bdd978ccd-tx6fx" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.966662 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.980124 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7bdd978ccd-tx6fx"] Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.981681 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/158e1b10-ad5e-4a44-a3be-630a2d45bfdc-scripts\") pod \"horizon-7bdd978ccd-tx6fx\" (UID: \"158e1b10-ad5e-4a44-a3be-630a2d45bfdc\") " pod="openstack/horizon-7bdd978ccd-tx6fx" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.981716 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p45p5\" (UniqueName: \"kubernetes.io/projected/158e1b10-ad5e-4a44-a3be-630a2d45bfdc-kube-api-access-p45p5\") pod \"horizon-7bdd978ccd-tx6fx\" (UID: \"158e1b10-ad5e-4a44-a3be-630a2d45bfdc\") " pod="openstack/horizon-7bdd978ccd-tx6fx" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.981765 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/158e1b10-ad5e-4a44-a3be-630a2d45bfdc-horizon-tls-certs\") pod \"horizon-7bdd978ccd-tx6fx\" (UID: \"158e1b10-ad5e-4a44-a3be-630a2d45bfdc\") " pod="openstack/horizon-7bdd978ccd-tx6fx" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.981821 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/158e1b10-ad5e-4a44-a3be-630a2d45bfdc-horizon-secret-key\") pod \"horizon-7bdd978ccd-tx6fx\" (UID: \"158e1b10-ad5e-4a44-a3be-630a2d45bfdc\") " pod="openstack/horizon-7bdd978ccd-tx6fx" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.981849 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/158e1b10-ad5e-4a44-a3be-630a2d45bfdc-logs\") pod \"horizon-7bdd978ccd-tx6fx\" (UID: \"158e1b10-ad5e-4a44-a3be-630a2d45bfdc\") " pod="openstack/horizon-7bdd978ccd-tx6fx" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.981866 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/158e1b10-ad5e-4a44-a3be-630a2d45bfdc-combined-ca-bundle\") pod \"horizon-7bdd978ccd-tx6fx\" (UID: \"158e1b10-ad5e-4a44-a3be-630a2d45bfdc\") " pod="openstack/horizon-7bdd978ccd-tx6fx" Jan 09 13:50:13 crc kubenswrapper[4919]: I0109 13:50:13.981889 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/158e1b10-ad5e-4a44-a3be-630a2d45bfdc-config-data\") pod \"horizon-7bdd978ccd-tx6fx\" (UID: \"158e1b10-ad5e-4a44-a3be-630a2d45bfdc\") " pod="openstack/horizon-7bdd978ccd-tx6fx" Jan 09 13:50:14 crc kubenswrapper[4919]: I0109 13:50:14.033148 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 13:50:14 crc kubenswrapper[4919]: I0109 13:50:14.086611 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/158e1b10-ad5e-4a44-a3be-630a2d45bfdc-horizon-secret-key\") pod \"horizon-7bdd978ccd-tx6fx\" (UID: \"158e1b10-ad5e-4a44-a3be-630a2d45bfdc\") " pod="openstack/horizon-7bdd978ccd-tx6fx" Jan 09 13:50:14 crc kubenswrapper[4919]: I0109 13:50:14.087108 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/158e1b10-ad5e-4a44-a3be-630a2d45bfdc-logs\") pod \"horizon-7bdd978ccd-tx6fx\" (UID: \"158e1b10-ad5e-4a44-a3be-630a2d45bfdc\") " pod="openstack/horizon-7bdd978ccd-tx6fx" Jan 09 13:50:14 crc kubenswrapper[4919]: I0109 13:50:14.088337 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/158e1b10-ad5e-4a44-a3be-630a2d45bfdc-combined-ca-bundle\") pod \"horizon-7bdd978ccd-tx6fx\" (UID: \"158e1b10-ad5e-4a44-a3be-630a2d45bfdc\") " pod="openstack/horizon-7bdd978ccd-tx6fx" Jan 09 13:50:14 crc kubenswrapper[4919]: I0109 13:50:14.088499 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/158e1b10-ad5e-4a44-a3be-630a2d45bfdc-config-data\") pod \"horizon-7bdd978ccd-tx6fx\" (UID: \"158e1b10-ad5e-4a44-a3be-630a2d45bfdc\") " pod="openstack/horizon-7bdd978ccd-tx6fx" Jan 09 13:50:14 crc kubenswrapper[4919]: I0109 13:50:14.088971 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/158e1b10-ad5e-4a44-a3be-630a2d45bfdc-scripts\") pod \"horizon-7bdd978ccd-tx6fx\" (UID: \"158e1b10-ad5e-4a44-a3be-630a2d45bfdc\") " pod="openstack/horizon-7bdd978ccd-tx6fx" Jan 09 13:50:14 crc kubenswrapper[4919]: I0109 13:50:14.089080 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p45p5\" (UniqueName: \"kubernetes.io/projected/158e1b10-ad5e-4a44-a3be-630a2d45bfdc-kube-api-access-p45p5\") pod \"horizon-7bdd978ccd-tx6fx\" (UID: \"158e1b10-ad5e-4a44-a3be-630a2d45bfdc\") " pod="openstack/horizon-7bdd978ccd-tx6fx" Jan 09 13:50:14 crc kubenswrapper[4919]: I0109 13:50:14.089293 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/158e1b10-ad5e-4a44-a3be-630a2d45bfdc-horizon-tls-certs\") pod \"horizon-7bdd978ccd-tx6fx\" (UID: \"158e1b10-ad5e-4a44-a3be-630a2d45bfdc\") " pod="openstack/horizon-7bdd978ccd-tx6fx" Jan 09 13:50:14 crc kubenswrapper[4919]: I0109 13:50:14.091872 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/158e1b10-ad5e-4a44-a3be-630a2d45bfdc-config-data\") pod \"horizon-7bdd978ccd-tx6fx\" (UID: \"158e1b10-ad5e-4a44-a3be-630a2d45bfdc\") " pod="openstack/horizon-7bdd978ccd-tx6fx" Jan 09 13:50:14 crc kubenswrapper[4919]: I0109 13:50:14.094855 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/158e1b10-ad5e-4a44-a3be-630a2d45bfdc-scripts\") pod \"horizon-7bdd978ccd-tx6fx\" (UID: \"158e1b10-ad5e-4a44-a3be-630a2d45bfdc\") " pod="openstack/horizon-7bdd978ccd-tx6fx" Jan 09 13:50:14 crc kubenswrapper[4919]: I0109 13:50:14.095703 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/158e1b10-ad5e-4a44-a3be-630a2d45bfdc-horizon-secret-key\") pod \"horizon-7bdd978ccd-tx6fx\" (UID: \"158e1b10-ad5e-4a44-a3be-630a2d45bfdc\") " 
pod="openstack/horizon-7bdd978ccd-tx6fx" Jan 09 13:50:14 crc kubenswrapper[4919]: I0109 13:50:14.097198 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/158e1b10-ad5e-4a44-a3be-630a2d45bfdc-logs\") pod \"horizon-7bdd978ccd-tx6fx\" (UID: \"158e1b10-ad5e-4a44-a3be-630a2d45bfdc\") " pod="openstack/horizon-7bdd978ccd-tx6fx" Jan 09 13:50:14 crc kubenswrapper[4919]: I0109 13:50:14.106157 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/158e1b10-ad5e-4a44-a3be-630a2d45bfdc-combined-ca-bundle\") pod \"horizon-7bdd978ccd-tx6fx\" (UID: \"158e1b10-ad5e-4a44-a3be-630a2d45bfdc\") " pod="openstack/horizon-7bdd978ccd-tx6fx" Jan 09 13:50:14 crc kubenswrapper[4919]: I0109 13:50:14.116830 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/158e1b10-ad5e-4a44-a3be-630a2d45bfdc-horizon-tls-certs\") pod \"horizon-7bdd978ccd-tx6fx\" (UID: \"158e1b10-ad5e-4a44-a3be-630a2d45bfdc\") " pod="openstack/horizon-7bdd978ccd-tx6fx" Jan 09 13:50:14 crc kubenswrapper[4919]: I0109 13:50:14.138417 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 13:50:14 crc kubenswrapper[4919]: I0109 13:50:14.144850 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p45p5\" (UniqueName: \"kubernetes.io/projected/158e1b10-ad5e-4a44-a3be-630a2d45bfdc-kube-api-access-p45p5\") pod \"horizon-7bdd978ccd-tx6fx\" (UID: \"158e1b10-ad5e-4a44-a3be-630a2d45bfdc\") " pod="openstack/horizon-7bdd978ccd-tx6fx" Jan 09 13:50:14 crc kubenswrapper[4919]: I0109 13:50:14.170153 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6965595b5-x8vc9"] Jan 09 13:50:14 crc kubenswrapper[4919]: I0109 13:50:14.184370 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-75dd96cc4d-xnspb"] Jan 09 13:50:14 crc kubenswrapper[4919]: I0109 13:50:14.186090 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-75dd96cc4d-xnspb" Jan 09 13:50:14 crc kubenswrapper[4919]: I0109 13:50:14.189961 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-75dd96cc4d-xnspb"] Jan 09 13:50:14 crc kubenswrapper[4919]: I0109 13:50:14.190818 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbh4v\" (UniqueName: \"kubernetes.io/projected/db2aeda5-21fd-4b61-bb59-d8d0b78884c2-kube-api-access-cbh4v\") pod \"horizon-75dd96cc4d-xnspb\" (UID: \"db2aeda5-21fd-4b61-bb59-d8d0b78884c2\") " pod="openstack/horizon-75dd96cc4d-xnspb" Jan 09 13:50:14 crc kubenswrapper[4919]: I0109 13:50:14.190865 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/db2aeda5-21fd-4b61-bb59-d8d0b78884c2-config-data\") pod \"horizon-75dd96cc4d-xnspb\" (UID: \"db2aeda5-21fd-4b61-bb59-d8d0b78884c2\") " pod="openstack/horizon-75dd96cc4d-xnspb" Jan 09 13:50:14 crc kubenswrapper[4919]: I0109 13:50:14.193024 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/db2aeda5-21fd-4b61-bb59-d8d0b78884c2-horizon-tls-certs\") pod \"horizon-75dd96cc4d-xnspb\" (UID: \"db2aeda5-21fd-4b61-bb59-d8d0b78884c2\") " pod="openstack/horizon-75dd96cc4d-xnspb" Jan 09 13:50:14 crc kubenswrapper[4919]: I0109 13:50:14.194675 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db2aeda5-21fd-4b61-bb59-d8d0b78884c2-logs\") pod \"horizon-75dd96cc4d-xnspb\" (UID: \"db2aeda5-21fd-4b61-bb59-d8d0b78884c2\") " pod="openstack/horizon-75dd96cc4d-xnspb" Jan 09 13:50:14 crc kubenswrapper[4919]: I0109 13:50:14.194725 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db2aeda5-21fd-4b61-bb59-d8d0b78884c2-combined-ca-bundle\") pod \"horizon-75dd96cc4d-xnspb\" (UID: \"db2aeda5-21fd-4b61-bb59-d8d0b78884c2\") " pod="openstack/horizon-75dd96cc4d-xnspb" Jan 09 13:50:14 crc kubenswrapper[4919]: I0109 13:50:14.195235 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/db2aeda5-21fd-4b61-bb59-d8d0b78884c2-scripts\") pod \"horizon-75dd96cc4d-xnspb\" (UID: \"db2aeda5-21fd-4b61-bb59-d8d0b78884c2\") " pod="openstack/horizon-75dd96cc4d-xnspb" Jan 09 13:50:14 crc kubenswrapper[4919]: I0109 13:50:14.195468 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/db2aeda5-21fd-4b61-bb59-d8d0b78884c2-horizon-secret-key\") pod \"horizon-75dd96cc4d-xnspb\" (UID: \"db2aeda5-21fd-4b61-bb59-d8d0b78884c2\") " pod="openstack/horizon-75dd96cc4d-xnspb" Jan 09 13:50:14 crc kubenswrapper[4919]: I0109 13:50:14.296714 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/db2aeda5-21fd-4b61-bb59-d8d0b78884c2-scripts\") pod \"horizon-75dd96cc4d-xnspb\" (UID: \"db2aeda5-21fd-4b61-bb59-d8d0b78884c2\") " pod="openstack/horizon-75dd96cc4d-xnspb" Jan 09 13:50:14 crc kubenswrapper[4919]: I0109 13:50:14.296863 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" 
(UniqueName: \"kubernetes.io/secret/db2aeda5-21fd-4b61-bb59-d8d0b78884c2-horizon-secret-key\") pod \"horizon-75dd96cc4d-xnspb\" (UID: \"db2aeda5-21fd-4b61-bb59-d8d0b78884c2\") " pod="openstack/horizon-75dd96cc4d-xnspb" Jan 09 13:50:14 crc kubenswrapper[4919]: I0109 13:50:14.296897 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbh4v\" (UniqueName: \"kubernetes.io/projected/db2aeda5-21fd-4b61-bb59-d8d0b78884c2-kube-api-access-cbh4v\") pod \"horizon-75dd96cc4d-xnspb\" (UID: \"db2aeda5-21fd-4b61-bb59-d8d0b78884c2\") " pod="openstack/horizon-75dd96cc4d-xnspb" Jan 09 13:50:14 crc kubenswrapper[4919]: I0109 13:50:14.296917 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/db2aeda5-21fd-4b61-bb59-d8d0b78884c2-config-data\") pod \"horizon-75dd96cc4d-xnspb\" (UID: \"db2aeda5-21fd-4b61-bb59-d8d0b78884c2\") " pod="openstack/horizon-75dd96cc4d-xnspb" Jan 09 13:50:14 crc kubenswrapper[4919]: I0109 13:50:14.296969 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/db2aeda5-21fd-4b61-bb59-d8d0b78884c2-horizon-tls-certs\") pod \"horizon-75dd96cc4d-xnspb\" (UID: \"db2aeda5-21fd-4b61-bb59-d8d0b78884c2\") " pod="openstack/horizon-75dd96cc4d-xnspb" Jan 09 13:50:14 crc kubenswrapper[4919]: I0109 13:50:14.297020 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db2aeda5-21fd-4b61-bb59-d8d0b78884c2-logs\") pod \"horizon-75dd96cc4d-xnspb\" (UID: \"db2aeda5-21fd-4b61-bb59-d8d0b78884c2\") " pod="openstack/horizon-75dd96cc4d-xnspb" Jan 09 13:50:14 crc kubenswrapper[4919]: I0109 13:50:14.297045 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db2aeda5-21fd-4b61-bb59-d8d0b78884c2-combined-ca-bundle\") pod \"horizon-75dd96cc4d-xnspb\" (UID: \"db2aeda5-21fd-4b61-bb59-d8d0b78884c2\") " pod="openstack/horizon-75dd96cc4d-xnspb" Jan 09 13:50:14 crc kubenswrapper[4919]: I0109 13:50:14.298198 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/db2aeda5-21fd-4b61-bb59-d8d0b78884c2-scripts\") pod \"horizon-75dd96cc4d-xnspb\" (UID: \"db2aeda5-21fd-4b61-bb59-d8d0b78884c2\") " pod="openstack/horizon-75dd96cc4d-xnspb" Jan 09 13:50:14 crc kubenswrapper[4919]: I0109 13:50:14.299425 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db2aeda5-21fd-4b61-bb59-d8d0b78884c2-logs\") pod \"horizon-75dd96cc4d-xnspb\" (UID: \"db2aeda5-21fd-4b61-bb59-d8d0b78884c2\") " pod="openstack/horizon-75dd96cc4d-xnspb" Jan 09 13:50:14 crc kubenswrapper[4919]: I0109 13:50:14.300484 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/db2aeda5-21fd-4b61-bb59-d8d0b78884c2-config-data\") pod \"horizon-75dd96cc4d-xnspb\" (UID: \"db2aeda5-21fd-4b61-bb59-d8d0b78884c2\") " pod="openstack/horizon-75dd96cc4d-xnspb" Jan 09 13:50:14 crc kubenswrapper[4919]: I0109 13:50:14.301653 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/db2aeda5-21fd-4b61-bb59-d8d0b78884c2-horizon-secret-key\") pod \"horizon-75dd96cc4d-xnspb\" (UID: \"db2aeda5-21fd-4b61-bb59-d8d0b78884c2\") " 
pod="openstack/horizon-75dd96cc4d-xnspb" Jan 09 13:50:14 crc kubenswrapper[4919]: I0109 13:50:14.320240 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/db2aeda5-21fd-4b61-bb59-d8d0b78884c2-horizon-tls-certs\") pod \"horizon-75dd96cc4d-xnspb\" (UID: \"db2aeda5-21fd-4b61-bb59-d8d0b78884c2\") " pod="openstack/horizon-75dd96cc4d-xnspb" Jan 09 13:50:14 crc kubenswrapper[4919]: I0109 13:50:14.321382 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db2aeda5-21fd-4b61-bb59-d8d0b78884c2-combined-ca-bundle\") pod \"horizon-75dd96cc4d-xnspb\" (UID: \"db2aeda5-21fd-4b61-bb59-d8d0b78884c2\") " pod="openstack/horizon-75dd96cc4d-xnspb" Jan 09 13:50:14 crc kubenswrapper[4919]: I0109 13:50:14.321974 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbh4v\" (UniqueName: \"kubernetes.io/projected/db2aeda5-21fd-4b61-bb59-d8d0b78884c2-kube-api-access-cbh4v\") pod \"horizon-75dd96cc4d-xnspb\" (UID: \"db2aeda5-21fd-4b61-bb59-d8d0b78884c2\") " pod="openstack/horizon-75dd96cc4d-xnspb" Jan 09 13:50:14 crc kubenswrapper[4919]: I0109 13:50:14.336015 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7bdd978ccd-tx6fx" Jan 09 13:50:14 crc kubenswrapper[4919]: I0109 13:50:14.530782 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-75dd96cc4d-xnspb" Jan 09 13:50:14 crc kubenswrapper[4919]: I0109 13:50:14.763665 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fce1b4cf-5ae0-4131-87c1-d9d55571dae5" path="/var/lib/kubelet/pods/fce1b4cf-5ae0-4131-87c1-d9d55571dae5/volumes" Jan 09 13:50:15 crc kubenswrapper[4919]: I0109 13:50:15.080644 4919 generic.go:334] "Generic (PLEG): container finished" podID="6a6c75da-52dc-426a-95e8-e7d0a0ff3910" containerID="a974e1b0b8b35d347a49d6613ab474f56e61bc73e57bd8b5aa30ed40ee5a2991" exitCode=0 Jan 09 13:50:15 crc kubenswrapper[4919]: I0109 13:50:15.080688 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8gzn7" event={"ID":"6a6c75da-52dc-426a-95e8-e7d0a0ff3910","Type":"ContainerDied","Data":"a974e1b0b8b35d347a49d6613ab474f56e61bc73e57bd8b5aa30ed40ee5a2991"} Jan 09 13:50:15 crc kubenswrapper[4919]: I0109 13:50:15.910356 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f6f8cb849-cmf6h" Jan 09 13:50:15 crc kubenswrapper[4919]: I0109 13:50:15.969880 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8db84466c-vrffj"] Jan 09 13:50:15 crc kubenswrapper[4919]: I0109 13:50:15.970114 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8db84466c-vrffj" podUID="25ed4749-2919-40f3-a657-04e4b8b0cd84" containerName="dnsmasq-dns" containerID="cri-o://ecadf01ef5df33a2077f08ed4448fff95ed133c21cd7cbb4d46eaf8cab7207be" gracePeriod=10 Jan 09 13:50:17 crc kubenswrapper[4919]: I0109 13:50:17.100381 4919 generic.go:334] "Generic (PLEG): container finished" podID="25ed4749-2919-40f3-a657-04e4b8b0cd84" containerID="ecadf01ef5df33a2077f08ed4448fff95ed133c21cd7cbb4d46eaf8cab7207be" exitCode=0 Jan 09 13:50:17 crc kubenswrapper[4919]: I0109 13:50:17.100450 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8db84466c-vrffj" 
event={"ID":"25ed4749-2919-40f3-a657-04e4b8b0cd84","Type":"ContainerDied","Data":"ecadf01ef5df33a2077f08ed4448fff95ed133c21cd7cbb4d46eaf8cab7207be"} Jan 09 13:50:20 crc kubenswrapper[4919]: I0109 13:50:20.003280 4919 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8db84466c-vrffj" podUID="25ed4749-2919-40f3-a657-04e4b8b0cd84" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.128:5353: connect: connection refused" Jan 09 13:50:21 crc kubenswrapper[4919]: I0109 13:50:21.246503 4919 patch_prober.go:28] interesting pod/machine-config-daemon-9m5lv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 13:50:21 crc kubenswrapper[4919]: I0109 13:50:21.246562 4919 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 13:50:21 crc kubenswrapper[4919]: I0109 13:50:21.246610 4919 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" Jan 09 13:50:21 crc kubenswrapper[4919]: I0109 13:50:21.247292 4919 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c739bd50573e0da995d79681df6e33456878c7cb345ea26ee42a16e540a49209"} pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 09 13:50:21 crc kubenswrapper[4919]: I0109 13:50:21.247344 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" containerID="cri-o://c739bd50573e0da995d79681df6e33456878c7cb345ea26ee42a16e540a49209" gracePeriod=600 Jan 09 13:50:22 crc kubenswrapper[4919]: I0109 13:50:22.141306 4919 generic.go:334] "Generic (PLEG): container finished" podID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerID="c739bd50573e0da995d79681df6e33456878c7cb345ea26ee42a16e540a49209" exitCode=0 Jan 09 13:50:22 crc kubenswrapper[4919]: I0109 13:50:22.141374 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" event={"ID":"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c","Type":"ContainerDied","Data":"c739bd50573e0da995d79681df6e33456878c7cb345ea26ee42a16e540a49209"} Jan 09 13:50:25 crc kubenswrapper[4919]: I0109 13:50:25.004054 4919 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8db84466c-vrffj" podUID="25ed4749-2919-40f3-a657-04e4b8b0cd84" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.128:5353: connect: connection refused" Jan 09 13:50:26 crc kubenswrapper[4919]: E0109 13:50:26.997952 4919 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api@sha256:33f4e5f7a715d48482ec46a42267ea992fa268585303c4f1bd3cbea072a6348b" Jan 09 13:50:26 crc kubenswrapper[4919]: E0109 
13:50:26.998602 4919 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api@sha256:33f4e5f7a715d48482ec46a42267ea992fa268585303c4f1bd3cbea072a6348b,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-njwpz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-425dh_openstack(93e28fcf-1c97-40cf-bcdc-d63d2af19499): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 09 13:50:27 crc kubenswrapper[4919]: E0109 13:50:27.001485 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-425dh" podUID="93e28fcf-1c97-40cf-bcdc-d63d2af19499" Jan 09 13:50:27 crc kubenswrapper[4919]: E0109 13:50:27.190244 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api@sha256:33f4e5f7a715d48482ec46a42267ea992fa268585303c4f1bd3cbea072a6348b\\\"\"" pod="openstack/placement-db-sync-425dh" podUID="93e28fcf-1c97-40cf-bcdc-d63d2af19499" Jan 09 13:50:29 crc kubenswrapper[4919]: W0109 13:50:29.690043 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25d27694_1c75_4c18_9d9d_2e766852453f.slice/crio-0957de93b7dc9f13d0e4feea3421bbe6abc65147216b41c039a69aab46b6cc9f 
WatchSource:0}: Error finding container 0957de93b7dc9f13d0e4feea3421bbe6abc65147216b41c039a69aab46b6cc9f: Status 404 returned error can't find the container with id 0957de93b7dc9f13d0e4feea3421bbe6abc65147216b41c039a69aab46b6cc9f Jan 09 13:50:29 crc kubenswrapper[4919]: I0109 13:50:29.792872 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-8gzn7" Jan 09 13:50:29 crc kubenswrapper[4919]: I0109 13:50:29.892684 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v747z\" (UniqueName: \"kubernetes.io/projected/6a6c75da-52dc-426a-95e8-e7d0a0ff3910-kube-api-access-v747z\") pod \"6a6c75da-52dc-426a-95e8-e7d0a0ff3910\" (UID: \"6a6c75da-52dc-426a-95e8-e7d0a0ff3910\") " Jan 09 13:50:29 crc kubenswrapper[4919]: I0109 13:50:29.892750 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6a6c75da-52dc-426a-95e8-e7d0a0ff3910-fernet-keys\") pod \"6a6c75da-52dc-426a-95e8-e7d0a0ff3910\" (UID: \"6a6c75da-52dc-426a-95e8-e7d0a0ff3910\") " Jan 09 13:50:29 crc kubenswrapper[4919]: I0109 13:50:29.892771 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a6c75da-52dc-426a-95e8-e7d0a0ff3910-config-data\") pod \"6a6c75da-52dc-426a-95e8-e7d0a0ff3910\" (UID: \"6a6c75da-52dc-426a-95e8-e7d0a0ff3910\") " Jan 09 13:50:29 crc kubenswrapper[4919]: I0109 13:50:29.893470 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6a6c75da-52dc-426a-95e8-e7d0a0ff3910-credential-keys\") pod \"6a6c75da-52dc-426a-95e8-e7d0a0ff3910\" (UID: \"6a6c75da-52dc-426a-95e8-e7d0a0ff3910\") " Jan 09 13:50:29 crc kubenswrapper[4919]: I0109 13:50:29.893509 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a6c75da-52dc-426a-95e8-e7d0a0ff3910-combined-ca-bundle\") pod \"6a6c75da-52dc-426a-95e8-e7d0a0ff3910\" (UID: \"6a6c75da-52dc-426a-95e8-e7d0a0ff3910\") " Jan 09 13:50:29 crc kubenswrapper[4919]: I0109 13:50:29.893582 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a6c75da-52dc-426a-95e8-e7d0a0ff3910-scripts\") pod \"6a6c75da-52dc-426a-95e8-e7d0a0ff3910\" (UID: \"6a6c75da-52dc-426a-95e8-e7d0a0ff3910\") " Jan 09 13:50:29 crc kubenswrapper[4919]: I0109 13:50:29.905922 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a6c75da-52dc-426a-95e8-e7d0a0ff3910-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "6a6c75da-52dc-426a-95e8-e7d0a0ff3910" (UID: "6a6c75da-52dc-426a-95e8-e7d0a0ff3910"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:50:29 crc kubenswrapper[4919]: I0109 13:50:29.906568 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a6c75da-52dc-426a-95e8-e7d0a0ff3910-kube-api-access-v747z" (OuterVolumeSpecName: "kube-api-access-v747z") pod "6a6c75da-52dc-426a-95e8-e7d0a0ff3910" (UID: "6a6c75da-52dc-426a-95e8-e7d0a0ff3910"). InnerVolumeSpecName "kube-api-access-v747z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:50:29 crc kubenswrapper[4919]: I0109 13:50:29.908232 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a6c75da-52dc-426a-95e8-e7d0a0ff3910-scripts" (OuterVolumeSpecName: "scripts") pod "6a6c75da-52dc-426a-95e8-e7d0a0ff3910" (UID: "6a6c75da-52dc-426a-95e8-e7d0a0ff3910"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:50:29 crc kubenswrapper[4919]: I0109 13:50:29.911815 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a6c75da-52dc-426a-95e8-e7d0a0ff3910-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "6a6c75da-52dc-426a-95e8-e7d0a0ff3910" (UID: "6a6c75da-52dc-426a-95e8-e7d0a0ff3910"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:50:29 crc kubenswrapper[4919]: I0109 13:50:29.932188 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a6c75da-52dc-426a-95e8-e7d0a0ff3910-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6a6c75da-52dc-426a-95e8-e7d0a0ff3910" (UID: "6a6c75da-52dc-426a-95e8-e7d0a0ff3910"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:50:29 crc kubenswrapper[4919]: I0109 13:50:29.933562 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a6c75da-52dc-426a-95e8-e7d0a0ff3910-config-data" (OuterVolumeSpecName: "config-data") pod "6a6c75da-52dc-426a-95e8-e7d0a0ff3910" (UID: "6a6c75da-52dc-426a-95e8-e7d0a0ff3910"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:50:29 crc kubenswrapper[4919]: I0109 13:50:29.995727 4919 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6a6c75da-52dc-426a-95e8-e7d0a0ff3910-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:29 crc kubenswrapper[4919]: I0109 13:50:29.995759 4919 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a6c75da-52dc-426a-95e8-e7d0a0ff3910-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:29 crc kubenswrapper[4919]: I0109 13:50:29.995768 4919 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6a6c75da-52dc-426a-95e8-e7d0a0ff3910-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:29 crc kubenswrapper[4919]: I0109 13:50:29.995777 4919 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a6c75da-52dc-426a-95e8-e7d0a0ff3910-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:29 crc kubenswrapper[4919]: I0109 13:50:29.995786 4919 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a6c75da-52dc-426a-95e8-e7d0a0ff3910-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:29 crc kubenswrapper[4919]: I0109 13:50:29.995796 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v747z\" (UniqueName: \"kubernetes.io/projected/6a6c75da-52dc-426a-95e8-e7d0a0ff3910-kube-api-access-v747z\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:30 crc kubenswrapper[4919]: I0109 13:50:30.213343 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8gzn7" 
event={"ID":"6a6c75da-52dc-426a-95e8-e7d0a0ff3910","Type":"ContainerDied","Data":"7e643b6da4ef32701befc9e24432fa3f7e028ae9c9cd5891e1c2ac67fe07bd61"} Jan 09 13:50:30 crc kubenswrapper[4919]: I0109 13:50:30.213388 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-8gzn7" Jan 09 13:50:30 crc kubenswrapper[4919]: I0109 13:50:30.213395 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e643b6da4ef32701befc9e24432fa3f7e028ae9c9cd5891e1c2ac67fe07bd61" Jan 09 13:50:30 crc kubenswrapper[4919]: I0109 13:50:30.216882 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"25d27694-1c75-4c18-9d9d-2e766852453f","Type":"ContainerStarted","Data":"0957de93b7dc9f13d0e4feea3421bbe6abc65147216b41c039a69aab46b6cc9f"} Jan 09 13:50:30 crc kubenswrapper[4919]: I0109 13:50:30.886900 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-8gzn7"] Jan 09 13:50:30 crc kubenswrapper[4919]: I0109 13:50:30.898254 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-8gzn7"] Jan 09 13:50:31 crc kubenswrapper[4919]: I0109 13:50:31.036972 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-qvmd7"] Jan 09 13:50:31 crc kubenswrapper[4919]: E0109 13:50:31.037494 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a6c75da-52dc-426a-95e8-e7d0a0ff3910" containerName="keystone-bootstrap" Jan 09 13:50:31 crc kubenswrapper[4919]: I0109 13:50:31.037519 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a6c75da-52dc-426a-95e8-e7d0a0ff3910" containerName="keystone-bootstrap" Jan 09 13:50:31 crc kubenswrapper[4919]: I0109 13:50:31.037737 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a6c75da-52dc-426a-95e8-e7d0a0ff3910" containerName="keystone-bootstrap" Jan 09 13:50:31 crc kubenswrapper[4919]: I0109 13:50:31.038763 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-qvmd7" Jan 09 13:50:31 crc kubenswrapper[4919]: I0109 13:50:31.040869 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 09 13:50:31 crc kubenswrapper[4919]: I0109 13:50:31.041974 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 09 13:50:31 crc kubenswrapper[4919]: I0109 13:50:31.042783 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 09 13:50:31 crc kubenswrapper[4919]: I0109 13:50:31.044554 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 09 13:50:31 crc kubenswrapper[4919]: I0109 13:50:31.050519 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-qvmd7"] Jan 09 13:50:31 crc kubenswrapper[4919]: I0109 13:50:31.053833 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7w5b5" Jan 09 13:50:31 crc kubenswrapper[4919]: I0109 13:50:31.116472 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd2e6850-0b12-460b-9da8-56a74f4324f3-combined-ca-bundle\") pod \"keystone-bootstrap-qvmd7\" (UID: \"fd2e6850-0b12-460b-9da8-56a74f4324f3\") " pod="openstack/keystone-bootstrap-qvmd7" Jan 09 13:50:31 crc kubenswrapper[4919]: I0109 13:50:31.116523 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw429\" (UniqueName: \"kubernetes.io/projected/fd2e6850-0b12-460b-9da8-56a74f4324f3-kube-api-access-gw429\") pod \"keystone-bootstrap-qvmd7\" (UID: \"fd2e6850-0b12-460b-9da8-56a74f4324f3\") " pod="openstack/keystone-bootstrap-qvmd7" Jan 09 13:50:31 crc kubenswrapper[4919]: I0109 13:50:31.116729 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fd2e6850-0b12-460b-9da8-56a74f4324f3-fernet-keys\") pod \"keystone-bootstrap-qvmd7\" (UID: \"fd2e6850-0b12-460b-9da8-56a74f4324f3\") " pod="openstack/keystone-bootstrap-qvmd7" Jan 09 13:50:31 crc kubenswrapper[4919]: I0109 13:50:31.117034 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd2e6850-0b12-460b-9da8-56a74f4324f3-config-data\") pod \"keystone-bootstrap-qvmd7\" (UID: \"fd2e6850-0b12-460b-9da8-56a74f4324f3\") " pod="openstack/keystone-bootstrap-qvmd7" Jan 09 13:50:31 crc kubenswrapper[4919]: I0109 13:50:31.117245 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd2e6850-0b12-460b-9da8-56a74f4324f3-scripts\") pod \"keystone-bootstrap-qvmd7\" (UID: \"fd2e6850-0b12-460b-9da8-56a74f4324f3\") " pod="openstack/keystone-bootstrap-qvmd7" Jan 09 13:50:31 crc kubenswrapper[4919]: I0109 13:50:31.117400 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fd2e6850-0b12-460b-9da8-56a74f4324f3-credential-keys\") pod \"keystone-bootstrap-qvmd7\" (UID: \"fd2e6850-0b12-460b-9da8-56a74f4324f3\") " pod="openstack/keystone-bootstrap-qvmd7" Jan 09 13:50:31 crc kubenswrapper[4919]: I0109 13:50:31.219982 4919 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd2e6850-0b12-460b-9da8-56a74f4324f3-combined-ca-bundle\") pod \"keystone-bootstrap-qvmd7\" (UID: \"fd2e6850-0b12-460b-9da8-56a74f4324f3\") " pod="openstack/keystone-bootstrap-qvmd7" Jan 09 13:50:31 crc kubenswrapper[4919]: I0109 13:50:31.220120 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gw429\" (UniqueName: \"kubernetes.io/projected/fd2e6850-0b12-460b-9da8-56a74f4324f3-kube-api-access-gw429\") pod \"keystone-bootstrap-qvmd7\" (UID: \"fd2e6850-0b12-460b-9da8-56a74f4324f3\") " pod="openstack/keystone-bootstrap-qvmd7" Jan 09 13:50:31 crc kubenswrapper[4919]: I0109 13:50:31.221089 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fd2e6850-0b12-460b-9da8-56a74f4324f3-fernet-keys\") pod \"keystone-bootstrap-qvmd7\" (UID: \"fd2e6850-0b12-460b-9da8-56a74f4324f3\") " pod="openstack/keystone-bootstrap-qvmd7" Jan 09 13:50:31 crc kubenswrapper[4919]: I0109 13:50:31.221248 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd2e6850-0b12-460b-9da8-56a74f4324f3-config-data\") pod \"keystone-bootstrap-qvmd7\" (UID: \"fd2e6850-0b12-460b-9da8-56a74f4324f3\") " pod="openstack/keystone-bootstrap-qvmd7" Jan 09 13:50:31 crc kubenswrapper[4919]: I0109 13:50:31.221349 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd2e6850-0b12-460b-9da8-56a74f4324f3-scripts\") pod \"keystone-bootstrap-qvmd7\" (UID: \"fd2e6850-0b12-460b-9da8-56a74f4324f3\") " pod="openstack/keystone-bootstrap-qvmd7" Jan 09 13:50:31 crc kubenswrapper[4919]: I0109 13:50:31.221446 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fd2e6850-0b12-460b-9da8-56a74f4324f3-credential-keys\") pod \"keystone-bootstrap-qvmd7\" (UID: \"fd2e6850-0b12-460b-9da8-56a74f4324f3\") " pod="openstack/keystone-bootstrap-qvmd7" Jan 09 13:50:31 crc kubenswrapper[4919]: I0109 13:50:31.227791 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd2e6850-0b12-460b-9da8-56a74f4324f3-combined-ca-bundle\") pod \"keystone-bootstrap-qvmd7\" (UID: \"fd2e6850-0b12-460b-9da8-56a74f4324f3\") " pod="openstack/keystone-bootstrap-qvmd7" Jan 09 13:50:31 crc kubenswrapper[4919]: I0109 13:50:31.227975 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fd2e6850-0b12-460b-9da8-56a74f4324f3-fernet-keys\") pod \"keystone-bootstrap-qvmd7\" (UID: \"fd2e6850-0b12-460b-9da8-56a74f4324f3\") " pod="openstack/keystone-bootstrap-qvmd7" Jan 09 13:50:31 crc kubenswrapper[4919]: I0109 13:50:31.229840 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd2e6850-0b12-460b-9da8-56a74f4324f3-config-data\") pod \"keystone-bootstrap-qvmd7\" (UID: \"fd2e6850-0b12-460b-9da8-56a74f4324f3\") " pod="openstack/keystone-bootstrap-qvmd7" Jan 09 13:50:31 crc kubenswrapper[4919]: I0109 13:50:31.230409 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd2e6850-0b12-460b-9da8-56a74f4324f3-scripts\") pod \"keystone-bootstrap-qvmd7\" (UID: \"fd2e6850-0b12-460b-9da8-56a74f4324f3\") 
" pod="openstack/keystone-bootstrap-qvmd7" Jan 09 13:50:31 crc kubenswrapper[4919]: I0109 13:50:31.236785 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gw429\" (UniqueName: \"kubernetes.io/projected/fd2e6850-0b12-460b-9da8-56a74f4324f3-kube-api-access-gw429\") pod \"keystone-bootstrap-qvmd7\" (UID: \"fd2e6850-0b12-460b-9da8-56a74f4324f3\") " pod="openstack/keystone-bootstrap-qvmd7" Jan 09 13:50:31 crc kubenswrapper[4919]: I0109 13:50:31.247515 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fd2e6850-0b12-460b-9da8-56a74f4324f3-credential-keys\") pod \"keystone-bootstrap-qvmd7\" (UID: \"fd2e6850-0b12-460b-9da8-56a74f4324f3\") " pod="openstack/keystone-bootstrap-qvmd7" Jan 09 13:50:31 crc kubenswrapper[4919]: I0109 13:50:31.359011 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-qvmd7" Jan 09 13:50:32 crc kubenswrapper[4919]: I0109 13:50:32.764976 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a6c75da-52dc-426a-95e8-e7d0a0ff3910" path="/var/lib/kubelet/pods/6a6c75da-52dc-426a-95e8-e7d0a0ff3910/volumes" Jan 09 13:50:34 crc kubenswrapper[4919]: I0109 13:50:34.254684 4919 generic.go:334] "Generic (PLEG): container finished" podID="a26e4dbc-6f44-4723-a81b-7bd05ca1283b" containerID="d95ed877aa23f256c846188c9ee6793f1e6c3399af3395a333cac7b29cc5e94a" exitCode=0 Jan 09 13:50:34 crc kubenswrapper[4919]: I0109 13:50:34.254778 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-dr79l" event={"ID":"a26e4dbc-6f44-4723-a81b-7bd05ca1283b","Type":"ContainerDied","Data":"d95ed877aa23f256c846188c9ee6793f1e6c3399af3395a333cac7b29cc5e94a"} Jan 09 13:50:34 crc kubenswrapper[4919]: E0109 13:50:34.519041 4919 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon@sha256:dd7600bc5278c663cfcfecafd3fb051a2cd2ddc3c1efb07738bf09512aa23ae7" Jan 09 13:50:34 crc kubenswrapper[4919]: E0109 13:50:34.519442 4919 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon@sha256:dd7600bc5278c663cfcfecafd3fb051a2cd2ddc3c1efb07738bf09512aa23ae7,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nf7h698h565h5fch57fh5bch8chf7h545h5f4h595h657hd4h5ddh78h557h68dh556h574h58ch54h55h566h64dh594h5cch5fdh569h567h69h64h7bq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ktdqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-75f5cb997-6q6lj_openstack(94e437ba-f67c-41cb-887b-a1d977b041f8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 09 13:50:34 crc kubenswrapper[4919]: I0109 13:50:34.541703 4919 scope.go:117] "RemoveContainer" containerID="f1d9113e983de8602253318be0b36bb920d24c0c459d97ce17ebc15bbf75bc20" Jan 09 13:50:34 crc kubenswrapper[4919]: E0109 13:50:34.541761 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon@sha256:dd7600bc5278c663cfcfecafd3fb051a2cd2ddc3c1efb07738bf09512aa23ae7\\\"\"]" pod="openstack/horizon-75f5cb997-6q6lj" podUID="94e437ba-f67c-41cb-887b-a1d977b041f8" Jan 09 13:50:34 crc kubenswrapper[4919]: E0109 13:50:34.542151 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1d9113e983de8602253318be0b36bb920d24c0c459d97ce17ebc15bbf75bc20\": container with ID starting with f1d9113e983de8602253318be0b36bb920d24c0c459d97ce17ebc15bbf75bc20 not found: ID does not exist" containerID="f1d9113e983de8602253318be0b36bb920d24c0c459d97ce17ebc15bbf75bc20" Jan 09 13:50:34 crc kubenswrapper[4919]: I0109 13:50:34.542178 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1d9113e983de8602253318be0b36bb920d24c0c459d97ce17ebc15bbf75bc20"} err="failed to get container status \"f1d9113e983de8602253318be0b36bb920d24c0c459d97ce17ebc15bbf75bc20\": rpc error: code = NotFound desc = could not find 
container \"f1d9113e983de8602253318be0b36bb920d24c0c459d97ce17ebc15bbf75bc20\": container with ID starting with f1d9113e983de8602253318be0b36bb920d24c0c459d97ce17ebc15bbf75bc20 not found: ID does not exist" Jan 09 13:50:34 crc kubenswrapper[4919]: I0109 13:50:34.542202 4919 scope.go:117] "RemoveContainer" containerID="065d6c3c24be8e38b4c474202f296b29dfec49756b8caa98296e2a62c4be5f44" Jan 09 13:50:34 crc kubenswrapper[4919]: E0109 13:50:34.542508 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"065d6c3c24be8e38b4c474202f296b29dfec49756b8caa98296e2a62c4be5f44\": container with ID starting with 065d6c3c24be8e38b4c474202f296b29dfec49756b8caa98296e2a62c4be5f44 not found: ID does not exist" containerID="065d6c3c24be8e38b4c474202f296b29dfec49756b8caa98296e2a62c4be5f44" Jan 09 13:50:34 crc kubenswrapper[4919]: I0109 13:50:34.542530 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"065d6c3c24be8e38b4c474202f296b29dfec49756b8caa98296e2a62c4be5f44"} err="failed to get container status \"065d6c3c24be8e38b4c474202f296b29dfec49756b8caa98296e2a62c4be5f44\": rpc error: code = NotFound desc = could not find container \"065d6c3c24be8e38b4c474202f296b29dfec49756b8caa98296e2a62c4be5f44\": container with ID starting with 065d6c3c24be8e38b4c474202f296b29dfec49756b8caa98296e2a62c4be5f44 not found: ID does not exist" Jan 09 13:50:34 crc kubenswrapper[4919]: I0109 13:50:34.542543 4919 scope.go:117] "RemoveContainer" containerID="f1d9113e983de8602253318be0b36bb920d24c0c459d97ce17ebc15bbf75bc20" Jan 09 13:50:34 crc kubenswrapper[4919]: I0109 13:50:34.542761 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1d9113e983de8602253318be0b36bb920d24c0c459d97ce17ebc15bbf75bc20"} err="failed to get container status \"f1d9113e983de8602253318be0b36bb920d24c0c459d97ce17ebc15bbf75bc20\": rpc error: code = NotFound desc = could not find container \"f1d9113e983de8602253318be0b36bb920d24c0c459d97ce17ebc15bbf75bc20\": container with ID starting with f1d9113e983de8602253318be0b36bb920d24c0c459d97ce17ebc15bbf75bc20 not found: ID does not exist" Jan 09 13:50:34 crc kubenswrapper[4919]: I0109 13:50:34.542780 4919 scope.go:117] "RemoveContainer" containerID="065d6c3c24be8e38b4c474202f296b29dfec49756b8caa98296e2a62c4be5f44" Jan 09 13:50:34 crc kubenswrapper[4919]: I0109 13:50:34.542986 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"065d6c3c24be8e38b4c474202f296b29dfec49756b8caa98296e2a62c4be5f44"} err="failed to get container status \"065d6c3c24be8e38b4c474202f296b29dfec49756b8caa98296e2a62c4be5f44\": rpc error: code = NotFound desc = could not find container \"065d6c3c24be8e38b4c474202f296b29dfec49756b8caa98296e2a62c4be5f44\": container with ID starting with 065d6c3c24be8e38b4c474202f296b29dfec49756b8caa98296e2a62c4be5f44 not found: ID does not exist" Jan 09 13:50:34 crc kubenswrapper[4919]: I0109 13:50:34.543000 4919 scope.go:117] "RemoveContainer" containerID="13e0d2bed4a1518fec6fb07c1bdfa49ee9c21e3a9f0774ed8f0f599b03f0f58f" Jan 09 13:50:34 crc kubenswrapper[4919]: E0109 13:50:34.569597 4919 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon@sha256:dd7600bc5278c663cfcfecafd3fb051a2cd2ddc3c1efb07738bf09512aa23ae7" Jan 09 13:50:34 crc kubenswrapper[4919]: E0109 13:50:34.569778 
4919 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon@sha256:dd7600bc5278c663cfcfecafd3fb051a2cd2ddc3c1efb07738bf09512aa23ae7,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n8fh596h67fhc5h668h94hcfh58fh595h5d8h557hc4h8fh66dh57hfdh5dch66ch79h7h96h669h5fch595h568h5f4h574h688h694h558hd8hb7q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vcjkw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-6965595b5-x8vc9_openstack(81007298-01d9-43a2-8e26-33448a1d17e0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 09 13:50:34 crc kubenswrapper[4919]: E0109 13:50:34.572381 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon@sha256:dd7600bc5278c663cfcfecafd3fb051a2cd2ddc3c1efb07738bf09512aa23ae7\\\"\"]" pod="openstack/horizon-6965595b5-x8vc9" podUID="81007298-01d9-43a2-8e26-33448a1d17e0" Jan 09 13:50:34 crc kubenswrapper[4919]: E0109 13:50:34.573380 4919 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon@sha256:dd7600bc5278c663cfcfecafd3fb051a2cd2ddc3c1efb07738bf09512aa23ae7" Jan 09 13:50:34 crc kubenswrapper[4919]: E0109 13:50:34.573525 4919 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon@sha256:dd7600bc5278c663cfcfecafd3fb051a2cd2ddc3c1efb07738bf09512aa23ae7,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n67fh646h59fh665h58dh585h587hc4h97hc9h678h8ch5b4h57dh5dh98hd7h649h578h5c7h59fh699h54h5d4h59dh575h9chf8hb8h5cch667h688q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zghcz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-67b8c5bf6f-wsqfv_openstack(da695d3c-0710-4113-ad5c-6168aa3bbe2b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 09 13:50:34 crc kubenswrapper[4919]: E0109 13:50:34.577521 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon@sha256:dd7600bc5278c663cfcfecafd3fb051a2cd2ddc3c1efb07738bf09512aa23ae7\\\"\"]" pod="openstack/horizon-67b8c5bf6f-wsqfv" podUID="da695d3c-0710-4113-ad5c-6168aa3bbe2b" Jan 09 13:50:34 crc kubenswrapper[4919]: I0109 13:50:34.635757 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8db84466c-vrffj" Jan 09 13:50:34 crc kubenswrapper[4919]: I0109 13:50:34.789677 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25ed4749-2919-40f3-a657-04e4b8b0cd84-ovsdbserver-sb\") pod \"25ed4749-2919-40f3-a657-04e4b8b0cd84\" (UID: \"25ed4749-2919-40f3-a657-04e4b8b0cd84\") " Jan 09 13:50:34 crc kubenswrapper[4919]: I0109 13:50:34.789767 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25ed4749-2919-40f3-a657-04e4b8b0cd84-ovsdbserver-nb\") pod \"25ed4749-2919-40f3-a657-04e4b8b0cd84\" (UID: \"25ed4749-2919-40f3-a657-04e4b8b0cd84\") " Jan 09 13:50:34 crc kubenswrapper[4919]: I0109 13:50:34.789876 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25ed4749-2919-40f3-a657-04e4b8b0cd84-dns-svc\") pod \"25ed4749-2919-40f3-a657-04e4b8b0cd84\" (UID: \"25ed4749-2919-40f3-a657-04e4b8b0cd84\") " Jan 09 13:50:34 crc kubenswrapper[4919]: I0109 13:50:34.789999 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/25ed4749-2919-40f3-a657-04e4b8b0cd84-dns-swift-storage-0\") pod \"25ed4749-2919-40f3-a657-04e4b8b0cd84\" (UID: \"25ed4749-2919-40f3-a657-04e4b8b0cd84\") " Jan 09 13:50:34 crc kubenswrapper[4919]: I0109 13:50:34.790045 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqqf6\" (UniqueName: \"kubernetes.io/projected/25ed4749-2919-40f3-a657-04e4b8b0cd84-kube-api-access-gqqf6\") pod \"25ed4749-2919-40f3-a657-04e4b8b0cd84\" (UID: \"25ed4749-2919-40f3-a657-04e4b8b0cd84\") " Jan 09 13:50:34 crc kubenswrapper[4919]: I0109 13:50:34.790075 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25ed4749-2919-40f3-a657-04e4b8b0cd84-config\") pod \"25ed4749-2919-40f3-a657-04e4b8b0cd84\" (UID: \"25ed4749-2919-40f3-a657-04e4b8b0cd84\") " Jan 09 13:50:34 crc kubenswrapper[4919]: I0109 13:50:34.797086 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25ed4749-2919-40f3-a657-04e4b8b0cd84-kube-api-access-gqqf6" (OuterVolumeSpecName: "kube-api-access-gqqf6") pod "25ed4749-2919-40f3-a657-04e4b8b0cd84" (UID: "25ed4749-2919-40f3-a657-04e4b8b0cd84"). InnerVolumeSpecName "kube-api-access-gqqf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:50:34 crc kubenswrapper[4919]: I0109 13:50:34.833578 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25ed4749-2919-40f3-a657-04e4b8b0cd84-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "25ed4749-2919-40f3-a657-04e4b8b0cd84" (UID: "25ed4749-2919-40f3-a657-04e4b8b0cd84"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:50:34 crc kubenswrapper[4919]: I0109 13:50:34.834477 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25ed4749-2919-40f3-a657-04e4b8b0cd84-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "25ed4749-2919-40f3-a657-04e4b8b0cd84" (UID: "25ed4749-2919-40f3-a657-04e4b8b0cd84"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:50:34 crc kubenswrapper[4919]: I0109 13:50:34.837864 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25ed4749-2919-40f3-a657-04e4b8b0cd84-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "25ed4749-2919-40f3-a657-04e4b8b0cd84" (UID: "25ed4749-2919-40f3-a657-04e4b8b0cd84"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:50:34 crc kubenswrapper[4919]: I0109 13:50:34.847376 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25ed4749-2919-40f3-a657-04e4b8b0cd84-config" (OuterVolumeSpecName: "config") pod "25ed4749-2919-40f3-a657-04e4b8b0cd84" (UID: "25ed4749-2919-40f3-a657-04e4b8b0cd84"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:50:34 crc kubenswrapper[4919]: I0109 13:50:34.850581 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25ed4749-2919-40f3-a657-04e4b8b0cd84-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "25ed4749-2919-40f3-a657-04e4b8b0cd84" (UID: "25ed4749-2919-40f3-a657-04e4b8b0cd84"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:50:35 crc kubenswrapper[4919]: I0109 13:50:35.415039 4919 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8db84466c-vrffj" podUID="25ed4749-2919-40f3-a657-04e4b8b0cd84" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.128:5353: i/o timeout" Jan 09 13:50:35 crc kubenswrapper[4919]: I0109 13:50:35.415129 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8db84466c-vrffj" Jan 09 13:50:35 crc kubenswrapper[4919]: I0109 13:50:35.423110 4919 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25ed4749-2919-40f3-a657-04e4b8b0cd84-config\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:35 crc kubenswrapper[4919]: I0109 13:50:35.423136 4919 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25ed4749-2919-40f3-a657-04e4b8b0cd84-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:35 crc kubenswrapper[4919]: I0109 13:50:35.423149 4919 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25ed4749-2919-40f3-a657-04e4b8b0cd84-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:35 crc kubenswrapper[4919]: I0109 13:50:35.423342 4919 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25ed4749-2919-40f3-a657-04e4b8b0cd84-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:35 crc kubenswrapper[4919]: I0109 13:50:35.423360 4919 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/25ed4749-2919-40f3-a657-04e4b8b0cd84-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:35 crc kubenswrapper[4919]: I0109 13:50:35.423371 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqqf6\" (UniqueName: \"kubernetes.io/projected/25ed4749-2919-40f3-a657-04e4b8b0cd84-kube-api-access-gqqf6\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:35 crc kubenswrapper[4919]: I0109 13:50:35.441761 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8db84466c-vrffj" Jan 09 13:50:35 crc kubenswrapper[4919]: I0109 13:50:35.441788 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8db84466c-vrffj" event={"ID":"25ed4749-2919-40f3-a657-04e4b8b0cd84","Type":"ContainerDied","Data":"bb9ccc25886262c63e9b18526cea6064f6a88939cf5accd426fb3e30c7965697"} Jan 09 13:50:35 crc kubenswrapper[4919]: E0109 13:50:35.506373 4919 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16" Jan 09 13:50:35 crc kubenswrapper[4919]: E0109 13:50:35.506560 4919 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-982gj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-9sb8m_openstack(bec76c49-6c38-4168-ac7b-087460106d25): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 09 13:50:35 crc kubenswrapper[4919]: E0109 13:50:35.507746 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-9sb8m" podUID="bec76c49-6c38-4168-ac7b-087460106d25" Jan 09 13:50:35 crc kubenswrapper[4919]: I0109 13:50:35.542426 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8db84466c-vrffj"] Jan 09 13:50:35 crc kubenswrapper[4919]: I0109 13:50:35.568906 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8db84466c-vrffj"] Jan 09 13:50:36 crc kubenswrapper[4919]: E0109 13:50:36.454393 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16\\\"\"" pod="openstack/barbican-db-sync-9sb8m" podUID="bec76c49-6c38-4168-ac7b-087460106d25" Jan 09 13:50:36 crc kubenswrapper[4919]: E0109 13:50:36.687332 4919 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49" Jan 09 13:50:36 crc kubenswrapper[4919]: E0109 13:50:36.687515 4919 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x9rc9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-vz5pd_openstack(0a9f81fc-067d-404d-b104-bba333d3911a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 09 13:50:36 crc kubenswrapper[4919]: E0109 13:50:36.688729 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with 
ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-vz5pd" podUID="0a9f81fc-067d-404d-b104-bba333d3911a" Jan 09 13:50:36 crc kubenswrapper[4919]: I0109 13:50:36.767831 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25ed4749-2919-40f3-a657-04e4b8b0cd84" path="/var/lib/kubelet/pods/25ed4749-2919-40f3-a657-04e4b8b0cd84/volumes" Jan 09 13:50:36 crc kubenswrapper[4919]: I0109 13:50:36.785573 4919 scope.go:117] "RemoveContainer" containerID="ecadf01ef5df33a2077f08ed4448fff95ed133c21cd7cbb4d46eaf8cab7207be" Jan 09 13:50:36 crc kubenswrapper[4919]: I0109 13:50:36.941676 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6965595b5-x8vc9" Jan 09 13:50:36 crc kubenswrapper[4919]: I0109 13:50:36.956679 4919 scope.go:117] "RemoveContainer" containerID="c8b9fa4b4f7a0e9b3f58cee458b407a2d96051c3ac1f4524214464f0215ec602" Jan 09 13:50:36 crc kubenswrapper[4919]: I0109 13:50:36.958653 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-67b8c5bf6f-wsqfv" Jan 09 13:50:36 crc kubenswrapper[4919]: I0109 13:50:36.985033 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-75f5cb997-6q6lj" Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.056183 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zghcz\" (UniqueName: \"kubernetes.io/projected/da695d3c-0710-4113-ad5c-6168aa3bbe2b-kube-api-access-zghcz\") pod \"da695d3c-0710-4113-ad5c-6168aa3bbe2b\" (UID: \"da695d3c-0710-4113-ad5c-6168aa3bbe2b\") " Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.056422 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcjkw\" (UniqueName: \"kubernetes.io/projected/81007298-01d9-43a2-8e26-33448a1d17e0-kube-api-access-vcjkw\") pod \"81007298-01d9-43a2-8e26-33448a1d17e0\" (UID: \"81007298-01d9-43a2-8e26-33448a1d17e0\") " Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.056545 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/da695d3c-0710-4113-ad5c-6168aa3bbe2b-horizon-secret-key\") pod \"da695d3c-0710-4113-ad5c-6168aa3bbe2b\" (UID: \"da695d3c-0710-4113-ad5c-6168aa3bbe2b\") " Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.056682 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da695d3c-0710-4113-ad5c-6168aa3bbe2b-logs\") pod \"da695d3c-0710-4113-ad5c-6168aa3bbe2b\" (UID: \"da695d3c-0710-4113-ad5c-6168aa3bbe2b\") " Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.056789 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/81007298-01d9-43a2-8e26-33448a1d17e0-horizon-secret-key\") pod \"81007298-01d9-43a2-8e26-33448a1d17e0\" (UID: \"81007298-01d9-43a2-8e26-33448a1d17e0\") " Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.057270 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da695d3c-0710-4113-ad5c-6168aa3bbe2b-logs" (OuterVolumeSpecName: "logs") pod "da695d3c-0710-4113-ad5c-6168aa3bbe2b" (UID: "da695d3c-0710-4113-ad5c-6168aa3bbe2b"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.057291 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81007298-01d9-43a2-8e26-33448a1d17e0-logs" (OuterVolumeSpecName: "logs") pod "81007298-01d9-43a2-8e26-33448a1d17e0" (UID: "81007298-01d9-43a2-8e26-33448a1d17e0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.057683 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81007298-01d9-43a2-8e26-33448a1d17e0-logs\") pod \"81007298-01d9-43a2-8e26-33448a1d17e0\" (UID: \"81007298-01d9-43a2-8e26-33448a1d17e0\") " Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.057829 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81007298-01d9-43a2-8e26-33448a1d17e0-scripts\") pod \"81007298-01d9-43a2-8e26-33448a1d17e0\" (UID: \"81007298-01d9-43a2-8e26-33448a1d17e0\") " Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.058010 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/da695d3c-0710-4113-ad5c-6168aa3bbe2b-config-data\") pod \"da695d3c-0710-4113-ad5c-6168aa3bbe2b\" (UID: \"da695d3c-0710-4113-ad5c-6168aa3bbe2b\") " Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.058161 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81007298-01d9-43a2-8e26-33448a1d17e0-scripts" (OuterVolumeSpecName: "scripts") pod "81007298-01d9-43a2-8e26-33448a1d17e0" (UID: "81007298-01d9-43a2-8e26-33448a1d17e0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.058174 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/81007298-01d9-43a2-8e26-33448a1d17e0-config-data\") pod \"81007298-01d9-43a2-8e26-33448a1d17e0\" (UID: \"81007298-01d9-43a2-8e26-33448a1d17e0\") " Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.058403 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/da695d3c-0710-4113-ad5c-6168aa3bbe2b-scripts\") pod \"da695d3c-0710-4113-ad5c-6168aa3bbe2b\" (UID: \"da695d3c-0710-4113-ad5c-6168aa3bbe2b\") " Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.058744 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da695d3c-0710-4113-ad5c-6168aa3bbe2b-config-data" (OuterVolumeSpecName: "config-data") pod "da695d3c-0710-4113-ad5c-6168aa3bbe2b" (UID: "da695d3c-0710-4113-ad5c-6168aa3bbe2b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.058767 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81007298-01d9-43a2-8e26-33448a1d17e0-config-data" (OuterVolumeSpecName: "config-data") pod "81007298-01d9-43a2-8e26-33448a1d17e0" (UID: "81007298-01d9-43a2-8e26-33448a1d17e0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.059090 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da695d3c-0710-4113-ad5c-6168aa3bbe2b-scripts" (OuterVolumeSpecName: "scripts") pod "da695d3c-0710-4113-ad5c-6168aa3bbe2b" (UID: "da695d3c-0710-4113-ad5c-6168aa3bbe2b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.059542 4919 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da695d3c-0710-4113-ad5c-6168aa3bbe2b-logs\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.059678 4919 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81007298-01d9-43a2-8e26-33448a1d17e0-logs\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.059768 4919 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81007298-01d9-43a2-8e26-33448a1d17e0-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.059852 4919 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/da695d3c-0710-4113-ad5c-6168aa3bbe2b-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.059959 4919 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/81007298-01d9-43a2-8e26-33448a1d17e0-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.060205 4919 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/da695d3c-0710-4113-ad5c-6168aa3bbe2b-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.063155 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da695d3c-0710-4113-ad5c-6168aa3bbe2b-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "da695d3c-0710-4113-ad5c-6168aa3bbe2b" (UID: "da695d3c-0710-4113-ad5c-6168aa3bbe2b"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.064150 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81007298-01d9-43a2-8e26-33448a1d17e0-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "81007298-01d9-43a2-8e26-33448a1d17e0" (UID: "81007298-01d9-43a2-8e26-33448a1d17e0"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.065043 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da695d3c-0710-4113-ad5c-6168aa3bbe2b-kube-api-access-zghcz" (OuterVolumeSpecName: "kube-api-access-zghcz") pod "da695d3c-0710-4113-ad5c-6168aa3bbe2b" (UID: "da695d3c-0710-4113-ad5c-6168aa3bbe2b"). InnerVolumeSpecName "kube-api-access-zghcz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.065089 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81007298-01d9-43a2-8e26-33448a1d17e0-kube-api-access-vcjkw" (OuterVolumeSpecName: "kube-api-access-vcjkw") pod "81007298-01d9-43a2-8e26-33448a1d17e0" (UID: "81007298-01d9-43a2-8e26-33448a1d17e0"). InnerVolumeSpecName "kube-api-access-vcjkw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.074890 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-dr79l" Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.161028 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/94e437ba-f67c-41cb-887b-a1d977b041f8-scripts\") pod \"94e437ba-f67c-41cb-887b-a1d977b041f8\" (UID: \"94e437ba-f67c-41cb-887b-a1d977b041f8\") " Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.161171 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94e437ba-f67c-41cb-887b-a1d977b041f8-logs\") pod \"94e437ba-f67c-41cb-887b-a1d977b041f8\" (UID: \"94e437ba-f67c-41cb-887b-a1d977b041f8\") " Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.161231 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/94e437ba-f67c-41cb-887b-a1d977b041f8-config-data\") pod \"94e437ba-f67c-41cb-887b-a1d977b041f8\" (UID: \"94e437ba-f67c-41cb-887b-a1d977b041f8\") " Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.161379 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/94e437ba-f67c-41cb-887b-a1d977b041f8-horizon-secret-key\") pod \"94e437ba-f67c-41cb-887b-a1d977b041f8\" (UID: \"94e437ba-f67c-41cb-887b-a1d977b041f8\") " Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.161460 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktdqj\" (UniqueName: \"kubernetes.io/projected/94e437ba-f67c-41cb-887b-a1d977b041f8-kube-api-access-ktdqj\") pod \"94e437ba-f67c-41cb-887b-a1d977b041f8\" (UID: \"94e437ba-f67c-41cb-887b-a1d977b041f8\") " Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.161636 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94e437ba-f67c-41cb-887b-a1d977b041f8-scripts" (OuterVolumeSpecName: "scripts") pod "94e437ba-f67c-41cb-887b-a1d977b041f8" (UID: "94e437ba-f67c-41cb-887b-a1d977b041f8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.161902 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94e437ba-f67c-41cb-887b-a1d977b041f8-logs" (OuterVolumeSpecName: "logs") pod "94e437ba-f67c-41cb-887b-a1d977b041f8" (UID: "94e437ba-f67c-41cb-887b-a1d977b041f8"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.162180 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94e437ba-f67c-41cb-887b-a1d977b041f8-config-data" (OuterVolumeSpecName: "config-data") pod "94e437ba-f67c-41cb-887b-a1d977b041f8" (UID: "94e437ba-f67c-41cb-887b-a1d977b041f8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.162496 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zghcz\" (UniqueName: \"kubernetes.io/projected/da695d3c-0710-4113-ad5c-6168aa3bbe2b-kube-api-access-zghcz\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.162527 4919 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/da695d3c-0710-4113-ad5c-6168aa3bbe2b-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.162543 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcjkw\" (UniqueName: \"kubernetes.io/projected/81007298-01d9-43a2-8e26-33448a1d17e0-kube-api-access-vcjkw\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.162557 4919 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/94e437ba-f67c-41cb-887b-a1d977b041f8-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.162570 4919 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/81007298-01d9-43a2-8e26-33448a1d17e0-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.162583 4919 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94e437ba-f67c-41cb-887b-a1d977b041f8-logs\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.162595 4919 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/94e437ba-f67c-41cb-887b-a1d977b041f8-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.164838 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94e437ba-f67c-41cb-887b-a1d977b041f8-kube-api-access-ktdqj" (OuterVolumeSpecName: "kube-api-access-ktdqj") pod "94e437ba-f67c-41cb-887b-a1d977b041f8" (UID: "94e437ba-f67c-41cb-887b-a1d977b041f8"). InnerVolumeSpecName "kube-api-access-ktdqj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.165233 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94e437ba-f67c-41cb-887b-a1d977b041f8-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "94e437ba-f67c-41cb-887b-a1d977b041f8" (UID: "94e437ba-f67c-41cb-887b-a1d977b041f8"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.229725 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-75dd96cc4d-xnspb"] Jan 09 13:50:37 crc kubenswrapper[4919]: W0109 13:50:37.245372 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb2aeda5_21fd_4b61_bb59_d8d0b78884c2.slice/crio-037a6cda85835fc048c6d85c8c56a97e8f39600ceb3c9d661940c25db77a6883 WatchSource:0}: Error finding container 037a6cda85835fc048c6d85c8c56a97e8f39600ceb3c9d661940c25db77a6883: Status 404 returned error can't find the container with id 037a6cda85835fc048c6d85c8c56a97e8f39600ceb3c9d661940c25db77a6883 Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.263502 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a26e4dbc-6f44-4723-a81b-7bd05ca1283b-config\") pod \"a26e4dbc-6f44-4723-a81b-7bd05ca1283b\" (UID: \"a26e4dbc-6f44-4723-a81b-7bd05ca1283b\") " Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.264030 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47vzs\" (UniqueName: \"kubernetes.io/projected/a26e4dbc-6f44-4723-a81b-7bd05ca1283b-kube-api-access-47vzs\") pod \"a26e4dbc-6f44-4723-a81b-7bd05ca1283b\" (UID: \"a26e4dbc-6f44-4723-a81b-7bd05ca1283b\") " Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.264150 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a26e4dbc-6f44-4723-a81b-7bd05ca1283b-combined-ca-bundle\") pod \"a26e4dbc-6f44-4723-a81b-7bd05ca1283b\" (UID: \"a26e4dbc-6f44-4723-a81b-7bd05ca1283b\") " Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.264812 4919 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/94e437ba-f67c-41cb-887b-a1d977b041f8-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.264827 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktdqj\" (UniqueName: \"kubernetes.io/projected/94e437ba-f67c-41cb-887b-a1d977b041f8-kube-api-access-ktdqj\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.269833 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a26e4dbc-6f44-4723-a81b-7bd05ca1283b-kube-api-access-47vzs" (OuterVolumeSpecName: "kube-api-access-47vzs") pod "a26e4dbc-6f44-4723-a81b-7bd05ca1283b" (UID: "a26e4dbc-6f44-4723-a81b-7bd05ca1283b"). InnerVolumeSpecName "kube-api-access-47vzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.290112 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a26e4dbc-6f44-4723-a81b-7bd05ca1283b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a26e4dbc-6f44-4723-a81b-7bd05ca1283b" (UID: "a26e4dbc-6f44-4723-a81b-7bd05ca1283b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.291995 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a26e4dbc-6f44-4723-a81b-7bd05ca1283b-config" (OuterVolumeSpecName: "config") pod "a26e4dbc-6f44-4723-a81b-7bd05ca1283b" (UID: "a26e4dbc-6f44-4723-a81b-7bd05ca1283b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.314076 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.386177 4919 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a26e4dbc-6f44-4723-a81b-7bd05ca1283b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.386244 4919 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/a26e4dbc-6f44-4723-a81b-7bd05ca1283b-config\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.386258 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47vzs\" (UniqueName: \"kubernetes.io/projected/a26e4dbc-6f44-4723-a81b-7bd05ca1283b-kube-api-access-47vzs\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.430264 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-qvmd7"] Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.438328 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7bdd978ccd-tx6fx"] Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.460610 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6965595b5-x8vc9" event={"ID":"81007298-01d9-43a2-8e26-33448a1d17e0","Type":"ContainerDied","Data":"66d1c24940a0833ffb4306467f957bcbc4ad6ff895710c9816e1b628f732334b"} Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.460641 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6965595b5-x8vc9" Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.463200 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75dd96cc4d-xnspb" event={"ID":"db2aeda5-21fd-4b61-bb59-d8d0b78884c2","Type":"ContainerStarted","Data":"037a6cda85835fc048c6d85c8c56a97e8f39600ceb3c9d661940c25db77a6883"} Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.466416 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75f5cb997-6q6lj" event={"ID":"94e437ba-f67c-41cb-887b-a1d977b041f8","Type":"ContainerDied","Data":"c801586b8625963af6cc0c75bf1cd0d8e350ba90afc915d952b22e171ebc2814"} Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.466433 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-75f5cb997-6q6lj" Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.474730 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" event={"ID":"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c","Type":"ContainerStarted","Data":"af3cae1993f8443bd098aec195067f6b6771b2ac3e2a3073412d7f8ae6da618e"} Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.480502 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67b8c5bf6f-wsqfv" event={"ID":"da695d3c-0710-4113-ad5c-6168aa3bbe2b","Type":"ContainerDied","Data":"55e704a6767373a0bdbe3a2d11e2adb20d59a3bc76fd1fab4f2be9a1c1e31895"} Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.480637 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-67b8c5bf6f-wsqfv" Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.493803 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2","Type":"ContainerStarted","Data":"bf9b7f9a1d727c6b93dc2c2db21aad00674c0c5e4b9f563d3bec4ed53f66dab4"} Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.496254 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bdd978ccd-tx6fx" event={"ID":"158e1b10-ad5e-4a44-a3be-630a2d45bfdc","Type":"ContainerStarted","Data":"d2ad3ec5faeacbb4096485b8e60aaf5e2eebbfd348c48bca815480941d61b092"} Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.497496 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a09b42c8-eca5-4951-a549-9730a79a7308","Type":"ContainerStarted","Data":"4c84639263e9f4e93e85d9e70943193712f2bfe3faf9d207245dfbc541614041"} Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.499329 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-dr79l" event={"ID":"a26e4dbc-6f44-4723-a81b-7bd05ca1283b","Type":"ContainerDied","Data":"e5f13f53cccc56b9ae0aa631d93ee9417ee43d28fa55ba3aa2fc3e535ad82c29"} Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.499348 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-dr79l" Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.499365 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5f13f53cccc56b9ae0aa631d93ee9417ee43d28fa55ba3aa2fc3e535ad82c29" Jan 09 13:50:37 crc kubenswrapper[4919]: E0109 13:50:37.503700 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49\\\"\"" pod="openstack/cinder-db-sync-vz5pd" podUID="0a9f81fc-067d-404d-b104-bba333d3911a" Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.555836 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-75f5cb997-6q6lj"] Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.575542 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-75f5cb997-6q6lj"] Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.600365 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6965595b5-x8vc9"] Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.633893 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6965595b5-x8vc9"] Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.661680 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-67b8c5bf6f-wsqfv"] Jan 09 13:50:37 crc kubenswrapper[4919]: I0109 13:50:37.672121 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-67b8c5bf6f-wsqfv"] Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.327661 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-685444497c-xp6jw"] Jan 09 13:50:38 crc kubenswrapper[4919]: E0109 13:50:38.328583 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a26e4dbc-6f44-4723-a81b-7bd05ca1283b" containerName="neutron-db-sync" Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.328597 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="a26e4dbc-6f44-4723-a81b-7bd05ca1283b" containerName="neutron-db-sync" Jan 09 13:50:38 crc kubenswrapper[4919]: E0109 13:50:38.328608 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25ed4749-2919-40f3-a657-04e4b8b0cd84" containerName="init" Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.328614 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="25ed4749-2919-40f3-a657-04e4b8b0cd84" containerName="init" Jan 09 13:50:38 crc kubenswrapper[4919]: E0109 13:50:38.328638 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25ed4749-2919-40f3-a657-04e4b8b0cd84" containerName="dnsmasq-dns" Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.328645 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="25ed4749-2919-40f3-a657-04e4b8b0cd84" containerName="dnsmasq-dns" Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.328818 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="a26e4dbc-6f44-4723-a81b-7bd05ca1283b" containerName="neutron-db-sync" Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.328834 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="25ed4749-2919-40f3-a657-04e4b8b0cd84" containerName="dnsmasq-dns" Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.330989 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-685444497c-xp6jw" Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.350931 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-685444497c-xp6jw"] Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.409582 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4702a56c-301a-472f-b539-aa0873b1bdd1-ovsdbserver-nb\") pod \"dnsmasq-dns-685444497c-xp6jw\" (UID: \"4702a56c-301a-472f-b539-aa0873b1bdd1\") " pod="openstack/dnsmasq-dns-685444497c-xp6jw" Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.409623 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4702a56c-301a-472f-b539-aa0873b1bdd1-dns-svc\") pod \"dnsmasq-dns-685444497c-xp6jw\" (UID: \"4702a56c-301a-472f-b539-aa0873b1bdd1\") " pod="openstack/dnsmasq-dns-685444497c-xp6jw" Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.409698 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4702a56c-301a-472f-b539-aa0873b1bdd1-ovsdbserver-sb\") pod \"dnsmasq-dns-685444497c-xp6jw\" (UID: \"4702a56c-301a-472f-b539-aa0873b1bdd1\") " pod="openstack/dnsmasq-dns-685444497c-xp6jw" Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.409730 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4702a56c-301a-472f-b539-aa0873b1bdd1-dns-swift-storage-0\") pod \"dnsmasq-dns-685444497c-xp6jw\" (UID: \"4702a56c-301a-472f-b539-aa0873b1bdd1\") " pod="openstack/dnsmasq-dns-685444497c-xp6jw" Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.409762 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4702a56c-301a-472f-b539-aa0873b1bdd1-config\") pod \"dnsmasq-dns-685444497c-xp6jw\" (UID: \"4702a56c-301a-472f-b539-aa0873b1bdd1\") " pod="openstack/dnsmasq-dns-685444497c-xp6jw" Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.409781 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7lkn\" (UniqueName: \"kubernetes.io/projected/4702a56c-301a-472f-b539-aa0873b1bdd1-kube-api-access-x7lkn\") pod \"dnsmasq-dns-685444497c-xp6jw\" (UID: \"4702a56c-301a-472f-b539-aa0873b1bdd1\") " pod="openstack/dnsmasq-dns-685444497c-xp6jw" Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.437226 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6cb99dd7c6-gp5c6"] Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.439140 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6cb99dd7c6-gp5c6" Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.445747 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.446102 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.446241 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.446362 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-4hvlp" Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.470242 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6cb99dd7c6-gp5c6"] Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.512906 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4702a56c-301a-472f-b539-aa0873b1bdd1-ovsdbserver-nb\") pod \"dnsmasq-dns-685444497c-xp6jw\" (UID: \"4702a56c-301a-472f-b539-aa0873b1bdd1\") " pod="openstack/dnsmasq-dns-685444497c-xp6jw" Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.512943 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c985555-77df-4e8b-a2b0-f1127eab2886-ovndb-tls-certs\") pod \"neutron-6cb99dd7c6-gp5c6\" (UID: \"1c985555-77df-4e8b-a2b0-f1127eab2886\") " pod="openstack/neutron-6cb99dd7c6-gp5c6" Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.512969 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4702a56c-301a-472f-b539-aa0873b1bdd1-dns-svc\") pod \"dnsmasq-dns-685444497c-xp6jw\" (UID: \"4702a56c-301a-472f-b539-aa0873b1bdd1\") " pod="openstack/dnsmasq-dns-685444497c-xp6jw" Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.513014 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4702a56c-301a-472f-b539-aa0873b1bdd1-ovsdbserver-sb\") pod \"dnsmasq-dns-685444497c-xp6jw\" (UID: \"4702a56c-301a-472f-b539-aa0873b1bdd1\") " pod="openstack/dnsmasq-dns-685444497c-xp6jw" Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.513040 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4702a56c-301a-472f-b539-aa0873b1bdd1-dns-swift-storage-0\") pod \"dnsmasq-dns-685444497c-xp6jw\" (UID: \"4702a56c-301a-472f-b539-aa0873b1bdd1\") " pod="openstack/dnsmasq-dns-685444497c-xp6jw" Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.513059 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b8dl\" (UniqueName: \"kubernetes.io/projected/1c985555-77df-4e8b-a2b0-f1127eab2886-kube-api-access-9b8dl\") pod \"neutron-6cb99dd7c6-gp5c6\" (UID: \"1c985555-77df-4e8b-a2b0-f1127eab2886\") " pod="openstack/neutron-6cb99dd7c6-gp5c6" Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.513088 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4702a56c-301a-472f-b539-aa0873b1bdd1-config\") pod \"dnsmasq-dns-685444497c-xp6jw\" (UID: 
\"4702a56c-301a-472f-b539-aa0873b1bdd1\") " pod="openstack/dnsmasq-dns-685444497c-xp6jw" Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.513108 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7lkn\" (UniqueName: \"kubernetes.io/projected/4702a56c-301a-472f-b539-aa0873b1bdd1-kube-api-access-x7lkn\") pod \"dnsmasq-dns-685444497c-xp6jw\" (UID: \"4702a56c-301a-472f-b539-aa0873b1bdd1\") " pod="openstack/dnsmasq-dns-685444497c-xp6jw" Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.513159 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1c985555-77df-4e8b-a2b0-f1127eab2886-config\") pod \"neutron-6cb99dd7c6-gp5c6\" (UID: \"1c985555-77df-4e8b-a2b0-f1127eab2886\") " pod="openstack/neutron-6cb99dd7c6-gp5c6" Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.513185 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c985555-77df-4e8b-a2b0-f1127eab2886-combined-ca-bundle\") pod \"neutron-6cb99dd7c6-gp5c6\" (UID: \"1c985555-77df-4e8b-a2b0-f1127eab2886\") " pod="openstack/neutron-6cb99dd7c6-gp5c6" Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.513223 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1c985555-77df-4e8b-a2b0-f1127eab2886-httpd-config\") pod \"neutron-6cb99dd7c6-gp5c6\" (UID: \"1c985555-77df-4e8b-a2b0-f1127eab2886\") " pod="openstack/neutron-6cb99dd7c6-gp5c6" Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.514203 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4702a56c-301a-472f-b539-aa0873b1bdd1-ovsdbserver-nb\") pod \"dnsmasq-dns-685444497c-xp6jw\" (UID: \"4702a56c-301a-472f-b539-aa0873b1bdd1\") " pod="openstack/dnsmasq-dns-685444497c-xp6jw" Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.515027 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4702a56c-301a-472f-b539-aa0873b1bdd1-dns-svc\") pod \"dnsmasq-dns-685444497c-xp6jw\" (UID: \"4702a56c-301a-472f-b539-aa0873b1bdd1\") " pod="openstack/dnsmasq-dns-685444497c-xp6jw" Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.515529 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4702a56c-301a-472f-b539-aa0873b1bdd1-ovsdbserver-sb\") pod \"dnsmasq-dns-685444497c-xp6jw\" (UID: \"4702a56c-301a-472f-b539-aa0873b1bdd1\") " pod="openstack/dnsmasq-dns-685444497c-xp6jw" Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.516224 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4702a56c-301a-472f-b539-aa0873b1bdd1-dns-swift-storage-0\") pod \"dnsmasq-dns-685444497c-xp6jw\" (UID: \"4702a56c-301a-472f-b539-aa0873b1bdd1\") " pod="openstack/dnsmasq-dns-685444497c-xp6jw" Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.516739 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4702a56c-301a-472f-b539-aa0873b1bdd1-config\") pod \"dnsmasq-dns-685444497c-xp6jw\" (UID: \"4702a56c-301a-472f-b539-aa0873b1bdd1\") " 
pod="openstack/dnsmasq-dns-685444497c-xp6jw" Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.520571 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"25d27694-1c75-4c18-9d9d-2e766852453f","Type":"ContainerStarted","Data":"fb167accce72e04b9a09108d1a8ad2bc8cf774141de4b62410f3cae5d517b80b"} Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.520617 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"25d27694-1c75-4c18-9d9d-2e766852453f","Type":"ContainerStarted","Data":"5a5a7f74341bf0f178ad69e20b9a6ee8729d2f84d5def0b42e3e3148a03624b2"} Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.520742 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="25d27694-1c75-4c18-9d9d-2e766852453f" containerName="glance-log" containerID="cri-o://5a5a7f74341bf0f178ad69e20b9a6ee8729d2f84d5def0b42e3e3148a03624b2" gracePeriod=30 Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.521199 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="25d27694-1c75-4c18-9d9d-2e766852453f" containerName="glance-httpd" containerID="cri-o://fb167accce72e04b9a09108d1a8ad2bc8cf774141de4b62410f3cae5d517b80b" gracePeriod=30 Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.528423 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a09b42c8-eca5-4951-a549-9730a79a7308","Type":"ContainerStarted","Data":"93d24260279509ec55196c6a19f8e8819dbe60b58d5ae5e97ba98a102039680b"} Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.540163 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7lkn\" (UniqueName: \"kubernetes.io/projected/4702a56c-301a-472f-b539-aa0873b1bdd1-kube-api-access-x7lkn\") pod \"dnsmasq-dns-685444497c-xp6jw\" (UID: \"4702a56c-301a-472f-b539-aa0873b1bdd1\") " pod="openstack/dnsmasq-dns-685444497c-xp6jw" Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.544433 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75dd96cc4d-xnspb" event={"ID":"db2aeda5-21fd-4b61-bb59-d8d0b78884c2","Type":"ContainerStarted","Data":"414d9e8ed643914cf3d1746b244b43b220cfa8ae2c0f1072a3fecc8c024da663"} Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.553753 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=26.55373441 podStartE2EDuration="26.55373441s" podCreationTimestamp="2026-01-09 13:50:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:50:38.552376216 +0000 UTC m=+1218.100215666" watchObservedRunningTime="2026-01-09 13:50:38.55373441 +0000 UTC m=+1218.101573860" Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.570029 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qvmd7" event={"ID":"fd2e6850-0b12-460b-9da8-56a74f4324f3","Type":"ContainerStarted","Data":"b0ef61d3089ead87370e9d64df135e8ce258018b5271a18bbbf3ff9b807454b1"} Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.570060 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qvmd7" 
event={"ID":"fd2e6850-0b12-460b-9da8-56a74f4324f3","Type":"ContainerStarted","Data":"d14ceabb256c9b3740a267550fcecee03a7727fc28fec8207b012ab3266ce368"} Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.601952 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-qvmd7" podStartSLOduration=7.601936548 podStartE2EDuration="7.601936548s" podCreationTimestamp="2026-01-09 13:50:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:50:38.594457262 +0000 UTC m=+1218.142296702" watchObservedRunningTime="2026-01-09 13:50:38.601936548 +0000 UTC m=+1218.149775988" Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.614566 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1c985555-77df-4e8b-a2b0-f1127eab2886-config\") pod \"neutron-6cb99dd7c6-gp5c6\" (UID: \"1c985555-77df-4e8b-a2b0-f1127eab2886\") " pod="openstack/neutron-6cb99dd7c6-gp5c6" Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.614632 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c985555-77df-4e8b-a2b0-f1127eab2886-combined-ca-bundle\") pod \"neutron-6cb99dd7c6-gp5c6\" (UID: \"1c985555-77df-4e8b-a2b0-f1127eab2886\") " pod="openstack/neutron-6cb99dd7c6-gp5c6" Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.614672 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1c985555-77df-4e8b-a2b0-f1127eab2886-httpd-config\") pod \"neutron-6cb99dd7c6-gp5c6\" (UID: \"1c985555-77df-4e8b-a2b0-f1127eab2886\") " pod="openstack/neutron-6cb99dd7c6-gp5c6" Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.614732 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c985555-77df-4e8b-a2b0-f1127eab2886-ovndb-tls-certs\") pod \"neutron-6cb99dd7c6-gp5c6\" (UID: \"1c985555-77df-4e8b-a2b0-f1127eab2886\") " pod="openstack/neutron-6cb99dd7c6-gp5c6" Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.614811 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b8dl\" (UniqueName: \"kubernetes.io/projected/1c985555-77df-4e8b-a2b0-f1127eab2886-kube-api-access-9b8dl\") pod \"neutron-6cb99dd7c6-gp5c6\" (UID: \"1c985555-77df-4e8b-a2b0-f1127eab2886\") " pod="openstack/neutron-6cb99dd7c6-gp5c6" Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.622272 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1c985555-77df-4e8b-a2b0-f1127eab2886-config\") pod \"neutron-6cb99dd7c6-gp5c6\" (UID: \"1c985555-77df-4e8b-a2b0-f1127eab2886\") " pod="openstack/neutron-6cb99dd7c6-gp5c6" Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.628141 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1c985555-77df-4e8b-a2b0-f1127eab2886-httpd-config\") pod \"neutron-6cb99dd7c6-gp5c6\" (UID: \"1c985555-77df-4e8b-a2b0-f1127eab2886\") " pod="openstack/neutron-6cb99dd7c6-gp5c6" Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.628562 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/1c985555-77df-4e8b-a2b0-f1127eab2886-ovndb-tls-certs\") pod \"neutron-6cb99dd7c6-gp5c6\" (UID: \"1c985555-77df-4e8b-a2b0-f1127eab2886\") " pod="openstack/neutron-6cb99dd7c6-gp5c6" Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.633109 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c985555-77df-4e8b-a2b0-f1127eab2886-combined-ca-bundle\") pod \"neutron-6cb99dd7c6-gp5c6\" (UID: \"1c985555-77df-4e8b-a2b0-f1127eab2886\") " pod="openstack/neutron-6cb99dd7c6-gp5c6" Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.670942 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b8dl\" (UniqueName: \"kubernetes.io/projected/1c985555-77df-4e8b-a2b0-f1127eab2886-kube-api-access-9b8dl\") pod \"neutron-6cb99dd7c6-gp5c6\" (UID: \"1c985555-77df-4e8b-a2b0-f1127eab2886\") " pod="openstack/neutron-6cb99dd7c6-gp5c6" Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.710809 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-685444497c-xp6jw" Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.766911 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81007298-01d9-43a2-8e26-33448a1d17e0" path="/var/lib/kubelet/pods/81007298-01d9-43a2-8e26-33448a1d17e0/volumes" Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.768976 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94e437ba-f67c-41cb-887b-a1d977b041f8" path="/var/lib/kubelet/pods/94e437ba-f67c-41cb-887b-a1d977b041f8/volumes" Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.769772 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da695d3c-0710-4113-ad5c-6168aa3bbe2b" path="/var/lib/kubelet/pods/da695d3c-0710-4113-ad5c-6168aa3bbe2b/volumes" Jan 09 13:50:38 crc kubenswrapper[4919]: I0109 13:50:38.774982 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6cb99dd7c6-gp5c6" Jan 09 13:50:39 crc kubenswrapper[4919]: I0109 13:50:39.582751 4919 generic.go:334] "Generic (PLEG): container finished" podID="25d27694-1c75-4c18-9d9d-2e766852453f" containerID="fb167accce72e04b9a09108d1a8ad2bc8cf774141de4b62410f3cae5d517b80b" exitCode=0 Jan 09 13:50:39 crc kubenswrapper[4919]: I0109 13:50:39.583190 4919 generic.go:334] "Generic (PLEG): container finished" podID="25d27694-1c75-4c18-9d9d-2e766852453f" containerID="5a5a7f74341bf0f178ad69e20b9a6ee8729d2f84d5def0b42e3e3148a03624b2" exitCode=143 Jan 09 13:50:39 crc kubenswrapper[4919]: I0109 13:50:39.583287 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"25d27694-1c75-4c18-9d9d-2e766852453f","Type":"ContainerDied","Data":"fb167accce72e04b9a09108d1a8ad2bc8cf774141de4b62410f3cae5d517b80b"} Jan 09 13:50:39 crc kubenswrapper[4919]: I0109 13:50:39.583313 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"25d27694-1c75-4c18-9d9d-2e766852453f","Type":"ContainerDied","Data":"5a5a7f74341bf0f178ad69e20b9a6ee8729d2f84d5def0b42e3e3148a03624b2"} Jan 09 13:50:39 crc kubenswrapper[4919]: I0109 13:50:39.589741 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bdd978ccd-tx6fx" event={"ID":"158e1b10-ad5e-4a44-a3be-630a2d45bfdc","Type":"ContainerStarted","Data":"6bc02be1c023954fa281e82eccc50a9262899736d9b2a950140c11a70d979153"} Jan 09 13:50:39 crc kubenswrapper[4919]: I0109 13:50:39.613201 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6cb99dd7c6-gp5c6"] Jan 09 13:50:39 crc kubenswrapper[4919]: I0109 13:50:39.636673 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 09 13:50:39 crc kubenswrapper[4919]: I0109 13:50:39.744741 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6n7s7\" (UniqueName: \"kubernetes.io/projected/25d27694-1c75-4c18-9d9d-2e766852453f-kube-api-access-6n7s7\") pod \"25d27694-1c75-4c18-9d9d-2e766852453f\" (UID: \"25d27694-1c75-4c18-9d9d-2e766852453f\") " Jan 09 13:50:39 crc kubenswrapper[4919]: I0109 13:50:39.744838 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25d27694-1c75-4c18-9d9d-2e766852453f-combined-ca-bundle\") pod \"25d27694-1c75-4c18-9d9d-2e766852453f\" (UID: \"25d27694-1c75-4c18-9d9d-2e766852453f\") " Jan 09 13:50:39 crc kubenswrapper[4919]: I0109 13:50:39.744852 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-685444497c-xp6jw"] Jan 09 13:50:39 crc kubenswrapper[4919]: I0109 13:50:39.745055 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/25d27694-1c75-4c18-9d9d-2e766852453f-httpd-run\") pod \"25d27694-1c75-4c18-9d9d-2e766852453f\" (UID: \"25d27694-1c75-4c18-9d9d-2e766852453f\") " Jan 09 13:50:39 crc kubenswrapper[4919]: I0109 13:50:39.745085 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25d27694-1c75-4c18-9d9d-2e766852453f-scripts\") pod \"25d27694-1c75-4c18-9d9d-2e766852453f\" (UID: \"25d27694-1c75-4c18-9d9d-2e766852453f\") " Jan 09 13:50:39 crc kubenswrapper[4919]: I0109 13:50:39.745123 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25d27694-1c75-4c18-9d9d-2e766852453f-logs\") pod \"25d27694-1c75-4c18-9d9d-2e766852453f\" (UID: \"25d27694-1c75-4c18-9d9d-2e766852453f\") " Jan 09 13:50:39 crc kubenswrapper[4919]: I0109 13:50:39.745198 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25d27694-1c75-4c18-9d9d-2e766852453f-public-tls-certs\") pod \"25d27694-1c75-4c18-9d9d-2e766852453f\" (UID: \"25d27694-1c75-4c18-9d9d-2e766852453f\") " Jan 09 13:50:39 crc kubenswrapper[4919]: I0109 13:50:39.745256 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"25d27694-1c75-4c18-9d9d-2e766852453f\" (UID: \"25d27694-1c75-4c18-9d9d-2e766852453f\") " Jan 09 13:50:39 crc kubenswrapper[4919]: I0109 13:50:39.745302 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25d27694-1c75-4c18-9d9d-2e766852453f-config-data\") pod \"25d27694-1c75-4c18-9d9d-2e766852453f\" (UID: \"25d27694-1c75-4c18-9d9d-2e766852453f\") " Jan 09 13:50:39 crc kubenswrapper[4919]: I0109 13:50:39.745527 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25d27694-1c75-4c18-9d9d-2e766852453f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "25d27694-1c75-4c18-9d9d-2e766852453f" (UID: "25d27694-1c75-4c18-9d9d-2e766852453f"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:50:39 crc kubenswrapper[4919]: I0109 13:50:39.745726 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25d27694-1c75-4c18-9d9d-2e766852453f-logs" (OuterVolumeSpecName: "logs") pod "25d27694-1c75-4c18-9d9d-2e766852453f" (UID: "25d27694-1c75-4c18-9d9d-2e766852453f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:50:39 crc kubenswrapper[4919]: I0109 13:50:39.748386 4919 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/25d27694-1c75-4c18-9d9d-2e766852453f-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:39 crc kubenswrapper[4919]: I0109 13:50:39.748412 4919 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25d27694-1c75-4c18-9d9d-2e766852453f-logs\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:39 crc kubenswrapper[4919]: I0109 13:50:39.750812 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25d27694-1c75-4c18-9d9d-2e766852453f-kube-api-access-6n7s7" (OuterVolumeSpecName: "kube-api-access-6n7s7") pod "25d27694-1c75-4c18-9d9d-2e766852453f" (UID: "25d27694-1c75-4c18-9d9d-2e766852453f"). InnerVolumeSpecName "kube-api-access-6n7s7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:50:39 crc kubenswrapper[4919]: I0109 13:50:39.753476 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25d27694-1c75-4c18-9d9d-2e766852453f-scripts" (OuterVolumeSpecName: "scripts") pod "25d27694-1c75-4c18-9d9d-2e766852453f" (UID: "25d27694-1c75-4c18-9d9d-2e766852453f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:50:39 crc kubenswrapper[4919]: I0109 13:50:39.767492 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "25d27694-1c75-4c18-9d9d-2e766852453f" (UID: "25d27694-1c75-4c18-9d9d-2e766852453f"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 09 13:50:39 crc kubenswrapper[4919]: I0109 13:50:39.807284 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25d27694-1c75-4c18-9d9d-2e766852453f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "25d27694-1c75-4c18-9d9d-2e766852453f" (UID: "25d27694-1c75-4c18-9d9d-2e766852453f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:50:39 crc kubenswrapper[4919]: I0109 13:50:39.839908 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25d27694-1c75-4c18-9d9d-2e766852453f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "25d27694-1c75-4c18-9d9d-2e766852453f" (UID: "25d27694-1c75-4c18-9d9d-2e766852453f"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:50:39 crc kubenswrapper[4919]: I0109 13:50:39.840343 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25d27694-1c75-4c18-9d9d-2e766852453f-config-data" (OuterVolumeSpecName: "config-data") pod "25d27694-1c75-4c18-9d9d-2e766852453f" (UID: "25d27694-1c75-4c18-9d9d-2e766852453f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:50:39 crc kubenswrapper[4919]: I0109 13:50:39.850429 4919 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25d27694-1c75-4c18-9d9d-2e766852453f-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:39 crc kubenswrapper[4919]: I0109 13:50:39.850468 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6n7s7\" (UniqueName: \"kubernetes.io/projected/25d27694-1c75-4c18-9d9d-2e766852453f-kube-api-access-6n7s7\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:39 crc kubenswrapper[4919]: I0109 13:50:39.850483 4919 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25d27694-1c75-4c18-9d9d-2e766852453f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:39 crc kubenswrapper[4919]: I0109 13:50:39.850494 4919 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25d27694-1c75-4c18-9d9d-2e766852453f-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:39 crc kubenswrapper[4919]: I0109 13:50:39.850504 4919 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25d27694-1c75-4c18-9d9d-2e766852453f-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:39 crc kubenswrapper[4919]: I0109 13:50:39.850528 4919 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 09 13:50:39 crc kubenswrapper[4919]: I0109 13:50:39.882090 4919 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 09 13:50:39 crc kubenswrapper[4919]: I0109 13:50:39.951958 4919 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:40 crc kubenswrapper[4919]: I0109 13:50:40.599758 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75dd96cc4d-xnspb" event={"ID":"db2aeda5-21fd-4b61-bb59-d8d0b78884c2","Type":"ContainerStarted","Data":"3d640a222408bf497da81331e9427e406c254b05d9426047574a94a67087fb4e"} Jan 09 13:50:40 crc kubenswrapper[4919]: I0109 13:50:40.601978 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6cb99dd7c6-gp5c6" event={"ID":"1c985555-77df-4e8b-a2b0-f1127eab2886","Type":"ContainerStarted","Data":"7bdba7b03e0f2aa797c7f5e07138ac2ca7bdf750e918fe216718bca831ab6b96"} Jan 09 13:50:40 crc kubenswrapper[4919]: I0109 13:50:40.602053 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6cb99dd7c6-gp5c6" event={"ID":"1c985555-77df-4e8b-a2b0-f1127eab2886","Type":"ContainerStarted","Data":"a00a2ec12e1bc3bc57fd45a25731877f0802cbccfda89e8813be30d4f8f3fa79"} Jan 09 13:50:40 crc kubenswrapper[4919]: I0109 13:50:40.602068 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6cb99dd7c6-gp5c6" event={"ID":"1c985555-77df-4e8b-a2b0-f1127eab2886","Type":"ContainerStarted","Data":"7836be13f6ebfda641eed77e9a15d703bf33173e0c9491cb4aa9b2fb4393f629"} Jan 09 13:50:40 crc kubenswrapper[4919]: I0109 13:50:40.602102 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6cb99dd7c6-gp5c6" Jan 09 13:50:40 crc 
kubenswrapper[4919]: I0109 13:50:40.604240 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 09 13:50:40 crc kubenswrapper[4919]: I0109 13:50:40.604269 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"25d27694-1c75-4c18-9d9d-2e766852453f","Type":"ContainerDied","Data":"0957de93b7dc9f13d0e4feea3421bbe6abc65147216b41c039a69aab46b6cc9f"} Jan 09 13:50:40 crc kubenswrapper[4919]: I0109 13:50:40.604315 4919 scope.go:117] "RemoveContainer" containerID="fb167accce72e04b9a09108d1a8ad2bc8cf774141de4b62410f3cae5d517b80b" Jan 09 13:50:40 crc kubenswrapper[4919]: I0109 13:50:40.611726 4919 generic.go:334] "Generic (PLEG): container finished" podID="4702a56c-301a-472f-b539-aa0873b1bdd1" containerID="d6f6cf438f1dda786986716c28824e60027356bc2cba81a40d36a89ab8545349" exitCode=0 Jan 09 13:50:40 crc kubenswrapper[4919]: I0109 13:50:40.611791 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685444497c-xp6jw" event={"ID":"4702a56c-301a-472f-b539-aa0873b1bdd1","Type":"ContainerDied","Data":"d6f6cf438f1dda786986716c28824e60027356bc2cba81a40d36a89ab8545349"} Jan 09 13:50:40 crc kubenswrapper[4919]: I0109 13:50:40.611826 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685444497c-xp6jw" event={"ID":"4702a56c-301a-472f-b539-aa0873b1bdd1","Type":"ContainerStarted","Data":"f769b9bd93f2b33941dcfaede43ec01292f69ea2394de1d0d9df6cdc16919399"} Jan 09 13:50:40 crc kubenswrapper[4919]: I0109 13:50:40.622549 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a09b42c8-eca5-4951-a549-9730a79a7308","Type":"ContainerStarted","Data":"70864129ff88517425a277ca45d2d360b7f37b92b399309fd290b9487dc2a18f"} Jan 09 13:50:40 crc kubenswrapper[4919]: I0109 13:50:40.622725 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="a09b42c8-eca5-4951-a549-9730a79a7308" containerName="glance-log" containerID="cri-o://93d24260279509ec55196c6a19f8e8819dbe60b58d5ae5e97ba98a102039680b" gracePeriod=30 Jan 09 13:50:40 crc kubenswrapper[4919]: I0109 13:50:40.622973 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="a09b42c8-eca5-4951-a549-9730a79a7308" containerName="glance-httpd" containerID="cri-o://70864129ff88517425a277ca45d2d360b7f37b92b399309fd290b9487dc2a18f" gracePeriod=30 Jan 09 13:50:40 crc kubenswrapper[4919]: I0109 13:50:40.639735 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-75dd96cc4d-xnspb" podStartSLOduration=26.127331957 podStartE2EDuration="26.639707444s" podCreationTimestamp="2026-01-09 13:50:14 +0000 UTC" firstStartedPulling="2026-01-09 13:50:37.254893102 +0000 UTC m=+1216.802732552" lastFinishedPulling="2026-01-09 13:50:37.767268589 +0000 UTC m=+1217.315108039" observedRunningTime="2026-01-09 13:50:40.631422958 +0000 UTC m=+1220.179262418" watchObservedRunningTime="2026-01-09 13:50:40.639707444 +0000 UTC m=+1220.187546894" Jan 09 13:50:40 crc kubenswrapper[4919]: I0109 13:50:40.644069 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2","Type":"ContainerStarted","Data":"e9e27490ca5cceadd32c796cb2dfb1ec9b49b2b17c3d9a47c725454b662ce14f"} Jan 09 13:50:40 crc kubenswrapper[4919]: I0109 
13:50:40.652620 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bdd978ccd-tx6fx" event={"ID":"158e1b10-ad5e-4a44-a3be-630a2d45bfdc","Type":"ContainerStarted","Data":"5c819616410e56b0be1791f6160f91f8536c75f61179a540a4f44a261b16ac64"} Jan 09 13:50:40 crc kubenswrapper[4919]: I0109 13:50:40.666693 4919 scope.go:117] "RemoveContainer" containerID="5a5a7f74341bf0f178ad69e20b9a6ee8729d2f84d5def0b42e3e3148a03624b2" Jan 09 13:50:40 crc kubenswrapper[4919]: I0109 13:50:40.711893 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6cb99dd7c6-gp5c6" podStartSLOduration=2.7118718680000002 podStartE2EDuration="2.711871868s" podCreationTimestamp="2026-01-09 13:50:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:50:40.697135901 +0000 UTC m=+1220.244975351" watchObservedRunningTime="2026-01-09 13:50:40.711871868 +0000 UTC m=+1220.259711318" Jan 09 13:50:40 crc kubenswrapper[4919]: I0109 13:50:40.769198 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=27.769166922 podStartE2EDuration="27.769166922s" podCreationTimestamp="2026-01-09 13:50:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:50:40.756555018 +0000 UTC m=+1220.304394468" watchObservedRunningTime="2026-01-09 13:50:40.769166922 +0000 UTC m=+1220.317006372" Jan 09 13:50:40 crc kubenswrapper[4919]: I0109 13:50:40.796315 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 13:50:40 crc kubenswrapper[4919]: I0109 13:50:40.817195 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 13:50:40 crc kubenswrapper[4919]: I0109 13:50:40.856133 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 13:50:40 crc kubenswrapper[4919]: E0109 13:50:40.856610 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25d27694-1c75-4c18-9d9d-2e766852453f" containerName="glance-log" Jan 09 13:50:40 crc kubenswrapper[4919]: I0109 13:50:40.856631 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="25d27694-1c75-4c18-9d9d-2e766852453f" containerName="glance-log" Jan 09 13:50:40 crc kubenswrapper[4919]: E0109 13:50:40.856660 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25d27694-1c75-4c18-9d9d-2e766852453f" containerName="glance-httpd" Jan 09 13:50:40 crc kubenswrapper[4919]: I0109 13:50:40.856669 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="25d27694-1c75-4c18-9d9d-2e766852453f" containerName="glance-httpd" Jan 09 13:50:40 crc kubenswrapper[4919]: I0109 13:50:40.856868 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="25d27694-1c75-4c18-9d9d-2e766852453f" containerName="glance-log" Jan 09 13:50:40 crc kubenswrapper[4919]: I0109 13:50:40.856894 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="25d27694-1c75-4c18-9d9d-2e766852453f" containerName="glance-httpd" Jan 09 13:50:40 crc kubenswrapper[4919]: I0109 13:50:40.864106 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 09 13:50:40 crc kubenswrapper[4919]: I0109 13:50:40.866444 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 09 13:50:40 crc kubenswrapper[4919]: I0109 13:50:40.866602 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 09 13:50:40 crc kubenswrapper[4919]: I0109 13:50:40.870705 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 13:50:40 crc kubenswrapper[4919]: I0109 13:50:40.877061 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7bdd978ccd-tx6fx" podStartSLOduration=27.33182658 podStartE2EDuration="27.877037983s" podCreationTimestamp="2026-01-09 13:50:13 +0000 UTC" firstStartedPulling="2026-01-09 13:50:37.442872985 +0000 UTC m=+1216.990712435" lastFinishedPulling="2026-01-09 13:50:37.988084368 +0000 UTC m=+1217.535923838" observedRunningTime="2026-01-09 13:50:40.83548771 +0000 UTC m=+1220.383327160" watchObservedRunningTime="2026-01-09 13:50:40.877037983 +0000 UTC m=+1220.424877433" Jan 09 13:50:40 crc kubenswrapper[4919]: I0109 13:50:40.929575 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-584b4bc589-6qnkd"] Jan 09 13:50:40 crc kubenswrapper[4919]: I0109 13:50:40.931689 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-584b4bc589-6qnkd" Jan 09 13:50:40 crc kubenswrapper[4919]: I0109 13:50:40.933764 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 09 13:50:40 crc kubenswrapper[4919]: I0109 13:50:40.935024 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 09 13:50:40 crc kubenswrapper[4919]: I0109 13:50:40.973649 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-584b4bc589-6qnkd"] Jan 09 13:50:40 crc kubenswrapper[4919]: I0109 13:50:40.980553 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d3d016b-608b-4a81-aeae-7b1e4c75d893-config-data\") pod \"glance-default-external-api-0\" (UID: \"0d3d016b-608b-4a81-aeae-7b1e4c75d893\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:40 crc kubenswrapper[4919]: I0109 13:50:40.980613 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n5mz\" (UniqueName: \"kubernetes.io/projected/0d3d016b-608b-4a81-aeae-7b1e4c75d893-kube-api-access-4n5mz\") pod \"glance-default-external-api-0\" (UID: \"0d3d016b-608b-4a81-aeae-7b1e4c75d893\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:40 crc kubenswrapper[4919]: I0109 13:50:40.980712 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0d3d016b-608b-4a81-aeae-7b1e4c75d893-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0d3d016b-608b-4a81-aeae-7b1e4c75d893\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:40 crc kubenswrapper[4919]: I0109 13:50:40.980753 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d3d016b-608b-4a81-aeae-7b1e4c75d893-logs\") pod 
\"glance-default-external-api-0\" (UID: \"0d3d016b-608b-4a81-aeae-7b1e4c75d893\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:40 crc kubenswrapper[4919]: I0109 13:50:40.980794 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d3d016b-608b-4a81-aeae-7b1e4c75d893-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0d3d016b-608b-4a81-aeae-7b1e4c75d893\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:40 crc kubenswrapper[4919]: I0109 13:50:40.980885 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d3d016b-608b-4a81-aeae-7b1e4c75d893-scripts\") pod \"glance-default-external-api-0\" (UID: \"0d3d016b-608b-4a81-aeae-7b1e4c75d893\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:40 crc kubenswrapper[4919]: I0109 13:50:40.980959 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d3d016b-608b-4a81-aeae-7b1e4c75d893-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0d3d016b-608b-4a81-aeae-7b1e4c75d893\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:40 crc kubenswrapper[4919]: I0109 13:50:40.981031 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"0d3d016b-608b-4a81-aeae-7b1e4c75d893\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:41 crc kubenswrapper[4919]: I0109 13:50:41.083078 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b93b1e1b-72fa-443d-ba2c-e9c9920f918a-config\") pod \"neutron-584b4bc589-6qnkd\" (UID: \"b93b1e1b-72fa-443d-ba2c-e9c9920f918a\") " pod="openstack/neutron-584b4bc589-6qnkd" Jan 09 13:50:41 crc kubenswrapper[4919]: I0109 13:50:41.083124 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d3d016b-608b-4a81-aeae-7b1e4c75d893-logs\") pod \"glance-default-external-api-0\" (UID: \"0d3d016b-608b-4a81-aeae-7b1e4c75d893\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:41 crc kubenswrapper[4919]: I0109 13:50:41.083151 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plrcw\" (UniqueName: \"kubernetes.io/projected/b93b1e1b-72fa-443d-ba2c-e9c9920f918a-kube-api-access-plrcw\") pod \"neutron-584b4bc589-6qnkd\" (UID: \"b93b1e1b-72fa-443d-ba2c-e9c9920f918a\") " pod="openstack/neutron-584b4bc589-6qnkd" Jan 09 13:50:41 crc kubenswrapper[4919]: I0109 13:50:41.083184 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d3d016b-608b-4a81-aeae-7b1e4c75d893-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0d3d016b-608b-4a81-aeae-7b1e4c75d893\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:41 crc kubenswrapper[4919]: I0109 13:50:41.083222 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/b93b1e1b-72fa-443d-ba2c-e9c9920f918a-httpd-config\") pod \"neutron-584b4bc589-6qnkd\" (UID: \"b93b1e1b-72fa-443d-ba2c-e9c9920f918a\") " pod="openstack/neutron-584b4bc589-6qnkd" Jan 09 13:50:41 crc kubenswrapper[4919]: I0109 13:50:41.083241 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b93b1e1b-72fa-443d-ba2c-e9c9920f918a-ovndb-tls-certs\") pod \"neutron-584b4bc589-6qnkd\" (UID: \"b93b1e1b-72fa-443d-ba2c-e9c9920f918a\") " pod="openstack/neutron-584b4bc589-6qnkd" Jan 09 13:50:41 crc kubenswrapper[4919]: I0109 13:50:41.083271 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b93b1e1b-72fa-443d-ba2c-e9c9920f918a-internal-tls-certs\") pod \"neutron-584b4bc589-6qnkd\" (UID: \"b93b1e1b-72fa-443d-ba2c-e9c9920f918a\") " pod="openstack/neutron-584b4bc589-6qnkd" Jan 09 13:50:41 crc kubenswrapper[4919]: I0109 13:50:41.083292 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b93b1e1b-72fa-443d-ba2c-e9c9920f918a-public-tls-certs\") pod \"neutron-584b4bc589-6qnkd\" (UID: \"b93b1e1b-72fa-443d-ba2c-e9c9920f918a\") " pod="openstack/neutron-584b4bc589-6qnkd" Jan 09 13:50:41 crc kubenswrapper[4919]: I0109 13:50:41.083458 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d3d016b-608b-4a81-aeae-7b1e4c75d893-scripts\") pod \"glance-default-external-api-0\" (UID: \"0d3d016b-608b-4a81-aeae-7b1e4c75d893\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:41 crc kubenswrapper[4919]: I0109 13:50:41.083530 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d3d016b-608b-4a81-aeae-7b1e4c75d893-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0d3d016b-608b-4a81-aeae-7b1e4c75d893\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:41 crc kubenswrapper[4919]: I0109 13:50:41.083636 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"0d3d016b-608b-4a81-aeae-7b1e4c75d893\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:41 crc kubenswrapper[4919]: I0109 13:50:41.083655 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d3d016b-608b-4a81-aeae-7b1e4c75d893-logs\") pod \"glance-default-external-api-0\" (UID: \"0d3d016b-608b-4a81-aeae-7b1e4c75d893\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:41 crc kubenswrapper[4919]: I0109 13:50:41.083679 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b93b1e1b-72fa-443d-ba2c-e9c9920f918a-combined-ca-bundle\") pod \"neutron-584b4bc589-6qnkd\" (UID: \"b93b1e1b-72fa-443d-ba2c-e9c9920f918a\") " pod="openstack/neutron-584b4bc589-6qnkd" Jan 09 13:50:41 crc kubenswrapper[4919]: I0109 13:50:41.083774 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0d3d016b-608b-4a81-aeae-7b1e4c75d893-config-data\") pod \"glance-default-external-api-0\" (UID: \"0d3d016b-608b-4a81-aeae-7b1e4c75d893\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:41 crc kubenswrapper[4919]: I0109 13:50:41.083807 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n5mz\" (UniqueName: \"kubernetes.io/projected/0d3d016b-608b-4a81-aeae-7b1e4c75d893-kube-api-access-4n5mz\") pod \"glance-default-external-api-0\" (UID: \"0d3d016b-608b-4a81-aeae-7b1e4c75d893\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:41 crc kubenswrapper[4919]: I0109 13:50:41.083972 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0d3d016b-608b-4a81-aeae-7b1e4c75d893-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0d3d016b-608b-4a81-aeae-7b1e4c75d893\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:41 crc kubenswrapper[4919]: I0109 13:50:41.084553 4919 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"0d3d016b-608b-4a81-aeae-7b1e4c75d893\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Jan 09 13:50:41 crc kubenswrapper[4919]: I0109 13:50:41.084653 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0d3d016b-608b-4a81-aeae-7b1e4c75d893-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0d3d016b-608b-4a81-aeae-7b1e4c75d893\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:41 crc kubenswrapper[4919]: I0109 13:50:41.088845 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d3d016b-608b-4a81-aeae-7b1e4c75d893-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0d3d016b-608b-4a81-aeae-7b1e4c75d893\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:41 crc kubenswrapper[4919]: I0109 13:50:41.090940 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d3d016b-608b-4a81-aeae-7b1e4c75d893-scripts\") pod \"glance-default-external-api-0\" (UID: \"0d3d016b-608b-4a81-aeae-7b1e4c75d893\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:41 crc kubenswrapper[4919]: I0109 13:50:41.100894 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d3d016b-608b-4a81-aeae-7b1e4c75d893-config-data\") pod \"glance-default-external-api-0\" (UID: \"0d3d016b-608b-4a81-aeae-7b1e4c75d893\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:41 crc kubenswrapper[4919]: I0109 13:50:41.107841 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n5mz\" (UniqueName: \"kubernetes.io/projected/0d3d016b-608b-4a81-aeae-7b1e4c75d893-kube-api-access-4n5mz\") pod \"glance-default-external-api-0\" (UID: \"0d3d016b-608b-4a81-aeae-7b1e4c75d893\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:41 crc kubenswrapper[4919]: I0109 13:50:41.109392 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d3d016b-608b-4a81-aeae-7b1e4c75d893-public-tls-certs\") pod 
\"glance-default-external-api-0\" (UID: \"0d3d016b-608b-4a81-aeae-7b1e4c75d893\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:41 crc kubenswrapper[4919]: I0109 13:50:41.135470 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"0d3d016b-608b-4a81-aeae-7b1e4c75d893\") " pod="openstack/glance-default-external-api-0" Jan 09 13:50:41 crc kubenswrapper[4919]: I0109 13:50:41.187705 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b93b1e1b-72fa-443d-ba2c-e9c9920f918a-httpd-config\") pod \"neutron-584b4bc589-6qnkd\" (UID: \"b93b1e1b-72fa-443d-ba2c-e9c9920f918a\") " pod="openstack/neutron-584b4bc589-6qnkd" Jan 09 13:50:41 crc kubenswrapper[4919]: I0109 13:50:41.187753 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b93b1e1b-72fa-443d-ba2c-e9c9920f918a-ovndb-tls-certs\") pod \"neutron-584b4bc589-6qnkd\" (UID: \"b93b1e1b-72fa-443d-ba2c-e9c9920f918a\") " pod="openstack/neutron-584b4bc589-6qnkd" Jan 09 13:50:41 crc kubenswrapper[4919]: I0109 13:50:41.187783 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b93b1e1b-72fa-443d-ba2c-e9c9920f918a-internal-tls-certs\") pod \"neutron-584b4bc589-6qnkd\" (UID: \"b93b1e1b-72fa-443d-ba2c-e9c9920f918a\") " pod="openstack/neutron-584b4bc589-6qnkd" Jan 09 13:50:41 crc kubenswrapper[4919]: I0109 13:50:41.187803 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b93b1e1b-72fa-443d-ba2c-e9c9920f918a-public-tls-certs\") pod \"neutron-584b4bc589-6qnkd\" (UID: \"b93b1e1b-72fa-443d-ba2c-e9c9920f918a\") " pod="openstack/neutron-584b4bc589-6qnkd" Jan 09 13:50:41 crc kubenswrapper[4919]: I0109 13:50:41.187857 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b93b1e1b-72fa-443d-ba2c-e9c9920f918a-combined-ca-bundle\") pod \"neutron-584b4bc589-6qnkd\" (UID: \"b93b1e1b-72fa-443d-ba2c-e9c9920f918a\") " pod="openstack/neutron-584b4bc589-6qnkd" Jan 09 13:50:41 crc kubenswrapper[4919]: I0109 13:50:41.187938 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b93b1e1b-72fa-443d-ba2c-e9c9920f918a-config\") pod \"neutron-584b4bc589-6qnkd\" (UID: \"b93b1e1b-72fa-443d-ba2c-e9c9920f918a\") " pod="openstack/neutron-584b4bc589-6qnkd" Jan 09 13:50:41 crc kubenswrapper[4919]: I0109 13:50:41.187964 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plrcw\" (UniqueName: \"kubernetes.io/projected/b93b1e1b-72fa-443d-ba2c-e9c9920f918a-kube-api-access-plrcw\") pod \"neutron-584b4bc589-6qnkd\" (UID: \"b93b1e1b-72fa-443d-ba2c-e9c9920f918a\") " pod="openstack/neutron-584b4bc589-6qnkd" Jan 09 13:50:41 crc kubenswrapper[4919]: I0109 13:50:41.192714 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 09 13:50:41 crc kubenswrapper[4919]: I0109 13:50:41.201344 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b93b1e1b-72fa-443d-ba2c-e9c9920f918a-combined-ca-bundle\") pod \"neutron-584b4bc589-6qnkd\" (UID: \"b93b1e1b-72fa-443d-ba2c-e9c9920f918a\") " pod="openstack/neutron-584b4bc589-6qnkd" Jan 09 13:50:41 crc kubenswrapper[4919]: I0109 13:50:41.201651 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b93b1e1b-72fa-443d-ba2c-e9c9920f918a-config\") pod \"neutron-584b4bc589-6qnkd\" (UID: \"b93b1e1b-72fa-443d-ba2c-e9c9920f918a\") " pod="openstack/neutron-584b4bc589-6qnkd" Jan 09 13:50:41 crc kubenswrapper[4919]: I0109 13:50:41.202161 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b93b1e1b-72fa-443d-ba2c-e9c9920f918a-public-tls-certs\") pod \"neutron-584b4bc589-6qnkd\" (UID: \"b93b1e1b-72fa-443d-ba2c-e9c9920f918a\") " pod="openstack/neutron-584b4bc589-6qnkd" Jan 09 13:50:41 crc kubenswrapper[4919]: I0109 13:50:41.203330 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b93b1e1b-72fa-443d-ba2c-e9c9920f918a-ovndb-tls-certs\") pod \"neutron-584b4bc589-6qnkd\" (UID: \"b93b1e1b-72fa-443d-ba2c-e9c9920f918a\") " pod="openstack/neutron-584b4bc589-6qnkd" Jan 09 13:50:41 crc kubenswrapper[4919]: I0109 13:50:41.203483 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b93b1e1b-72fa-443d-ba2c-e9c9920f918a-internal-tls-certs\") pod \"neutron-584b4bc589-6qnkd\" (UID: \"b93b1e1b-72fa-443d-ba2c-e9c9920f918a\") " pod="openstack/neutron-584b4bc589-6qnkd" Jan 09 13:50:41 crc kubenswrapper[4919]: I0109 13:50:41.209638 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b93b1e1b-72fa-443d-ba2c-e9c9920f918a-httpd-config\") pod \"neutron-584b4bc589-6qnkd\" (UID: \"b93b1e1b-72fa-443d-ba2c-e9c9920f918a\") " pod="openstack/neutron-584b4bc589-6qnkd" Jan 09 13:50:41 crc kubenswrapper[4919]: I0109 13:50:41.210822 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plrcw\" (UniqueName: \"kubernetes.io/projected/b93b1e1b-72fa-443d-ba2c-e9c9920f918a-kube-api-access-plrcw\") pod \"neutron-584b4bc589-6qnkd\" (UID: \"b93b1e1b-72fa-443d-ba2c-e9c9920f918a\") " pod="openstack/neutron-584b4bc589-6qnkd" Jan 09 13:50:41 crc kubenswrapper[4919]: I0109 13:50:41.264069 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-584b4bc589-6qnkd" Jan 09 13:50:41 crc kubenswrapper[4919]: I0109 13:50:41.702491 4919 generic.go:334] "Generic (PLEG): container finished" podID="a09b42c8-eca5-4951-a549-9730a79a7308" containerID="70864129ff88517425a277ca45d2d360b7f37b92b399309fd290b9487dc2a18f" exitCode=0 Jan 09 13:50:41 crc kubenswrapper[4919]: I0109 13:50:41.702994 4919 generic.go:334] "Generic (PLEG): container finished" podID="a09b42c8-eca5-4951-a549-9730a79a7308" containerID="93d24260279509ec55196c6a19f8e8819dbe60b58d5ae5e97ba98a102039680b" exitCode=143 Jan 09 13:50:41 crc kubenswrapper[4919]: I0109 13:50:41.703102 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a09b42c8-eca5-4951-a549-9730a79a7308","Type":"ContainerDied","Data":"70864129ff88517425a277ca45d2d360b7f37b92b399309fd290b9487dc2a18f"} Jan 09 13:50:41 crc kubenswrapper[4919]: I0109 13:50:41.703144 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a09b42c8-eca5-4951-a549-9730a79a7308","Type":"ContainerDied","Data":"93d24260279509ec55196c6a19f8e8819dbe60b58d5ae5e97ba98a102039680b"} Jan 09 13:50:41 crc kubenswrapper[4919]: I0109 13:50:41.742984 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-425dh" event={"ID":"93e28fcf-1c97-40cf-bcdc-d63d2af19499","Type":"ContainerStarted","Data":"a57cd495eb14623d3434d4a3a0e51585d8dd21fdd1d577a2c61487d78f1465a7"} Jan 09 13:50:41 crc kubenswrapper[4919]: I0109 13:50:41.787837 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685444497c-xp6jw" event={"ID":"4702a56c-301a-472f-b539-aa0873b1bdd1","Type":"ContainerStarted","Data":"3b9ff51c6109c1a844bb3b2a663511d2a48cb682c0273c30dbbdd1400699b566"} Jan 09 13:50:41 crc kubenswrapper[4919]: I0109 13:50:41.828426 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-685444497c-xp6jw" podStartSLOduration=3.828406963 podStartE2EDuration="3.828406963s" podCreationTimestamp="2026-01-09 13:50:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:50:41.817637316 +0000 UTC m=+1221.365476766" watchObservedRunningTime="2026-01-09 13:50:41.828406963 +0000 UTC m=+1221.376246413" Jan 09 13:50:42 crc kubenswrapper[4919]: I0109 13:50:42.024454 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 09 13:50:42 crc kubenswrapper[4919]: I0109 13:50:42.120469 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a09b42c8-eca5-4951-a549-9730a79a7308-config-data\") pod \"a09b42c8-eca5-4951-a549-9730a79a7308\" (UID: \"a09b42c8-eca5-4951-a549-9730a79a7308\") " Jan 09 13:50:42 crc kubenswrapper[4919]: I0109 13:50:42.120592 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a09b42c8-eca5-4951-a549-9730a79a7308-logs\") pod \"a09b42c8-eca5-4951-a549-9730a79a7308\" (UID: \"a09b42c8-eca5-4951-a549-9730a79a7308\") " Jan 09 13:50:42 crc kubenswrapper[4919]: I0109 13:50:42.120665 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a09b42c8-eca5-4951-a549-9730a79a7308-combined-ca-bundle\") pod \"a09b42c8-eca5-4951-a549-9730a79a7308\" (UID: \"a09b42c8-eca5-4951-a549-9730a79a7308\") " Jan 09 13:50:42 crc kubenswrapper[4919]: I0109 13:50:42.120759 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a09b42c8-eca5-4951-a549-9730a79a7308-scripts\") pod \"a09b42c8-eca5-4951-a549-9730a79a7308\" (UID: \"a09b42c8-eca5-4951-a549-9730a79a7308\") " Jan 09 13:50:42 crc kubenswrapper[4919]: I0109 13:50:42.120807 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"a09b42c8-eca5-4951-a549-9730a79a7308\" (UID: \"a09b42c8-eca5-4951-a549-9730a79a7308\") " Jan 09 13:50:42 crc kubenswrapper[4919]: I0109 13:50:42.120836 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kvkj\" (UniqueName: \"kubernetes.io/projected/a09b42c8-eca5-4951-a549-9730a79a7308-kube-api-access-4kvkj\") pod \"a09b42c8-eca5-4951-a549-9730a79a7308\" (UID: \"a09b42c8-eca5-4951-a549-9730a79a7308\") " Jan 09 13:50:42 crc kubenswrapper[4919]: I0109 13:50:42.120895 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a09b42c8-eca5-4951-a549-9730a79a7308-internal-tls-certs\") pod \"a09b42c8-eca5-4951-a549-9730a79a7308\" (UID: \"a09b42c8-eca5-4951-a549-9730a79a7308\") " Jan 09 13:50:42 crc kubenswrapper[4919]: I0109 13:50:42.120927 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a09b42c8-eca5-4951-a549-9730a79a7308-httpd-run\") pod \"a09b42c8-eca5-4951-a549-9730a79a7308\" (UID: \"a09b42c8-eca5-4951-a549-9730a79a7308\") " Jan 09 13:50:42 crc kubenswrapper[4919]: I0109 13:50:42.122273 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a09b42c8-eca5-4951-a549-9730a79a7308-logs" (OuterVolumeSpecName: "logs") pod "a09b42c8-eca5-4951-a549-9730a79a7308" (UID: "a09b42c8-eca5-4951-a549-9730a79a7308"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:50:42 crc kubenswrapper[4919]: I0109 13:50:42.122655 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a09b42c8-eca5-4951-a549-9730a79a7308-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a09b42c8-eca5-4951-a549-9730a79a7308" (UID: "a09b42c8-eca5-4951-a549-9730a79a7308"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:50:42 crc kubenswrapper[4919]: I0109 13:50:42.144262 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a09b42c8-eca5-4951-a549-9730a79a7308-kube-api-access-4kvkj" (OuterVolumeSpecName: "kube-api-access-4kvkj") pod "a09b42c8-eca5-4951-a549-9730a79a7308" (UID: "a09b42c8-eca5-4951-a549-9730a79a7308"). InnerVolumeSpecName "kube-api-access-4kvkj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:50:42 crc kubenswrapper[4919]: I0109 13:50:42.145265 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a09b42c8-eca5-4951-a549-9730a79a7308-scripts" (OuterVolumeSpecName: "scripts") pod "a09b42c8-eca5-4951-a549-9730a79a7308" (UID: "a09b42c8-eca5-4951-a549-9730a79a7308"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:50:42 crc kubenswrapper[4919]: I0109 13:50:42.161422 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 13:50:42 crc kubenswrapper[4919]: I0109 13:50:42.173577 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "a09b42c8-eca5-4951-a549-9730a79a7308" (UID: "a09b42c8-eca5-4951-a549-9730a79a7308"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 09 13:50:42 crc kubenswrapper[4919]: I0109 13:50:42.179455 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a09b42c8-eca5-4951-a549-9730a79a7308-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a09b42c8-eca5-4951-a549-9730a79a7308" (UID: "a09b42c8-eca5-4951-a549-9730a79a7308"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:50:42 crc kubenswrapper[4919]: I0109 13:50:42.213120 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a09b42c8-eca5-4951-a549-9730a79a7308-config-data" (OuterVolumeSpecName: "config-data") pod "a09b42c8-eca5-4951-a549-9730a79a7308" (UID: "a09b42c8-eca5-4951-a549-9730a79a7308"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:50:42 crc kubenswrapper[4919]: I0109 13:50:42.214404 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a09b42c8-eca5-4951-a549-9730a79a7308-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a09b42c8-eca5-4951-a549-9730a79a7308" (UID: "a09b42c8-eca5-4951-a549-9730a79a7308"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:50:42 crc kubenswrapper[4919]: I0109 13:50:42.223034 4919 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a09b42c8-eca5-4951-a549-9730a79a7308-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:42 crc kubenswrapper[4919]: I0109 13:50:42.223387 4919 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a09b42c8-eca5-4951-a549-9730a79a7308-logs\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:42 crc kubenswrapper[4919]: I0109 13:50:42.223402 4919 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a09b42c8-eca5-4951-a549-9730a79a7308-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:42 crc kubenswrapper[4919]: I0109 13:50:42.223416 4919 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a09b42c8-eca5-4951-a549-9730a79a7308-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:42 crc kubenswrapper[4919]: I0109 13:50:42.223457 4919 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 09 13:50:42 crc kubenswrapper[4919]: I0109 13:50:42.223471 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kvkj\" (UniqueName: \"kubernetes.io/projected/a09b42c8-eca5-4951-a549-9730a79a7308-kube-api-access-4kvkj\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:42 crc kubenswrapper[4919]: I0109 13:50:42.223482 4919 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a09b42c8-eca5-4951-a549-9730a79a7308-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:42 crc kubenswrapper[4919]: I0109 13:50:42.223493 4919 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a09b42c8-eca5-4951-a549-9730a79a7308-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:42 crc kubenswrapper[4919]: I0109 13:50:42.264061 4919 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 09 13:50:42 crc kubenswrapper[4919]: I0109 13:50:42.325575 4919 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:42 crc kubenswrapper[4919]: I0109 13:50:42.765812 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25d27694-1c75-4c18-9d9d-2e766852453f" path="/var/lib/kubelet/pods/25d27694-1c75-4c18-9d9d-2e766852453f/volumes" Jan 09 13:50:42 crc kubenswrapper[4919]: I0109 13:50:42.806843 4919 generic.go:334] "Generic (PLEG): container finished" podID="fd2e6850-0b12-460b-9da8-56a74f4324f3" containerID="b0ef61d3089ead87370e9d64df135e8ce258018b5271a18bbbf3ff9b807454b1" exitCode=0 Jan 09 13:50:42 crc kubenswrapper[4919]: I0109 13:50:42.806931 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qvmd7" event={"ID":"fd2e6850-0b12-460b-9da8-56a74f4324f3","Type":"ContainerDied","Data":"b0ef61d3089ead87370e9d64df135e8ce258018b5271a18bbbf3ff9b807454b1"} Jan 09 13:50:42 crc kubenswrapper[4919]: I0109 13:50:42.814118 4919 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/glance-default-external-api-0" event={"ID":"0d3d016b-608b-4a81-aeae-7b1e4c75d893","Type":"ContainerStarted","Data":"c97efa3ef4fc1716baf90c8bc69bea0e368b28aca0b217281ba6b6849c81ab3e"} Jan 09 13:50:42 crc kubenswrapper[4919]: I0109 13:50:42.819349 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a09b42c8-eca5-4951-a549-9730a79a7308","Type":"ContainerDied","Data":"4c84639263e9f4e93e85d9e70943193712f2bfe3faf9d207245dfbc541614041"} Jan 09 13:50:42 crc kubenswrapper[4919]: I0109 13:50:42.819421 4919 scope.go:117] "RemoveContainer" containerID="70864129ff88517425a277ca45d2d360b7f37b92b399309fd290b9487dc2a18f" Jan 09 13:50:42 crc kubenswrapper[4919]: I0109 13:50:42.819454 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 09 13:50:42 crc kubenswrapper[4919]: I0109 13:50:42.819526 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-685444497c-xp6jw" Jan 09 13:50:42 crc kubenswrapper[4919]: I0109 13:50:42.851395 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-425dh" podStartSLOduration=4.188125604 podStartE2EDuration="37.851175087s" podCreationTimestamp="2026-01-09 13:50:05 +0000 UTC" firstStartedPulling="2026-01-09 13:50:07.738643543 +0000 UTC m=+1187.286482993" lastFinishedPulling="2026-01-09 13:50:41.401693026 +0000 UTC m=+1220.949532476" observedRunningTime="2026-01-09 13:50:42.844723357 +0000 UTC m=+1222.392562807" watchObservedRunningTime="2026-01-09 13:50:42.851175087 +0000 UTC m=+1222.399014527" Jan 09 13:50:42 crc kubenswrapper[4919]: I0109 13:50:42.866580 4919 scope.go:117] "RemoveContainer" containerID="93d24260279509ec55196c6a19f8e8819dbe60b58d5ae5e97ba98a102039680b" Jan 09 13:50:42 crc kubenswrapper[4919]: I0109 13:50:42.889741 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 13:50:42 crc kubenswrapper[4919]: I0109 13:50:42.902429 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 13:50:42 crc kubenswrapper[4919]: I0109 13:50:42.938293 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 13:50:42 crc kubenswrapper[4919]: E0109 13:50:42.938853 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a09b42c8-eca5-4951-a549-9730a79a7308" containerName="glance-httpd" Jan 09 13:50:42 crc kubenswrapper[4919]: I0109 13:50:42.938878 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="a09b42c8-eca5-4951-a549-9730a79a7308" containerName="glance-httpd" Jan 09 13:50:42 crc kubenswrapper[4919]: E0109 13:50:42.938915 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a09b42c8-eca5-4951-a549-9730a79a7308" containerName="glance-log" Jan 09 13:50:42 crc kubenswrapper[4919]: I0109 13:50:42.938923 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="a09b42c8-eca5-4951-a549-9730a79a7308" containerName="glance-log" Jan 09 13:50:42 crc kubenswrapper[4919]: I0109 13:50:42.939227 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="a09b42c8-eca5-4951-a549-9730a79a7308" containerName="glance-httpd" Jan 09 13:50:42 crc kubenswrapper[4919]: I0109 13:50:42.939249 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="a09b42c8-eca5-4951-a549-9730a79a7308" containerName="glance-log" Jan 09 13:50:42 crc 
kubenswrapper[4919]: I0109 13:50:42.940571 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 09 13:50:42 crc kubenswrapper[4919]: I0109 13:50:42.947107 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 09 13:50:42 crc kubenswrapper[4919]: I0109 13:50:42.947354 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 09 13:50:42 crc kubenswrapper[4919]: I0109 13:50:42.962293 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 13:50:43 crc kubenswrapper[4919]: I0109 13:50:43.050514 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fce892da-35ae-4435-a61a-1ee629ddb17e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fce892da-35ae-4435-a61a-1ee629ddb17e\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:43 crc kubenswrapper[4919]: I0109 13:50:43.050570 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"fce892da-35ae-4435-a61a-1ee629ddb17e\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:43 crc kubenswrapper[4919]: I0109 13:50:43.050609 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b77qm\" (UniqueName: \"kubernetes.io/projected/fce892da-35ae-4435-a61a-1ee629ddb17e-kube-api-access-b77qm\") pod \"glance-default-internal-api-0\" (UID: \"fce892da-35ae-4435-a61a-1ee629ddb17e\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:43 crc kubenswrapper[4919]: I0109 13:50:43.050642 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fce892da-35ae-4435-a61a-1ee629ddb17e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"fce892da-35ae-4435-a61a-1ee629ddb17e\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:43 crc kubenswrapper[4919]: I0109 13:50:43.050670 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fce892da-35ae-4435-a61a-1ee629ddb17e-logs\") pod \"glance-default-internal-api-0\" (UID: \"fce892da-35ae-4435-a61a-1ee629ddb17e\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:43 crc kubenswrapper[4919]: I0109 13:50:43.050722 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fce892da-35ae-4435-a61a-1ee629ddb17e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fce892da-35ae-4435-a61a-1ee629ddb17e\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:43 crc kubenswrapper[4919]: I0109 13:50:43.050743 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fce892da-35ae-4435-a61a-1ee629ddb17e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fce892da-35ae-4435-a61a-1ee629ddb17e\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:43 crc kubenswrapper[4919]: 
I0109 13:50:43.050825 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fce892da-35ae-4435-a61a-1ee629ddb17e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fce892da-35ae-4435-a61a-1ee629ddb17e\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:43 crc kubenswrapper[4919]: I0109 13:50:43.152280 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fce892da-35ae-4435-a61a-1ee629ddb17e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fce892da-35ae-4435-a61a-1ee629ddb17e\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:43 crc kubenswrapper[4919]: I0109 13:50:43.152329 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fce892da-35ae-4435-a61a-1ee629ddb17e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fce892da-35ae-4435-a61a-1ee629ddb17e\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:43 crc kubenswrapper[4919]: I0109 13:50:43.152404 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fce892da-35ae-4435-a61a-1ee629ddb17e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fce892da-35ae-4435-a61a-1ee629ddb17e\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:43 crc kubenswrapper[4919]: I0109 13:50:43.152432 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fce892da-35ae-4435-a61a-1ee629ddb17e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fce892da-35ae-4435-a61a-1ee629ddb17e\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:43 crc kubenswrapper[4919]: I0109 13:50:43.152453 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"fce892da-35ae-4435-a61a-1ee629ddb17e\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:43 crc kubenswrapper[4919]: I0109 13:50:43.152879 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fce892da-35ae-4435-a61a-1ee629ddb17e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fce892da-35ae-4435-a61a-1ee629ddb17e\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:43 crc kubenswrapper[4919]: I0109 13:50:43.153828 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b77qm\" (UniqueName: \"kubernetes.io/projected/fce892da-35ae-4435-a61a-1ee629ddb17e-kube-api-access-b77qm\") pod \"glance-default-internal-api-0\" (UID: \"fce892da-35ae-4435-a61a-1ee629ddb17e\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:43 crc kubenswrapper[4919]: I0109 13:50:43.153863 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fce892da-35ae-4435-a61a-1ee629ddb17e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"fce892da-35ae-4435-a61a-1ee629ddb17e\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:43 crc kubenswrapper[4919]: I0109 13:50:43.153904 4919 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fce892da-35ae-4435-a61a-1ee629ddb17e-logs\") pod \"glance-default-internal-api-0\" (UID: \"fce892da-35ae-4435-a61a-1ee629ddb17e\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:43 crc kubenswrapper[4919]: I0109 13:50:43.154340 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fce892da-35ae-4435-a61a-1ee629ddb17e-logs\") pod \"glance-default-internal-api-0\" (UID: \"fce892da-35ae-4435-a61a-1ee629ddb17e\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:43 crc kubenswrapper[4919]: I0109 13:50:43.154527 4919 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"fce892da-35ae-4435-a61a-1ee629ddb17e\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Jan 09 13:50:43 crc kubenswrapper[4919]: I0109 13:50:43.176047 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fce892da-35ae-4435-a61a-1ee629ddb17e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fce892da-35ae-4435-a61a-1ee629ddb17e\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:43 crc kubenswrapper[4919]: I0109 13:50:43.181619 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fce892da-35ae-4435-a61a-1ee629ddb17e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fce892da-35ae-4435-a61a-1ee629ddb17e\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:43 crc kubenswrapper[4919]: I0109 13:50:43.185528 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fce892da-35ae-4435-a61a-1ee629ddb17e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"fce892da-35ae-4435-a61a-1ee629ddb17e\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:43 crc kubenswrapper[4919]: I0109 13:50:43.194995 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b77qm\" (UniqueName: \"kubernetes.io/projected/fce892da-35ae-4435-a61a-1ee629ddb17e-kube-api-access-b77qm\") pod \"glance-default-internal-api-0\" (UID: \"fce892da-35ae-4435-a61a-1ee629ddb17e\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:43 crc kubenswrapper[4919]: I0109 13:50:43.209069 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fce892da-35ae-4435-a61a-1ee629ddb17e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fce892da-35ae-4435-a61a-1ee629ddb17e\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:43 crc kubenswrapper[4919]: I0109 13:50:43.222069 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"fce892da-35ae-4435-a61a-1ee629ddb17e\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:50:43 crc kubenswrapper[4919]: I0109 13:50:43.257439 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 09 13:50:43 crc kubenswrapper[4919]: I0109 13:50:43.279547 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-584b4bc589-6qnkd"] Jan 09 13:50:43 crc kubenswrapper[4919]: W0109 13:50:43.286128 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb93b1e1b_72fa_443d_ba2c_e9c9920f918a.slice/crio-1897621b067b475fd9e37a981702130a8351a398df9cf23da44c642e8a762bc0 WatchSource:0}: Error finding container 1897621b067b475fd9e37a981702130a8351a398df9cf23da44c642e8a762bc0: Status 404 returned error can't find the container with id 1897621b067b475fd9e37a981702130a8351a398df9cf23da44c642e8a762bc0 Jan 09 13:50:43 crc kubenswrapper[4919]: I0109 13:50:43.884363 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0d3d016b-608b-4a81-aeae-7b1e4c75d893","Type":"ContainerStarted","Data":"eb51259beedb45deb5ca0242a533d41756213c64df04e453bf556b670c3c7c68"} Jan 09 13:50:43 crc kubenswrapper[4919]: I0109 13:50:43.890664 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-584b4bc589-6qnkd" event={"ID":"b93b1e1b-72fa-443d-ba2c-e9c9920f918a","Type":"ContainerStarted","Data":"2fdd5ce500d8232810f847fe577d25f96c86005cb74d15c495e238464eac2a87"} Jan 09 13:50:43 crc kubenswrapper[4919]: I0109 13:50:43.890730 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-584b4bc589-6qnkd" event={"ID":"b93b1e1b-72fa-443d-ba2c-e9c9920f918a","Type":"ContainerStarted","Data":"1897621b067b475fd9e37a981702130a8351a398df9cf23da44c642e8a762bc0"} Jan 09 13:50:43 crc kubenswrapper[4919]: I0109 13:50:43.976942 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 13:50:44 crc kubenswrapper[4919]: I0109 13:50:44.256097 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-qvmd7" Jan 09 13:50:44 crc kubenswrapper[4919]: I0109 13:50:44.337743 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7bdd978ccd-tx6fx" Jan 09 13:50:44 crc kubenswrapper[4919]: I0109 13:50:44.337793 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7bdd978ccd-tx6fx" Jan 09 13:50:44 crc kubenswrapper[4919]: I0109 13:50:44.378285 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fd2e6850-0b12-460b-9da8-56a74f4324f3-fernet-keys\") pod \"fd2e6850-0b12-460b-9da8-56a74f4324f3\" (UID: \"fd2e6850-0b12-460b-9da8-56a74f4324f3\") " Jan 09 13:50:44 crc kubenswrapper[4919]: I0109 13:50:44.378443 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd2e6850-0b12-460b-9da8-56a74f4324f3-scripts\") pod \"fd2e6850-0b12-460b-9da8-56a74f4324f3\" (UID: \"fd2e6850-0b12-460b-9da8-56a74f4324f3\") " Jan 09 13:50:44 crc kubenswrapper[4919]: I0109 13:50:44.378489 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gw429\" (UniqueName: \"kubernetes.io/projected/fd2e6850-0b12-460b-9da8-56a74f4324f3-kube-api-access-gw429\") pod \"fd2e6850-0b12-460b-9da8-56a74f4324f3\" (UID: \"fd2e6850-0b12-460b-9da8-56a74f4324f3\") " Jan 09 13:50:44 crc kubenswrapper[4919]: I0109 13:50:44.378544 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fd2e6850-0b12-460b-9da8-56a74f4324f3-credential-keys\") pod \"fd2e6850-0b12-460b-9da8-56a74f4324f3\" (UID: \"fd2e6850-0b12-460b-9da8-56a74f4324f3\") " Jan 09 13:50:44 crc kubenswrapper[4919]: I0109 13:50:44.378599 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd2e6850-0b12-460b-9da8-56a74f4324f3-combined-ca-bundle\") pod \"fd2e6850-0b12-460b-9da8-56a74f4324f3\" (UID: \"fd2e6850-0b12-460b-9da8-56a74f4324f3\") " Jan 09 13:50:44 crc kubenswrapper[4919]: I0109 13:50:44.378657 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd2e6850-0b12-460b-9da8-56a74f4324f3-config-data\") pod \"fd2e6850-0b12-460b-9da8-56a74f4324f3\" (UID: \"fd2e6850-0b12-460b-9da8-56a74f4324f3\") " Jan 09 13:50:44 crc kubenswrapper[4919]: I0109 13:50:44.384165 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd2e6850-0b12-460b-9da8-56a74f4324f3-scripts" (OuterVolumeSpecName: "scripts") pod "fd2e6850-0b12-460b-9da8-56a74f4324f3" (UID: "fd2e6850-0b12-460b-9da8-56a74f4324f3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:50:44 crc kubenswrapper[4919]: I0109 13:50:44.385050 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd2e6850-0b12-460b-9da8-56a74f4324f3-kube-api-access-gw429" (OuterVolumeSpecName: "kube-api-access-gw429") pod "fd2e6850-0b12-460b-9da8-56a74f4324f3" (UID: "fd2e6850-0b12-460b-9da8-56a74f4324f3"). InnerVolumeSpecName "kube-api-access-gw429". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:50:44 crc kubenswrapper[4919]: I0109 13:50:44.386352 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd2e6850-0b12-460b-9da8-56a74f4324f3-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "fd2e6850-0b12-460b-9da8-56a74f4324f3" (UID: "fd2e6850-0b12-460b-9da8-56a74f4324f3"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:50:44 crc kubenswrapper[4919]: I0109 13:50:44.386514 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd2e6850-0b12-460b-9da8-56a74f4324f3-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "fd2e6850-0b12-460b-9da8-56a74f4324f3" (UID: "fd2e6850-0b12-460b-9da8-56a74f4324f3"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:50:44 crc kubenswrapper[4919]: I0109 13:50:44.419278 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd2e6850-0b12-460b-9da8-56a74f4324f3-config-data" (OuterVolumeSpecName: "config-data") pod "fd2e6850-0b12-460b-9da8-56a74f4324f3" (UID: "fd2e6850-0b12-460b-9da8-56a74f4324f3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:50:44 crc kubenswrapper[4919]: I0109 13:50:44.422506 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd2e6850-0b12-460b-9da8-56a74f4324f3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd2e6850-0b12-460b-9da8-56a74f4324f3" (UID: "fd2e6850-0b12-460b-9da8-56a74f4324f3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:50:44 crc kubenswrapper[4919]: I0109 13:50:44.481262 4919 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd2e6850-0b12-460b-9da8-56a74f4324f3-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:44 crc kubenswrapper[4919]: I0109 13:50:44.481338 4919 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fd2e6850-0b12-460b-9da8-56a74f4324f3-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:44 crc kubenswrapper[4919]: I0109 13:50:44.481350 4919 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd2e6850-0b12-460b-9da8-56a74f4324f3-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:44 crc kubenswrapper[4919]: I0109 13:50:44.481363 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gw429\" (UniqueName: \"kubernetes.io/projected/fd2e6850-0b12-460b-9da8-56a74f4324f3-kube-api-access-gw429\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:44 crc kubenswrapper[4919]: I0109 13:50:44.481376 4919 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fd2e6850-0b12-460b-9da8-56a74f4324f3-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:44 crc kubenswrapper[4919]: I0109 13:50:44.481387 4919 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd2e6850-0b12-460b-9da8-56a74f4324f3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 13:50:44 crc kubenswrapper[4919]: I0109 13:50:44.531770 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/horizon-75dd96cc4d-xnspb" Jan 09 13:50:44 crc kubenswrapper[4919]: I0109 13:50:44.531819 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-75dd96cc4d-xnspb" Jan 09 13:50:44 crc kubenswrapper[4919]: I0109 13:50:44.768861 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a09b42c8-eca5-4951-a549-9730a79a7308" path="/var/lib/kubelet/pods/a09b42c8-eca5-4951-a549-9730a79a7308/volumes" Jan 09 13:50:44 crc kubenswrapper[4919]: I0109 13:50:44.904473 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6575bd5545-2lr88"] Jan 09 13:50:44 crc kubenswrapper[4919]: E0109 13:50:44.904942 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd2e6850-0b12-460b-9da8-56a74f4324f3" containerName="keystone-bootstrap" Jan 09 13:50:44 crc kubenswrapper[4919]: I0109 13:50:44.904959 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd2e6850-0b12-460b-9da8-56a74f4324f3" containerName="keystone-bootstrap" Jan 09 13:50:44 crc kubenswrapper[4919]: I0109 13:50:44.905282 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd2e6850-0b12-460b-9da8-56a74f4324f3" containerName="keystone-bootstrap" Jan 09 13:50:44 crc kubenswrapper[4919]: I0109 13:50:44.906000 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6575bd5545-2lr88" Jan 09 13:50:44 crc kubenswrapper[4919]: I0109 13:50:44.906004 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0d3d016b-608b-4a81-aeae-7b1e4c75d893","Type":"ContainerStarted","Data":"1b753bd12d8bf0c44d1d07bd89c93fd795406af0825f1813759a2d127f695b90"} Jan 09 13:50:44 crc kubenswrapper[4919]: I0109 13:50:44.909233 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 09 13:50:44 crc kubenswrapper[4919]: I0109 13:50:44.909366 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 09 13:50:44 crc kubenswrapper[4919]: I0109 13:50:44.910493 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-584b4bc589-6qnkd" event={"ID":"b93b1e1b-72fa-443d-ba2c-e9c9920f918a","Type":"ContainerStarted","Data":"dd03d46a4a4615b68f2305c4306f94757758d2eb373ee0cf6e7ae4a1772ce7d7"} Jan 09 13:50:44 crc kubenswrapper[4919]: I0109 13:50:44.910916 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-584b4bc589-6qnkd" Jan 09 13:50:44 crc kubenswrapper[4919]: I0109 13:50:44.922720 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6575bd5545-2lr88"] Jan 09 13:50:44 crc kubenswrapper[4919]: I0109 13:50:44.964180 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-qvmd7"
Jan 09 13:50:44 crc kubenswrapper[4919]: I0109 13:50:44.964325 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qvmd7" event={"ID":"fd2e6850-0b12-460b-9da8-56a74f4324f3","Type":"ContainerDied","Data":"d14ceabb256c9b3740a267550fcecee03a7727fc28fec8207b012ab3266ce368"}
Jan 09 13:50:44 crc kubenswrapper[4919]: I0109 13:50:44.964350 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d14ceabb256c9b3740a267550fcecee03a7727fc28fec8207b012ab3266ce368"
Jan 09 13:50:44 crc kubenswrapper[4919]: I0109 13:50:44.983811 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.9837859909999995 podStartE2EDuration="4.983785991s" podCreationTimestamp="2026-01-09 13:50:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:50:44.965470316 +0000 UTC m=+1224.513309766" watchObservedRunningTime="2026-01-09 13:50:44.983785991 +0000 UTC m=+1224.531625441"
Jan 09 13:50:44 crc kubenswrapper[4919]: I0109 13:50:44.986363 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fce892da-35ae-4435-a61a-1ee629ddb17e","Type":"ContainerStarted","Data":"fe8c4fc7fb1fad73e1ffb857ad738222ef55320b96c9df8a004ade44ddebb4b0"}
Jan 09 13:50:44 crc kubenswrapper[4919]: I0109 13:50:44.986420 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fce892da-35ae-4435-a61a-1ee629ddb17e","Type":"ContainerStarted","Data":"96625b7a7e4723f3f48f07ea5b9479f12d3635edcc195c0dc153596f4276cf81"}
Jan 09 13:50:44 crc kubenswrapper[4919]: I0109 13:50:44.990130 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22246922-04ad-4013-a96a-71e00093dbed-scripts\") pod \"keystone-6575bd5545-2lr88\" (UID: \"22246922-04ad-4013-a96a-71e00093dbed\") " pod="openstack/keystone-6575bd5545-2lr88"
Jan 09 13:50:44 crc kubenswrapper[4919]: I0109 13:50:44.990197 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/22246922-04ad-4013-a96a-71e00093dbed-fernet-keys\") pod \"keystone-6575bd5545-2lr88\" (UID: \"22246922-04ad-4013-a96a-71e00093dbed\") " pod="openstack/keystone-6575bd5545-2lr88"
Jan 09 13:50:44 crc kubenswrapper[4919]: I0109 13:50:44.990262 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/22246922-04ad-4013-a96a-71e00093dbed-credential-keys\") pod \"keystone-6575bd5545-2lr88\" (UID: \"22246922-04ad-4013-a96a-71e00093dbed\") " pod="openstack/keystone-6575bd5545-2lr88"
Jan 09 13:50:44 crc kubenswrapper[4919]: I0109 13:50:44.990412 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22246922-04ad-4013-a96a-71e00093dbed-combined-ca-bundle\") pod \"keystone-6575bd5545-2lr88\" (UID: \"22246922-04ad-4013-a96a-71e00093dbed\") " pod="openstack/keystone-6575bd5545-2lr88"
Jan 09 13:50:44 crc kubenswrapper[4919]: I0109 13:50:44.990507 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/22246922-04ad-4013-a96a-71e00093dbed-internal-tls-certs\") pod \"keystone-6575bd5545-2lr88\" (UID: \"22246922-04ad-4013-a96a-71e00093dbed\") " pod="openstack/keystone-6575bd5545-2lr88"
Jan 09 13:50:44 crc kubenswrapper[4919]: I0109 13:50:44.990557 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/22246922-04ad-4013-a96a-71e00093dbed-public-tls-certs\") pod \"keystone-6575bd5545-2lr88\" (UID: \"22246922-04ad-4013-a96a-71e00093dbed\") " pod="openstack/keystone-6575bd5545-2lr88"
Jan 09 13:50:44 crc kubenswrapper[4919]: I0109 13:50:44.990581 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22246922-04ad-4013-a96a-71e00093dbed-config-data\") pod \"keystone-6575bd5545-2lr88\" (UID: \"22246922-04ad-4013-a96a-71e00093dbed\") " pod="openstack/keystone-6575bd5545-2lr88"
Jan 09 13:50:44 crc kubenswrapper[4919]: I0109 13:50:44.990694 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wfbx\" (UniqueName: \"kubernetes.io/projected/22246922-04ad-4013-a96a-71e00093dbed-kube-api-access-5wfbx\") pod \"keystone-6575bd5545-2lr88\" (UID: \"22246922-04ad-4013-a96a-71e00093dbed\") " pod="openstack/keystone-6575bd5545-2lr88"
Jan 09 13:50:45 crc kubenswrapper[4919]: I0109 13:50:45.043680 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-584b4bc589-6qnkd" podStartSLOduration=5.043630299 podStartE2EDuration="5.043630299s" podCreationTimestamp="2026-01-09 13:50:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:50:45.037194549 +0000 UTC m=+1224.585034009" watchObservedRunningTime="2026-01-09 13:50:45.043630299 +0000 UTC m=+1224.591469749"
Jan 09 13:50:45 crc kubenswrapper[4919]: I0109 13:50:45.094519 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22246922-04ad-4013-a96a-71e00093dbed-scripts\") pod \"keystone-6575bd5545-2lr88\" (UID: \"22246922-04ad-4013-a96a-71e00093dbed\") " pod="openstack/keystone-6575bd5545-2lr88"
Jan 09 13:50:45 crc kubenswrapper[4919]: I0109 13:50:45.094581 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/22246922-04ad-4013-a96a-71e00093dbed-fernet-keys\") pod \"keystone-6575bd5545-2lr88\" (UID: \"22246922-04ad-4013-a96a-71e00093dbed\") " pod="openstack/keystone-6575bd5545-2lr88"
Jan 09 13:50:45 crc kubenswrapper[4919]: I0109 13:50:45.094640 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/22246922-04ad-4013-a96a-71e00093dbed-credential-keys\") pod \"keystone-6575bd5545-2lr88\" (UID: \"22246922-04ad-4013-a96a-71e00093dbed\") " pod="openstack/keystone-6575bd5545-2lr88"
Jan 09 13:50:45 crc kubenswrapper[4919]: I0109 13:50:45.094775 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22246922-04ad-4013-a96a-71e00093dbed-combined-ca-bundle\") pod \"keystone-6575bd5545-2lr88\" (UID: \"22246922-04ad-4013-a96a-71e00093dbed\") " pod="openstack/keystone-6575bd5545-2lr88"
Jan 09 13:50:45 crc kubenswrapper[4919]: I0109 13:50:45.095382 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/22246922-04ad-4013-a96a-71e00093dbed-internal-tls-certs\") pod \"keystone-6575bd5545-2lr88\" (UID: \"22246922-04ad-4013-a96a-71e00093dbed\") " pod="openstack/keystone-6575bd5545-2lr88"
Jan 09 13:50:45 crc kubenswrapper[4919]: I0109 13:50:45.095448 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/22246922-04ad-4013-a96a-71e00093dbed-public-tls-certs\") pod \"keystone-6575bd5545-2lr88\" (UID: \"22246922-04ad-4013-a96a-71e00093dbed\") " pod="openstack/keystone-6575bd5545-2lr88"
Jan 09 13:50:45 crc kubenswrapper[4919]: I0109 13:50:45.095486 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22246922-04ad-4013-a96a-71e00093dbed-config-data\") pod \"keystone-6575bd5545-2lr88\" (UID: \"22246922-04ad-4013-a96a-71e00093dbed\") " pod="openstack/keystone-6575bd5545-2lr88"
Jan 09 13:50:45 crc kubenswrapper[4919]: I0109 13:50:45.095559 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wfbx\" (UniqueName: \"kubernetes.io/projected/22246922-04ad-4013-a96a-71e00093dbed-kube-api-access-5wfbx\") pod \"keystone-6575bd5545-2lr88\" (UID: \"22246922-04ad-4013-a96a-71e00093dbed\") " pod="openstack/keystone-6575bd5545-2lr88"
Jan 09 13:50:45 crc kubenswrapper[4919]: I0109 13:50:45.103041 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22246922-04ad-4013-a96a-71e00093dbed-combined-ca-bundle\") pod \"keystone-6575bd5545-2lr88\" (UID: \"22246922-04ad-4013-a96a-71e00093dbed\") " pod="openstack/keystone-6575bd5545-2lr88"
Jan 09 13:50:45 crc kubenswrapper[4919]: I0109 13:50:45.104858 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/22246922-04ad-4013-a96a-71e00093dbed-fernet-keys\") pod \"keystone-6575bd5545-2lr88\" (UID: \"22246922-04ad-4013-a96a-71e00093dbed\") " pod="openstack/keystone-6575bd5545-2lr88"
Jan 09 13:50:45 crc kubenswrapper[4919]: I0109 13:50:45.106393 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/22246922-04ad-4013-a96a-71e00093dbed-internal-tls-certs\") pod \"keystone-6575bd5545-2lr88\" (UID: \"22246922-04ad-4013-a96a-71e00093dbed\") " pod="openstack/keystone-6575bd5545-2lr88"
Jan 09 13:50:45 crc kubenswrapper[4919]: I0109 13:50:45.106686 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22246922-04ad-4013-a96a-71e00093dbed-scripts\") pod \"keystone-6575bd5545-2lr88\" (UID: \"22246922-04ad-4013-a96a-71e00093dbed\") " pod="openstack/keystone-6575bd5545-2lr88"
Jan 09 13:50:45 crc kubenswrapper[4919]: I0109 13:50:45.106973 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/22246922-04ad-4013-a96a-71e00093dbed-credential-keys\") pod \"keystone-6575bd5545-2lr88\" (UID: \"22246922-04ad-4013-a96a-71e00093dbed\") " pod="openstack/keystone-6575bd5545-2lr88"
Jan 09 13:50:45 crc kubenswrapper[4919]: I0109 13:50:45.108875 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/22246922-04ad-4013-a96a-71e00093dbed-public-tls-certs\") pod \"keystone-6575bd5545-2lr88\" (UID: \"22246922-04ad-4013-a96a-71e00093dbed\") " pod="openstack/keystone-6575bd5545-2lr88"
Jan 09 13:50:45 crc kubenswrapper[4919]: I0109 13:50:45.109937 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22246922-04ad-4013-a96a-71e00093dbed-config-data\") pod \"keystone-6575bd5545-2lr88\" (UID: \"22246922-04ad-4013-a96a-71e00093dbed\") " pod="openstack/keystone-6575bd5545-2lr88"
Jan 09 13:50:45 crc kubenswrapper[4919]: I0109 13:50:45.115789 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wfbx\" (UniqueName: \"kubernetes.io/projected/22246922-04ad-4013-a96a-71e00093dbed-kube-api-access-5wfbx\") pod \"keystone-6575bd5545-2lr88\" (UID: \"22246922-04ad-4013-a96a-71e00093dbed\") " pod="openstack/keystone-6575bd5545-2lr88"
Jan 09 13:50:45 crc kubenswrapper[4919]: I0109 13:50:45.280431 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6575bd5545-2lr88"
Jan 09 13:50:48 crc kubenswrapper[4919]: I0109 13:50:48.712427 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-685444497c-xp6jw"
Jan 09 13:50:48 crc kubenswrapper[4919]: I0109 13:50:48.797629 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-cmf6h"]
Jan 09 13:50:48 crc kubenswrapper[4919]: I0109 13:50:48.798479 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6f6f8cb849-cmf6h" podUID="82036277-9b0e-4efd-8da5-9463b9998096" containerName="dnsmasq-dns" containerID="cri-o://51e6c8a8c3e973d2fb93f48fb04960f0d0478dc40a06c0f4a87fe90de4097611" gracePeriod=10
Jan 09 13:50:49 crc kubenswrapper[4919]: I0109 13:50:49.067475 4919 generic.go:334] "Generic (PLEG): container finished" podID="82036277-9b0e-4efd-8da5-9463b9998096" containerID="51e6c8a8c3e973d2fb93f48fb04960f0d0478dc40a06c0f4a87fe90de4097611" exitCode=0
Jan 09 13:50:49 crc kubenswrapper[4919]: I0109 13:50:49.067527 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f8cb849-cmf6h" event={"ID":"82036277-9b0e-4efd-8da5-9463b9998096","Type":"ContainerDied","Data":"51e6c8a8c3e973d2fb93f48fb04960f0d0478dc40a06c0f4a87fe90de4097611"}
Jan 09 13:50:50 crc kubenswrapper[4919]: I0109 13:50:50.076322 4919 generic.go:334] "Generic (PLEG): container finished" podID="93e28fcf-1c97-40cf-bcdc-d63d2af19499" containerID="a57cd495eb14623d3434d4a3a0e51585d8dd21fdd1d577a2c61487d78f1465a7" exitCode=0
Jan 09 13:50:50 crc kubenswrapper[4919]: I0109 13:50:50.076435 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-425dh" event={"ID":"93e28fcf-1c97-40cf-bcdc-d63d2af19499","Type":"ContainerDied","Data":"a57cd495eb14623d3434d4a3a0e51585d8dd21fdd1d577a2c61487d78f1465a7"}
Jan 09 13:50:50 crc kubenswrapper[4919]: I0109 13:50:50.909630 4919 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6f6f8cb849-cmf6h" podUID="82036277-9b0e-4efd-8da5-9463b9998096" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.143:5353: connect: connection refused"
Jan 09 13:50:51 crc kubenswrapper[4919]: I0109 13:50:51.194440 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 09 13:50:51 crc kubenswrapper[4919]: I0109 13:50:51.194509 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 09 13:50:51 crc kubenswrapper[4919]: I0109 13:50:51.242107 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 09 13:50:51 crc kubenswrapper[4919]: I0109 13:50:51.243738 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 09 13:50:51 crc kubenswrapper[4919]: I0109 13:50:51.730133 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-425dh"
Jan 09 13:50:51 crc kubenswrapper[4919]: I0109 13:50:51.735403 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93e28fcf-1c97-40cf-bcdc-d63d2af19499-combined-ca-bundle\") pod \"93e28fcf-1c97-40cf-bcdc-d63d2af19499\" (UID: \"93e28fcf-1c97-40cf-bcdc-d63d2af19499\") "
Jan 09 13:50:51 crc kubenswrapper[4919]: I0109 13:50:51.735478 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93e28fcf-1c97-40cf-bcdc-d63d2af19499-scripts\") pod \"93e28fcf-1c97-40cf-bcdc-d63d2af19499\" (UID: \"93e28fcf-1c97-40cf-bcdc-d63d2af19499\") "
Jan 09 13:50:51 crc kubenswrapper[4919]: I0109 13:50:51.735510 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93e28fcf-1c97-40cf-bcdc-d63d2af19499-config-data\") pod \"93e28fcf-1c97-40cf-bcdc-d63d2af19499\" (UID: \"93e28fcf-1c97-40cf-bcdc-d63d2af19499\") "
Jan 09 13:50:51 crc kubenswrapper[4919]: I0109 13:50:51.735535 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93e28fcf-1c97-40cf-bcdc-d63d2af19499-logs\") pod \"93e28fcf-1c97-40cf-bcdc-d63d2af19499\" (UID: \"93e28fcf-1c97-40cf-bcdc-d63d2af19499\") "
Jan 09 13:50:51 crc kubenswrapper[4919]: I0109 13:50:51.735616 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njwpz\" (UniqueName: \"kubernetes.io/projected/93e28fcf-1c97-40cf-bcdc-d63d2af19499-kube-api-access-njwpz\") pod \"93e28fcf-1c97-40cf-bcdc-d63d2af19499\" (UID: \"93e28fcf-1c97-40cf-bcdc-d63d2af19499\") "
Jan 09 13:50:51 crc kubenswrapper[4919]: I0109 13:50:51.736447 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93e28fcf-1c97-40cf-bcdc-d63d2af19499-logs" (OuterVolumeSpecName: "logs") pod "93e28fcf-1c97-40cf-bcdc-d63d2af19499" (UID: "93e28fcf-1c97-40cf-bcdc-d63d2af19499"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 13:50:51 crc kubenswrapper[4919]: I0109 13:50:51.742736 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93e28fcf-1c97-40cf-bcdc-d63d2af19499-kube-api-access-njwpz" (OuterVolumeSpecName: "kube-api-access-njwpz") pod "93e28fcf-1c97-40cf-bcdc-d63d2af19499" (UID: "93e28fcf-1c97-40cf-bcdc-d63d2af19499"). InnerVolumeSpecName "kube-api-access-njwpz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 13:50:51 crc kubenswrapper[4919]: I0109 13:50:51.783156 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93e28fcf-1c97-40cf-bcdc-d63d2af19499-scripts" (OuterVolumeSpecName: "scripts") pod "93e28fcf-1c97-40cf-bcdc-d63d2af19499" (UID: "93e28fcf-1c97-40cf-bcdc-d63d2af19499"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 13:50:51 crc kubenswrapper[4919]: I0109 13:50:51.789586 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93e28fcf-1c97-40cf-bcdc-d63d2af19499-config-data" (OuterVolumeSpecName: "config-data") pod "93e28fcf-1c97-40cf-bcdc-d63d2af19499" (UID: "93e28fcf-1c97-40cf-bcdc-d63d2af19499"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 13:50:51 crc kubenswrapper[4919]: I0109 13:50:51.797392 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93e28fcf-1c97-40cf-bcdc-d63d2af19499-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "93e28fcf-1c97-40cf-bcdc-d63d2af19499" (UID: "93e28fcf-1c97-40cf-bcdc-d63d2af19499"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 13:50:51 crc kubenswrapper[4919]: I0109 13:50:51.841385 4919 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93e28fcf-1c97-40cf-bcdc-d63d2af19499-logs\") on node \"crc\" DevicePath \"\""
Jan 09 13:50:51 crc kubenswrapper[4919]: I0109 13:50:51.841417 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njwpz\" (UniqueName: \"kubernetes.io/projected/93e28fcf-1c97-40cf-bcdc-d63d2af19499-kube-api-access-njwpz\") on node \"crc\" DevicePath \"\""
Jan 09 13:50:51 crc kubenswrapper[4919]: I0109 13:50:51.841428 4919 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93e28fcf-1c97-40cf-bcdc-d63d2af19499-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 09 13:50:51 crc kubenswrapper[4919]: I0109 13:50:51.841439 4919 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93e28fcf-1c97-40cf-bcdc-d63d2af19499-scripts\") on node \"crc\" DevicePath \"\""
Jan 09 13:50:51 crc kubenswrapper[4919]: I0109 13:50:51.841448 4919 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93e28fcf-1c97-40cf-bcdc-d63d2af19499-config-data\") on node \"crc\" DevicePath \"\""
Jan 09 13:50:51 crc kubenswrapper[4919]: I0109 13:50:51.977554 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6f8cb849-cmf6h"
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.145860 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82036277-9b0e-4efd-8da5-9463b9998096-config\") pod \"82036277-9b0e-4efd-8da5-9463b9998096\" (UID: \"82036277-9b0e-4efd-8da5-9463b9998096\") "
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.146311 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82036277-9b0e-4efd-8da5-9463b9998096-ovsdbserver-sb\") pod \"82036277-9b0e-4efd-8da5-9463b9998096\" (UID: \"82036277-9b0e-4efd-8da5-9463b9998096\") "
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.146384 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82036277-9b0e-4efd-8da5-9463b9998096-dns-swift-storage-0\") pod \"82036277-9b0e-4efd-8da5-9463b9998096\" (UID: \"82036277-9b0e-4efd-8da5-9463b9998096\") "
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.146410 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82036277-9b0e-4efd-8da5-9463b9998096-ovsdbserver-nb\") pod \"82036277-9b0e-4efd-8da5-9463b9998096\" (UID: \"82036277-9b0e-4efd-8da5-9463b9998096\") "
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.146464 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82036277-9b0e-4efd-8da5-9463b9998096-dns-svc\") pod \"82036277-9b0e-4efd-8da5-9463b9998096\" (UID: \"82036277-9b0e-4efd-8da5-9463b9998096\") "
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.146614 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwwhf\" (UniqueName: \"kubernetes.io/projected/82036277-9b0e-4efd-8da5-9463b9998096-kube-api-access-gwwhf\") pod \"82036277-9b0e-4efd-8da5-9463b9998096\" (UID: \"82036277-9b0e-4efd-8da5-9463b9998096\") "
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.152933 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82036277-9b0e-4efd-8da5-9463b9998096-kube-api-access-gwwhf" (OuterVolumeSpecName: "kube-api-access-gwwhf") pod "82036277-9b0e-4efd-8da5-9463b9998096" (UID: "82036277-9b0e-4efd-8da5-9463b9998096"). InnerVolumeSpecName "kube-api-access-gwwhf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.203676 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-425dh" event={"ID":"93e28fcf-1c97-40cf-bcdc-d63d2af19499","Type":"ContainerDied","Data":"12e5aaa99e0efa533426908dca731dc7c2f6d465f542b0b2df3509f66e0bccba"}
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.203717 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12e5aaa99e0efa533426908dca731dc7c2f6d465f542b0b2df3509f66e0bccba"
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.203787 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-425dh"
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.205394 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82036277-9b0e-4efd-8da5-9463b9998096-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "82036277-9b0e-4efd-8da5-9463b9998096" (UID: "82036277-9b0e-4efd-8da5-9463b9998096"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.247586 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82036277-9b0e-4efd-8da5-9463b9998096-config" (OuterVolumeSpecName: "config") pod "82036277-9b0e-4efd-8da5-9463b9998096" (UID: "82036277-9b0e-4efd-8da5-9463b9998096"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.251391 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-74bbf9c4b-kjq9x"]
Jan 09 13:50:52 crc kubenswrapper[4919]: E0109 13:50:52.251791 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82036277-9b0e-4efd-8da5-9463b9998096" containerName="init"
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.251807 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="82036277-9b0e-4efd-8da5-9463b9998096" containerName="init"
Jan 09 13:50:52 crc kubenswrapper[4919]: E0109 13:50:52.251824 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82036277-9b0e-4efd-8da5-9463b9998096" containerName="dnsmasq-dns"
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.251831 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="82036277-9b0e-4efd-8da5-9463b9998096" containerName="dnsmasq-dns"
Jan 09 13:50:52 crc kubenswrapper[4919]: E0109 13:50:52.251840 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93e28fcf-1c97-40cf-bcdc-d63d2af19499" containerName="placement-db-sync"
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.251846 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="93e28fcf-1c97-40cf-bcdc-d63d2af19499" containerName="placement-db-sync"
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.252095 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="82036277-9b0e-4efd-8da5-9463b9998096" containerName="dnsmasq-dns"
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.252140 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="93e28fcf-1c97-40cf-bcdc-d63d2af19499" containerName="placement-db-sync"
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.253646 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-74bbf9c4b-kjq9x"
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.254558 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwwhf\" (UniqueName: \"kubernetes.io/projected/82036277-9b0e-4efd-8da5-9463b9998096-kube-api-access-gwwhf\") on node \"crc\" DevicePath \"\""
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.254580 4919 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82036277-9b0e-4efd-8da5-9463b9998096-config\") on node \"crc\" DevicePath \"\""
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.254591 4919 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82036277-9b0e-4efd-8da5-9463b9998096-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.257777 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.258962 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.259080 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82036277-9b0e-4efd-8da5-9463b9998096-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "82036277-9b0e-4efd-8da5-9463b9998096" (UID: "82036277-9b0e-4efd-8da5-9463b9998096"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.259724 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-hvfgk"
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.259909 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.260189 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-74bbf9c4b-kjq9x"]
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.261005 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.268576 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82036277-9b0e-4efd-8da5-9463b9998096-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "82036277-9b0e-4efd-8da5-9463b9998096" (UID: "82036277-9b0e-4efd-8da5-9463b9998096"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.269153 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6f8cb849-cmf6h"
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.269743 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f8cb849-cmf6h" event={"ID":"82036277-9b0e-4efd-8da5-9463b9998096","Type":"ContainerDied","Data":"5da999fc322046caae7652bca8503ae93d9ba6625daa09989c69bda0cfd6eb9e"}
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.269786 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.269806 4919 scope.go:117] "RemoveContainer" containerID="51e6c8a8c3e973d2fb93f48fb04960f0d0478dc40a06c0f4a87fe90de4097611"
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.270012 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.283151 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82036277-9b0e-4efd-8da5-9463b9998096-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "82036277-9b0e-4efd-8da5-9463b9998096" (UID: "82036277-9b0e-4efd-8da5-9463b9998096"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.349430 4919 scope.go:117] "RemoveContainer" containerID="ee65f0ab8fb0898ae4212abe01ee302241ee9600a1c79e422a7250dada6c5296"
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.357005 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-878d9\" (UniqueName: \"kubernetes.io/projected/aafcf4ee-61ee-448a-91d4-d3b215b2c42e-kube-api-access-878d9\") pod \"placement-74bbf9c4b-kjq9x\" (UID: \"aafcf4ee-61ee-448a-91d4-d3b215b2c42e\") " pod="openstack/placement-74bbf9c4b-kjq9x"
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.357067 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aafcf4ee-61ee-448a-91d4-d3b215b2c42e-combined-ca-bundle\") pod \"placement-74bbf9c4b-kjq9x\" (UID: \"aafcf4ee-61ee-448a-91d4-d3b215b2c42e\") " pod="openstack/placement-74bbf9c4b-kjq9x"
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.357083 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aafcf4ee-61ee-448a-91d4-d3b215b2c42e-internal-tls-certs\") pod \"placement-74bbf9c4b-kjq9x\" (UID: \"aafcf4ee-61ee-448a-91d4-d3b215b2c42e\") " pod="openstack/placement-74bbf9c4b-kjq9x"
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.357108 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aafcf4ee-61ee-448a-91d4-d3b215b2c42e-config-data\") pod \"placement-74bbf9c4b-kjq9x\" (UID: \"aafcf4ee-61ee-448a-91d4-d3b215b2c42e\") " pod="openstack/placement-74bbf9c4b-kjq9x"
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.357146 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aafcf4ee-61ee-448a-91d4-d3b215b2c42e-scripts\") pod \"placement-74bbf9c4b-kjq9x\" (UID: \"aafcf4ee-61ee-448a-91d4-d3b215b2c42e\") " pod="openstack/placement-74bbf9c4b-kjq9x"
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.357192 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aafcf4ee-61ee-448a-91d4-d3b215b2c42e-logs\") pod \"placement-74bbf9c4b-kjq9x\" (UID: \"aafcf4ee-61ee-448a-91d4-d3b215b2c42e\") " pod="openstack/placement-74bbf9c4b-kjq9x"
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.357252 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aafcf4ee-61ee-448a-91d4-d3b215b2c42e-public-tls-certs\") pod \"placement-74bbf9c4b-kjq9x\" (UID: \"aafcf4ee-61ee-448a-91d4-d3b215b2c42e\") " pod="openstack/placement-74bbf9c4b-kjq9x"
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.357334 4919 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82036277-9b0e-4efd-8da5-9463b9998096-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.357355 4919 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82036277-9b0e-4efd-8da5-9463b9998096-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.357470 4919 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82036277-9b0e-4efd-8da5-9463b9998096-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.441482 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6575bd5545-2lr88"]
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.459360 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aafcf4ee-61ee-448a-91d4-d3b215b2c42e-public-tls-certs\") pod \"placement-74bbf9c4b-kjq9x\" (UID: \"aafcf4ee-61ee-448a-91d4-d3b215b2c42e\") " pod="openstack/placement-74bbf9c4b-kjq9x"
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.459452 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-878d9\" (UniqueName: \"kubernetes.io/projected/aafcf4ee-61ee-448a-91d4-d3b215b2c42e-kube-api-access-878d9\") pod \"placement-74bbf9c4b-kjq9x\" (UID: \"aafcf4ee-61ee-448a-91d4-d3b215b2c42e\") " pod="openstack/placement-74bbf9c4b-kjq9x"
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.459496 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aafcf4ee-61ee-448a-91d4-d3b215b2c42e-combined-ca-bundle\") pod \"placement-74bbf9c4b-kjq9x\" (UID: \"aafcf4ee-61ee-448a-91d4-d3b215b2c42e\") " pod="openstack/placement-74bbf9c4b-kjq9x"
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.459522 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aafcf4ee-61ee-448a-91d4-d3b215b2c42e-internal-tls-certs\") pod \"placement-74bbf9c4b-kjq9x\" (UID: \"aafcf4ee-61ee-448a-91d4-d3b215b2c42e\") " pod="openstack/placement-74bbf9c4b-kjq9x"
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.459558 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aafcf4ee-61ee-448a-91d4-d3b215b2c42e-config-data\") pod \"placement-74bbf9c4b-kjq9x\" (UID: \"aafcf4ee-61ee-448a-91d4-d3b215b2c42e\") " pod="openstack/placement-74bbf9c4b-kjq9x"
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.459624 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aafcf4ee-61ee-448a-91d4-d3b215b2c42e-scripts\") pod \"placement-74bbf9c4b-kjq9x\" (UID: \"aafcf4ee-61ee-448a-91d4-d3b215b2c42e\") " pod="openstack/placement-74bbf9c4b-kjq9x"
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.459721 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aafcf4ee-61ee-448a-91d4-d3b215b2c42e-logs\") pod \"placement-74bbf9c4b-kjq9x\" (UID: \"aafcf4ee-61ee-448a-91d4-d3b215b2c42e\") " pod="openstack/placement-74bbf9c4b-kjq9x"
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.462567 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aafcf4ee-61ee-448a-91d4-d3b215b2c42e-logs\") pod \"placement-74bbf9c4b-kjq9x\" (UID: \"aafcf4ee-61ee-448a-91d4-d3b215b2c42e\") " pod="openstack/placement-74bbf9c4b-kjq9x"
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.464459 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aafcf4ee-61ee-448a-91d4-d3b215b2c42e-internal-tls-certs\") pod \"placement-74bbf9c4b-kjq9x\" (UID: \"aafcf4ee-61ee-448a-91d4-d3b215b2c42e\") " pod="openstack/placement-74bbf9c4b-kjq9x"
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.467387 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aafcf4ee-61ee-448a-91d4-d3b215b2c42e-public-tls-certs\") pod \"placement-74bbf9c4b-kjq9x\" (UID: \"aafcf4ee-61ee-448a-91d4-d3b215b2c42e\") " pod="openstack/placement-74bbf9c4b-kjq9x"
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.467764 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aafcf4ee-61ee-448a-91d4-d3b215b2c42e-config-data\") pod \"placement-74bbf9c4b-kjq9x\" (UID: \"aafcf4ee-61ee-448a-91d4-d3b215b2c42e\") " pod="openstack/placement-74bbf9c4b-kjq9x"
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.471624 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aafcf4ee-61ee-448a-91d4-d3b215b2c42e-scripts\") pod \"placement-74bbf9c4b-kjq9x\" (UID: \"aafcf4ee-61ee-448a-91d4-d3b215b2c42e\") " pod="openstack/placement-74bbf9c4b-kjq9x"
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.473356 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aafcf4ee-61ee-448a-91d4-d3b215b2c42e-combined-ca-bundle\") pod \"placement-74bbf9c4b-kjq9x\" (UID: \"aafcf4ee-61ee-448a-91d4-d3b215b2c42e\") " pod="openstack/placement-74bbf9c4b-kjq9x"
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.483007 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-878d9\" (UniqueName: \"kubernetes.io/projected/aafcf4ee-61ee-448a-91d4-d3b215b2c42e-kube-api-access-878d9\") pod \"placement-74bbf9c4b-kjq9x\" (UID: \"aafcf4ee-61ee-448a-91d4-d3b215b2c42e\") " pod="openstack/placement-74bbf9c4b-kjq9x"
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.606308 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-cmf6h"]
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.613026 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-cmf6h"]
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.638508 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-74bbf9c4b-kjq9x"
Jan 09 13:50:52 crc kubenswrapper[4919]: I0109 13:50:52.796507 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82036277-9b0e-4efd-8da5-9463b9998096" path="/var/lib/kubelet/pods/82036277-9b0e-4efd-8da5-9463b9998096/volumes"
Jan 09 13:50:53 crc kubenswrapper[4919]: I0109 13:50:53.278656 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-vz5pd" event={"ID":"0a9f81fc-067d-404d-b104-bba333d3911a","Type":"ContainerStarted","Data":"df6daf2cbbff0faad419762c41563157d6bc7046e79029c50803e04a858dbbc8"}
Jan 09 13:50:53 crc kubenswrapper[4919]: I0109 13:50:53.283698 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fce892da-35ae-4435-a61a-1ee629ddb17e","Type":"ContainerStarted","Data":"85f656d977b0742903f11a97fd156d9f202368df9b13502fac1357b55b6a390a"}
Jan 09 13:50:53 crc kubenswrapper[4919]: I0109 13:50:53.286061 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-9sb8m" event={"ID":"bec76c49-6c38-4168-ac7b-087460106d25","Type":"ContainerStarted","Data":"7a478b94cf5b0b6db679856218fe283b1b21278f01c109914d4ae4d4c0f1c30a"}
Jan 09 13:50:53 crc kubenswrapper[4919]: I0109 13:50:53.288064 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6575bd5545-2lr88" event={"ID":"22246922-04ad-4013-a96a-71e00093dbed","Type":"ContainerStarted","Data":"3cfaaa11fe6966ef0d96247e471dd634c70dda871b4d1fa94ded1a30da8943a9"}
Jan 09 13:50:53 crc kubenswrapper[4919]: I0109 13:50:53.288095 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6575bd5545-2lr88" event={"ID":"22246922-04ad-4013-a96a-71e00093dbed","Type":"ContainerStarted","Data":"89334a40e2def8128d8d713cdda9d81fa4f40e1e87f02bd8dd23f2b330c5a51e"}
Jan 09 13:50:53 crc kubenswrapper[4919]: I0109 13:50:53.288234 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-6575bd5545-2lr88"
Jan 09 13:50:53 crc kubenswrapper[4919]: I0109 13:50:53.290259 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2","Type":"ContainerStarted","Data":"843d5a3226a10c1031b29bd041c3a0b80a659a9972449745f80c453e0d0dd7d3"}
Jan 09 13:50:53 crc kubenswrapper[4919]: I0109 13:50:53.303446 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-vz5pd" podStartSLOduration=3.643757722 podStartE2EDuration="48.303429284s" podCreationTimestamp="2026-01-09 13:50:05 +0000 UTC" firstStartedPulling="2026-01-09 13:50:07.153514987 +0000 UTC m=+1186.701354437" lastFinishedPulling="2026-01-09 13:50:51.813186549 +0000 UTC m=+1231.361025999" observedRunningTime="2026-01-09 13:50:53.294558613 +0000 UTC m=+1232.842398063" watchObservedRunningTime="2026-01-09 13:50:53.303429284 +0000 UTC m=+1232.851268734"
Jan 09 13:50:53 crc kubenswrapper[4919]: I0109 13:50:53.318643 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-74bbf9c4b-kjq9x"]
Jan 09 13:50:53 crc kubenswrapper[4919]: I0109 13:50:53.320171 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-6575bd5545-2lr88" podStartSLOduration=9.32014985 podStartE2EDuration="9.32014985s" podCreationTimestamp="2026-01-09 13:50:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:50:53.31370853 +0000 UTC m=+1232.861547980" watchObservedRunningTime="2026-01-09 13:50:53.32014985 +0000 UTC m=+1232.867989300"
Jan 09 13:50:53 crc kubenswrapper[4919]: I0109 13:50:53.356546 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-9sb8m" podStartSLOduration=3.822965007 podStartE2EDuration="48.356524654s" podCreationTimestamp="2026-01-09 13:50:05 +0000 UTC" firstStartedPulling="2026-01-09 13:50:07.305844004 +0000 UTC m=+1186.853683454" lastFinishedPulling="2026-01-09 13:50:51.839403651 +0000 UTC m=+1231.387243101" observedRunningTime="2026-01-09 13:50:53.332789724 +0000 UTC m=+1232.880629184" watchObservedRunningTime="2026-01-09 13:50:53.356524654 +0000 UTC m=+1232.904364104"
Jan 09 13:50:53 crc kubenswrapper[4919]: I0109 13:50:53.389699 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=11.389677798 podStartE2EDuration="11.389677798s" podCreationTimestamp="2026-01-09 13:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:50:53.361753984 +0000 UTC m=+1232.909593424" watchObservedRunningTime="2026-01-09 13:50:53.389677798 +0000 UTC m=+1232.937517248"
Jan 09 13:50:54 crc kubenswrapper[4919]: I0109 13:50:54.310889 4919 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 09 13:50:54 crc kubenswrapper[4919]: I0109 13:50:54.311312 4919 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 09 13:50:54 crc kubenswrapper[4919]: I0109 13:50:54.310813 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-74bbf9c4b-kjq9x" event={"ID":"aafcf4ee-61ee-448a-91d4-d3b215b2c42e","Type":"ContainerStarted","Data":"c249565eb28548e709cd1da65b9b19fcfce01da8c0115ca1cc2c89b19f5a6658"}
Jan 09 13:50:54 crc kubenswrapper[4919]: I0109 13:50:54.311346 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-74bbf9c4b-kjq9x" event={"ID":"aafcf4ee-61ee-448a-91d4-d3b215b2c42e","Type":"ContainerStarted","Data":"75a539e14f9c970c1760c77d0e682af609a859b1545c85631d181512a4a9f0d2"}
Jan 09 13:50:54 crc kubenswrapper[4919]: I0109 13:50:54.311364 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-74bbf9c4b-kjq9x" event={"ID":"aafcf4ee-61ee-448a-91d4-d3b215b2c42e","Type":"ContainerStarted","Data":"4e73f138c8017653fe66fc54d7dfdb57868203ff7a007fefc0711bbf692f85d1"}
Jan 09 13:50:54 crc kubenswrapper[4919]: I0109 13:50:54.312036 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-74bbf9c4b-kjq9x"
Jan 09 13:50:54 crc kubenswrapper[4919]: I0109 13:50:54.338305 4919 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7bdd978ccd-tx6fx" podUID="158e1b10-ad5e-4a44-a3be-630a2d45bfdc" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused"
Jan 09 13:50:54 crc kubenswrapper[4919]: I0109 13:50:54.536893 4919 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-75dd96cc4d-xnspb" podUID="db2aeda5-21fd-4b61-bb59-d8d0b78884c2" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.153:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.153:8443: connect: connection refused"
Jan 09 13:50:54 crc kubenswrapper[4919]: I0109 13:50:54.998698 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 09 13:50:55 crc kubenswrapper[4919]: I0109 13:50:55.021529 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-74bbf9c4b-kjq9x" podStartSLOduration=3.021513153 podStartE2EDuration="3.021513153s" podCreationTimestamp="2026-01-09 13:50:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:50:54.356516122 +0000 UTC m=+1233.904355572" watchObservedRunningTime="2026-01-09 13:50:55.021513153 +0000 UTC m=+1234.569352603"
Jan 09 13:50:55 crc kubenswrapper[4919]: I0109 13:50:55.307253 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 09 13:50:55 crc kubenswrapper[4919]: I0109 13:50:55.322104 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-74bbf9c4b-kjq9x"
Jan 09 13:50:56 crc kubenswrapper[4919]: I0109 13:50:56.334158 4919 generic.go:334] "Generic (PLEG): container finished" podID="bec76c49-6c38-4168-ac7b-087460106d25" containerID="7a478b94cf5b0b6db679856218fe283b1b21278f01c109914d4ae4d4c0f1c30a" exitCode=0
Jan 09 13:50:56 crc kubenswrapper[4919]: I0109 13:50:56.334269 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-9sb8m" event={"ID":"bec76c49-6c38-4168-ac7b-087460106d25","Type":"ContainerDied","Data":"7a478b94cf5b0b6db679856218fe283b1b21278f01c109914d4ae4d4c0f1c30a"}
Jan 09 13:50:58 crc kubenswrapper[4919]: I0109 13:50:58.352478 4919 generic.go:334] "Generic (PLEG): container finished" podID="0a9f81fc-067d-404d-b104-bba333d3911a" containerID="df6daf2cbbff0faad419762c41563157d6bc7046e79029c50803e04a858dbbc8" exitCode=0
Jan 09 13:50:58 crc kubenswrapper[4919]: I0109 13:50:58.352572 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-vz5pd" event={"ID":"0a9f81fc-067d-404d-b104-bba333d3911a","Type":"ContainerDied","Data":"df6daf2cbbff0faad419762c41563157d6bc7046e79029c50803e04a858dbbc8"}
Jan 09 13:51:02 crc kubenswrapper[4919]: I0109 13:51:02.176337 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-9sb8m"
Jan 09 13:51:02 crc kubenswrapper[4919]: I0109 13:51:02.308457 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-vz5pd"
Jan 09 13:51:02 crc kubenswrapper[4919]: I0109 13:51:02.326947 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bec76c49-6c38-4168-ac7b-087460106d25-combined-ca-bundle\") pod \"bec76c49-6c38-4168-ac7b-087460106d25\" (UID: \"bec76c49-6c38-4168-ac7b-087460106d25\") "
Jan 09 13:51:02 crc kubenswrapper[4919]: I0109 13:51:02.327111 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-982gj\" (UniqueName: \"kubernetes.io/projected/bec76c49-6c38-4168-ac7b-087460106d25-kube-api-access-982gj\") pod \"bec76c49-6c38-4168-ac7b-087460106d25\" (UID: \"bec76c49-6c38-4168-ac7b-087460106d25\") "
Jan 09 13:51:02 crc kubenswrapper[4919]: I0109 13:51:02.327153 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bec76c49-6c38-4168-ac7b-087460106d25-db-sync-config-data\") pod \"bec76c49-6c38-4168-ac7b-087460106d25\" (UID: \"bec76c49-6c38-4168-ac7b-087460106d25\") "
Jan 09 13:51:02 crc kubenswrapper[4919]: I0109 13:51:02.336737 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bec76c49-6c38-4168-ac7b-087460106d25-kube-api-access-982gj" (OuterVolumeSpecName: "kube-api-access-982gj") pod "bec76c49-6c38-4168-ac7b-087460106d25" (UID: "bec76c49-6c38-4168-ac7b-087460106d25"). InnerVolumeSpecName "kube-api-access-982gj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 13:51:02 crc kubenswrapper[4919]: I0109 13:51:02.337939 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bec76c49-6c38-4168-ac7b-087460106d25-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "bec76c49-6c38-4168-ac7b-087460106d25" (UID: "bec76c49-6c38-4168-ac7b-087460106d25"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 13:51:02 crc kubenswrapper[4919]: I0109 13:51:02.372786 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bec76c49-6c38-4168-ac7b-087460106d25-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bec76c49-6c38-4168-ac7b-087460106d25" (UID: "bec76c49-6c38-4168-ac7b-087460106d25"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 13:51:02 crc kubenswrapper[4919]: I0109 13:51:02.405876 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-vz5pd" event={"ID":"0a9f81fc-067d-404d-b104-bba333d3911a","Type":"ContainerDied","Data":"c6b0b226e71e563793d5d8ba1a5e2b23666bf61285a03da27e1c18aa9190a0eb"}
Jan 09 13:51:02 crc kubenswrapper[4919]: I0109 13:51:02.405924 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6b0b226e71e563793d5d8ba1a5e2b23666bf61285a03da27e1c18aa9190a0eb"
Jan 09 13:51:02 crc kubenswrapper[4919]: I0109 13:51:02.405991 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-vz5pd"
Jan 09 13:51:02 crc kubenswrapper[4919]: I0109 13:51:02.408281 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-9sb8m" event={"ID":"bec76c49-6c38-4168-ac7b-087460106d25","Type":"ContainerDied","Data":"25ec9a1e6e956b733a250dbac76c6dbdf768a99a2ad7c21ebaa4d35e2e7d3c3b"}
Jan 09 13:51:02 crc kubenswrapper[4919]: I0109 13:51:02.408315 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25ec9a1e6e956b733a250dbac76c6dbdf768a99a2ad7c21ebaa4d35e2e7d3c3b"
Jan 09 13:51:02 crc kubenswrapper[4919]: I0109 13:51:02.408381 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-9sb8m"
Jan 09 13:51:02 crc kubenswrapper[4919]: I0109 13:51:02.413352 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2","Type":"ContainerStarted","Data":"9ea28ac6289796bb315c4ba1066c6b3fbe2b9be360102a8d9c166e7e30fa123a"}
Jan 09 13:51:02 crc kubenswrapper[4919]: I0109 13:51:02.413573 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2" containerName="ceilometer-central-agent" containerID="cri-o://bf9b7f9a1d727c6b93dc2c2db21aad00674c0c5e4b9f563d3bec4ed53f66dab4" gracePeriod=30
Jan 09 13:51:02 crc kubenswrapper[4919]: I0109 13:51:02.414408 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2" containerName="sg-core" containerID="cri-o://843d5a3226a10c1031b29bd041c3a0b80a659a9972449745f80c453e0d0dd7d3" gracePeriod=30
Jan 09 13:51:02 crc kubenswrapper[4919]: I0109 13:51:02.414554 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2" containerName="proxy-httpd" containerID="cri-o://9ea28ac6289796bb315c4ba1066c6b3fbe2b9be360102a8d9c166e7e30fa123a" gracePeriod=30
Jan 09 13:51:02 crc kubenswrapper[4919]: I0109 13:51:02.414655 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2" containerName="ceilometer-notification-agent" containerID="cri-o://e9e27490ca5cceadd32c796cb2dfb1ec9b49b2b17c3d9a47c725454b662ce14f" gracePeriod=30
Jan 09 13:51:02 crc kubenswrapper[4919]: I0109 13:51:02.414910 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 09 13:51:02 crc kubenswrapper[4919]: I0109 13:51:02.428449 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9rc9\" (UniqueName: \"kubernetes.io/projected/0a9f81fc-067d-404d-b104-bba333d3911a-kube-api-access-x9rc9\") pod \"0a9f81fc-067d-404d-b104-bba333d3911a\" (UID: \"0a9f81fc-067d-404d-b104-bba333d3911a\") "
Jan 09 13:51:02 crc kubenswrapper[4919]: I0109 13:51:02.428511 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a9f81fc-067d-404d-b104-bba333d3911a-scripts\") pod \"0a9f81fc-067d-404d-b104-bba333d3911a\" (UID: \"0a9f81fc-067d-404d-b104-bba333d3911a\") "
Jan 09 13:51:02 crc kubenswrapper[4919]: I0109 13:51:02.428574 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0a9f81fc-067d-404d-b104-bba333d3911a-db-sync-config-data\") pod \"0a9f81fc-067d-404d-b104-bba333d3911a\" (UID: \"0a9f81fc-067d-404d-b104-bba333d3911a\") "
Jan 09 13:51:02 crc kubenswrapper[4919]: I0109 13:51:02.428738 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a9f81fc-067d-404d-b104-bba333d3911a-config-data\") pod \"0a9f81fc-067d-404d-b104-bba333d3911a\" (UID: \"0a9f81fc-067d-404d-b104-bba333d3911a\") "
Jan 09 13:51:02 crc kubenswrapper[4919]: I0109 13:51:02.428786 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0a9f81fc-067d-404d-b104-bba333d3911a-etc-machine-id\") pod \"0a9f81fc-067d-404d-b104-bba333d3911a\" (UID: \"0a9f81fc-067d-404d-b104-bba333d3911a\") "
Jan 09 13:51:02 crc kubenswrapper[4919]: I0109 13:51:02.428814 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a9f81fc-067d-404d-b104-bba333d3911a-combined-ca-bundle\") pod \"0a9f81fc-067d-404d-b104-bba333d3911a\" (UID: \"0a9f81fc-067d-404d-b104-bba333d3911a\") "
Jan 09 13:51:02 crc kubenswrapper[4919]: I0109 13:51:02.429248 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-982gj\" (UniqueName: \"kubernetes.io/projected/bec76c49-6c38-4168-ac7b-087460106d25-kube-api-access-982gj\") on node \"crc\" DevicePath \"\""
Jan 09 13:51:02 crc kubenswrapper[4919]: I0109 13:51:02.429639 4919 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bec76c49-6c38-4168-ac7b-087460106d25-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 09 13:51:02 crc kubenswrapper[4919]: I0109 13:51:02.429648 4919 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bec76c49-6c38-4168-ac7b-087460106d25-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 09 13:51:02 crc kubenswrapper[4919]: I0109 13:51:02.432183 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a9f81fc-067d-404d-b104-bba333d3911a-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "0a9f81fc-067d-404d-b104-bba333d3911a" (UID: "0a9f81fc-067d-404d-b104-bba333d3911a"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 13:51:02 crc kubenswrapper[4919]: I0109 13:51:02.433782 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a9f81fc-067d-404d-b104-bba333d3911a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "0a9f81fc-067d-404d-b104-bba333d3911a" (UID: "0a9f81fc-067d-404d-b104-bba333d3911a"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 09 13:51:02 crc kubenswrapper[4919]: I0109 13:51:02.441611 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a9f81fc-067d-404d-b104-bba333d3911a-kube-api-access-x9rc9" (OuterVolumeSpecName: "kube-api-access-x9rc9") pod "0a9f81fc-067d-404d-b104-bba333d3911a" (UID: "0a9f81fc-067d-404d-b104-bba333d3911a"). InnerVolumeSpecName "kube-api-access-x9rc9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 13:51:02 crc kubenswrapper[4919]: I0109 13:51:02.444196 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.954313949 podStartE2EDuration="57.444174394s" podCreationTimestamp="2026-01-09 13:50:05 +0000 UTC" firstStartedPulling="2026-01-09 13:50:07.679330778 +0000 UTC m=+1187.227170228" lastFinishedPulling="2026-01-09 13:51:02.169191223 +0000 UTC m=+1241.717030673" observedRunningTime="2026-01-09 13:51:02.43798894 +0000 UTC m=+1241.985828400" watchObservedRunningTime="2026-01-09 13:51:02.444174394 +0000 UTC m=+1241.992013844"
Jan 09 13:51:02 crc kubenswrapper[4919]: I0109 13:51:02.456190 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a9f81fc-067d-404d-b104-bba333d3911a-scripts" (OuterVolumeSpecName: "scripts") pod "0a9f81fc-067d-404d-b104-bba333d3911a" (UID: "0a9f81fc-067d-404d-b104-bba333d3911a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 13:51:02 crc kubenswrapper[4919]: I0109 13:51:02.459391 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a9f81fc-067d-404d-b104-bba333d3911a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0a9f81fc-067d-404d-b104-bba333d3911a" (UID: "0a9f81fc-067d-404d-b104-bba333d3911a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 13:51:02 crc kubenswrapper[4919]: I0109 13:51:02.492070 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a9f81fc-067d-404d-b104-bba333d3911a-config-data" (OuterVolumeSpecName: "config-data") pod "0a9f81fc-067d-404d-b104-bba333d3911a" (UID: "0a9f81fc-067d-404d-b104-bba333d3911a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 13:51:02 crc kubenswrapper[4919]: I0109 13:51:02.531789 4919 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a9f81fc-067d-404d-b104-bba333d3911a-scripts\") on node \"crc\" DevicePath \"\""
Jan 09 13:51:02 crc kubenswrapper[4919]: I0109 13:51:02.531844 4919 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0a9f81fc-067d-404d-b104-bba333d3911a-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 09 13:51:02 crc kubenswrapper[4919]: I0109 13:51:02.531860 4919 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a9f81fc-067d-404d-b104-bba333d3911a-config-data\") on node \"crc\" DevicePath \"\""
Jan 09 13:51:02 crc kubenswrapper[4919]: I0109 13:51:02.531869 4919 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0a9f81fc-067d-404d-b104-bba333d3911a-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 09 13:51:02 crc kubenswrapper[4919]: I0109 13:51:02.531878 4919 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a9f81fc-067d-404d-b104-bba333d3911a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 09 13:51:02 crc kubenswrapper[4919]: I0109 13:51:02.531889 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9rc9\" (UniqueName: \"kubernetes.io/projected/0a9f81fc-067d-404d-b104-bba333d3911a-kube-api-access-x9rc9\") on node \"crc\" DevicePath \"\""
Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.257896 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.257937 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.289721 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.304653 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.426618 4919 generic.go:334] "Generic (PLEG): container finished" podID="ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2" containerID="843d5a3226a10c1031b29bd041c3a0b80a659a9972449745f80c453e0d0dd7d3" exitCode=2
Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.426914 4919 generic.go:334] "Generic (PLEG): container finished" podID="ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2" containerID="bf9b7f9a1d727c6b93dc2c2db21aad00674c0c5e4b9f563d3bec4ed53f66dab4" exitCode=0
Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.426744 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2","Type":"ContainerDied","Data":"843d5a3226a10c1031b29bd041c3a0b80a659a9972449745f80c453e0d0dd7d3"}
Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.426981 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2","Type":"ContainerDied","Data":"bf9b7f9a1d727c6b93dc2c2db21aad00674c0c5e4b9f563d3bec4ed53f66dab4"}
Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.427567 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.427617 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.474198 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5d7884df69-vfc9g"]
Jan 09 13:51:03 crc kubenswrapper[4919]: E0109 13:51:03.474691 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a9f81fc-067d-404d-b104-bba333d3911a" containerName="cinder-db-sync"
Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.474709 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a9f81fc-067d-404d-b104-bba333d3911a" containerName="cinder-db-sync"
Jan 09 13:51:03 crc kubenswrapper[4919]: E0109 13:51:03.474729 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bec76c49-6c38-4168-ac7b-087460106d25" containerName="barbican-db-sync"
Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.474736 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="bec76c49-6c38-4168-ac7b-087460106d25" containerName="barbican-db-sync"
Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.474896 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a9f81fc-067d-404d-b104-bba333d3911a" containerName="cinder-db-sync"
Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.474935 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="bec76c49-6c38-4168-ac7b-087460106d25" containerName="barbican-db-sync"
Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.476399 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5d7884df69-vfc9g"
Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.485190 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.485504 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.501048 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-6bpwk"
Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.532941 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5d7884df69-vfc9g"]
Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.590790 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5bc67fd74-frwbh"]
Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.592868 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5bc67fd74-frwbh"
Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.600967 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.606679 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-66cdd4b5b5-x2c66"]
Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.608264 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66cdd4b5b5-x2c66"
Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.628284 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5bc67fd74-frwbh"]
Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.643232 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66cdd4b5b5-x2c66"]
Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.660562 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk8l9\" (UniqueName: \"kubernetes.io/projected/be2245b9-76ae-4599-ba6a-97e327453f95-kube-api-access-dk8l9\") pod \"barbican-keystone-listener-5bc67fd74-frwbh\" (UID: \"be2245b9-76ae-4599-ba6a-97e327453f95\") " pod="openstack/barbican-keystone-listener-5bc67fd74-frwbh"
Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.660629 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15fcc721-300d-4084-9fbe-756903a4f58b-combined-ca-bundle\") pod \"barbican-worker-5d7884df69-vfc9g\" (UID: \"15fcc721-300d-4084-9fbe-756903a4f58b\") " pod="openstack/barbican-worker-5d7884df69-vfc9g"
Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.660674 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/be2245b9-76ae-4599-ba6a-97e327453f95-config-data-custom\") pod \"barbican-keystone-listener-5bc67fd74-frwbh\" (UID: \"be2245b9-76ae-4599-ba6a-97e327453f95\") " pod="openstack/barbican-keystone-listener-5bc67fd74-frwbh"
Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.660720 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be2245b9-76ae-4599-ba6a-97e327453f95-logs\") pod \"barbican-keystone-listener-5bc67fd74-frwbh\" (UID: \"be2245b9-76ae-4599-ba6a-97e327453f95\") " pod="openstack/barbican-keystone-listener-5bc67fd74-frwbh"
Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.660746 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15fcc721-300d-4084-9fbe-756903a4f58b-logs\") pod \"barbican-worker-5d7884df69-vfc9g\" (UID: \"15fcc721-300d-4084-9fbe-756903a4f58b\") " pod="openstack/barbican-worker-5d7884df69-vfc9g"
Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.660770 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/15fcc721-300d-4084-9fbe-756903a4f58b-config-data-custom\") pod \"barbican-worker-5d7884df69-vfc9g\" (UID: \"15fcc721-300d-4084-9fbe-756903a4f58b\") " pod="openstack/barbican-worker-5d7884df69-vfc9g"
Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.660787 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmjh5\" (UniqueName: \"kubernetes.io/projected/15fcc721-300d-4084-9fbe-756903a4f58b-kube-api-access-tmjh5\") pod \"barbican-worker-5d7884df69-vfc9g\" (UID: \"15fcc721-300d-4084-9fbe-756903a4f58b\") " pod="openstack/barbican-worker-5d7884df69-vfc9g"
Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.660810 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume
\"config-data\" (UniqueName: \"kubernetes.io/secret/15fcc721-300d-4084-9fbe-756903a4f58b-config-data\") pod \"barbican-worker-5d7884df69-vfc9g\" (UID: \"15fcc721-300d-4084-9fbe-756903a4f58b\") " pod="openstack/barbican-worker-5d7884df69-vfc9g" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.660912 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be2245b9-76ae-4599-ba6a-97e327453f95-config-data\") pod \"barbican-keystone-listener-5bc67fd74-frwbh\" (UID: \"be2245b9-76ae-4599-ba6a-97e327453f95\") " pod="openstack/barbican-keystone-listener-5bc67fd74-frwbh" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.660982 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be2245b9-76ae-4599-ba6a-97e327453f95-combined-ca-bundle\") pod \"barbican-keystone-listener-5bc67fd74-frwbh\" (UID: \"be2245b9-76ae-4599-ba6a-97e327453f95\") " pod="openstack/barbican-keystone-listener-5bc67fd74-frwbh" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.735819 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.753974 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.761218 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.762632 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-v8q9p" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.762820 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.762959 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.763077 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.765149 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/be2245b9-76ae-4599-ba6a-97e327453f95-config-data-custom\") pod \"barbican-keystone-listener-5bc67fd74-frwbh\" (UID: \"be2245b9-76ae-4599-ba6a-97e327453f95\") " pod="openstack/barbican-keystone-listener-5bc67fd74-frwbh" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.765251 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be2245b9-76ae-4599-ba6a-97e327453f95-logs\") pod \"barbican-keystone-listener-5bc67fd74-frwbh\" (UID: \"be2245b9-76ae-4599-ba6a-97e327453f95\") " pod="openstack/barbican-keystone-listener-5bc67fd74-frwbh" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.765286 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea780d2d-30ee-485d-9077-39c6f364d5a3-dns-svc\") pod \"dnsmasq-dns-66cdd4b5b5-x2c66\" (UID: \"ea780d2d-30ee-485d-9077-39c6f364d5a3\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-x2c66" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.765757 4919 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be2245b9-76ae-4599-ba6a-97e327453f95-logs\") pod \"barbican-keystone-listener-5bc67fd74-frwbh\" (UID: \"be2245b9-76ae-4599-ba6a-97e327453f95\") " pod="openstack/barbican-keystone-listener-5bc67fd74-frwbh" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.772125 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15fcc721-300d-4084-9fbe-756903a4f58b-logs\") pod \"barbican-worker-5d7884df69-vfc9g\" (UID: \"15fcc721-300d-4084-9fbe-756903a4f58b\") " pod="openstack/barbican-worker-5d7884df69-vfc9g" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.772201 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15fcc721-300d-4084-9fbe-756903a4f58b-logs\") pod \"barbican-worker-5d7884df69-vfc9g\" (UID: \"15fcc721-300d-4084-9fbe-756903a4f58b\") " pod="openstack/barbican-worker-5d7884df69-vfc9g" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.772291 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea780d2d-30ee-485d-9077-39c6f364d5a3-config\") pod \"dnsmasq-dns-66cdd4b5b5-x2c66\" (UID: \"ea780d2d-30ee-485d-9077-39c6f364d5a3\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-x2c66" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.772350 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/15fcc721-300d-4084-9fbe-756903a4f58b-config-data-custom\") pod \"barbican-worker-5d7884df69-vfc9g\" (UID: \"15fcc721-300d-4084-9fbe-756903a4f58b\") " pod="openstack/barbican-worker-5d7884df69-vfc9g" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.772371 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmjh5\" (UniqueName: \"kubernetes.io/projected/15fcc721-300d-4084-9fbe-756903a4f58b-kube-api-access-tmjh5\") pod \"barbican-worker-5d7884df69-vfc9g\" (UID: \"15fcc721-300d-4084-9fbe-756903a4f58b\") " pod="openstack/barbican-worker-5d7884df69-vfc9g" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.772397 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea780d2d-30ee-485d-9077-39c6f364d5a3-ovsdbserver-sb\") pod \"dnsmasq-dns-66cdd4b5b5-x2c66\" (UID: \"ea780d2d-30ee-485d-9077-39c6f364d5a3\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-x2c66" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.772433 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15fcc721-300d-4084-9fbe-756903a4f58b-config-data\") pod \"barbican-worker-5d7884df69-vfc9g\" (UID: \"15fcc721-300d-4084-9fbe-756903a4f58b\") " pod="openstack/barbican-worker-5d7884df69-vfc9g" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.772449 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea780d2d-30ee-485d-9077-39c6f364d5a3-ovsdbserver-nb\") pod \"dnsmasq-dns-66cdd4b5b5-x2c66\" (UID: \"ea780d2d-30ee-485d-9077-39c6f364d5a3\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-x2c66" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.772515 4919 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be2245b9-76ae-4599-ba6a-97e327453f95-config-data\") pod \"barbican-keystone-listener-5bc67fd74-frwbh\" (UID: \"be2245b9-76ae-4599-ba6a-97e327453f95\") " pod="openstack/barbican-keystone-listener-5bc67fd74-frwbh" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.772531 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ea780d2d-30ee-485d-9077-39c6f364d5a3-dns-swift-storage-0\") pod \"dnsmasq-dns-66cdd4b5b5-x2c66\" (UID: \"ea780d2d-30ee-485d-9077-39c6f364d5a3\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-x2c66" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.772590 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be2245b9-76ae-4599-ba6a-97e327453f95-combined-ca-bundle\") pod \"barbican-keystone-listener-5bc67fd74-frwbh\" (UID: \"be2245b9-76ae-4599-ba6a-97e327453f95\") " pod="openstack/barbican-keystone-listener-5bc67fd74-frwbh" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.772626 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xpkj\" (UniqueName: \"kubernetes.io/projected/ea780d2d-30ee-485d-9077-39c6f364d5a3-kube-api-access-5xpkj\") pod \"dnsmasq-dns-66cdd4b5b5-x2c66\" (UID: \"ea780d2d-30ee-485d-9077-39c6f364d5a3\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-x2c66" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.772654 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dk8l9\" (UniqueName: \"kubernetes.io/projected/be2245b9-76ae-4599-ba6a-97e327453f95-kube-api-access-dk8l9\") pod \"barbican-keystone-listener-5bc67fd74-frwbh\" (UID: \"be2245b9-76ae-4599-ba6a-97e327453f95\") " pod="openstack/barbican-keystone-listener-5bc67fd74-frwbh" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.772754 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15fcc721-300d-4084-9fbe-756903a4f58b-combined-ca-bundle\") pod \"barbican-worker-5d7884df69-vfc9g\" (UID: \"15fcc721-300d-4084-9fbe-756903a4f58b\") " pod="openstack/barbican-worker-5d7884df69-vfc9g" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.777314 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/be2245b9-76ae-4599-ba6a-97e327453f95-config-data-custom\") pod \"barbican-keystone-listener-5bc67fd74-frwbh\" (UID: \"be2245b9-76ae-4599-ba6a-97e327453f95\") " pod="openstack/barbican-keystone-listener-5bc67fd74-frwbh" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.784931 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66cdd4b5b5-x2c66"] Jan 09 13:51:03 crc kubenswrapper[4919]: E0109 13:51:03.788398 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc dns-swift-storage-0 kube-api-access-5xpkj ovsdbserver-nb ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-66cdd4b5b5-x2c66" podUID="ea780d2d-30ee-485d-9077-39c6f364d5a3" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.791559 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/15fcc721-300d-4084-9fbe-756903a4f58b-config-data-custom\") pod \"barbican-worker-5d7884df69-vfc9g\" (UID: \"15fcc721-300d-4084-9fbe-756903a4f58b\") " pod="openstack/barbican-worker-5d7884df69-vfc9g" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.792860 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15fcc721-300d-4084-9fbe-756903a4f58b-combined-ca-bundle\") pod \"barbican-worker-5d7884df69-vfc9g\" (UID: \"15fcc721-300d-4084-9fbe-756903a4f58b\") " pod="openstack/barbican-worker-5d7884df69-vfc9g" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.794049 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be2245b9-76ae-4599-ba6a-97e327453f95-config-data\") pod \"barbican-keystone-listener-5bc67fd74-frwbh\" (UID: \"be2245b9-76ae-4599-ba6a-97e327453f95\") " pod="openstack/barbican-keystone-listener-5bc67fd74-frwbh" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.800391 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be2245b9-76ae-4599-ba6a-97e327453f95-combined-ca-bundle\") pod \"barbican-keystone-listener-5bc67fd74-frwbh\" (UID: \"be2245b9-76ae-4599-ba6a-97e327453f95\") " pod="openstack/barbican-keystone-listener-5bc67fd74-frwbh" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.812876 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15fcc721-300d-4084-9fbe-756903a4f58b-config-data\") pod \"barbican-worker-5d7884df69-vfc9g\" (UID: \"15fcc721-300d-4084-9fbe-756903a4f58b\") " pod="openstack/barbican-worker-5d7884df69-vfc9g" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.823332 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dk8l9\" (UniqueName: \"kubernetes.io/projected/be2245b9-76ae-4599-ba6a-97e327453f95-kube-api-access-dk8l9\") pod \"barbican-keystone-listener-5bc67fd74-frwbh\" (UID: \"be2245b9-76ae-4599-ba6a-97e327453f95\") " pod="openstack/barbican-keystone-listener-5bc67fd74-frwbh" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.832066 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmjh5\" (UniqueName: \"kubernetes.io/projected/15fcc721-300d-4084-9fbe-756903a4f58b-kube-api-access-tmjh5\") pod \"barbican-worker-5d7884df69-vfc9g\" (UID: \"15fcc721-300d-4084-9fbe-756903a4f58b\") " pod="openstack/barbican-worker-5d7884df69-vfc9g" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.877366 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9\") " pod="openstack/cinder-scheduler-0" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.877478 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea780d2d-30ee-485d-9077-39c6f364d5a3-dns-svc\") pod \"dnsmasq-dns-66cdd4b5b5-x2c66\" (UID: \"ea780d2d-30ee-485d-9077-39c6f364d5a3\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-x2c66" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.877517 4919 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea780d2d-30ee-485d-9077-39c6f364d5a3-config\") pod \"dnsmasq-dns-66cdd4b5b5-x2c66\" (UID: \"ea780d2d-30ee-485d-9077-39c6f364d5a3\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-x2c66" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.877548 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea780d2d-30ee-485d-9077-39c6f364d5a3-ovsdbserver-sb\") pod \"dnsmasq-dns-66cdd4b5b5-x2c66\" (UID: \"ea780d2d-30ee-485d-9077-39c6f364d5a3\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-x2c66" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.877570 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea780d2d-30ee-485d-9077-39c6f364d5a3-ovsdbserver-nb\") pod \"dnsmasq-dns-66cdd4b5b5-x2c66\" (UID: \"ea780d2d-30ee-485d-9077-39c6f364d5a3\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-x2c66" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.877600 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ea780d2d-30ee-485d-9077-39c6f364d5a3-dns-swift-storage-0\") pod \"dnsmasq-dns-66cdd4b5b5-x2c66\" (UID: \"ea780d2d-30ee-485d-9077-39c6f364d5a3\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-x2c66" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.877627 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9\") " pod="openstack/cinder-scheduler-0" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.877655 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xpkj\" (UniqueName: \"kubernetes.io/projected/ea780d2d-30ee-485d-9077-39c6f364d5a3-kube-api-access-5xpkj\") pod \"dnsmasq-dns-66cdd4b5b5-x2c66\" (UID: \"ea780d2d-30ee-485d-9077-39c6f364d5a3\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-x2c66" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.877674 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9\") " pod="openstack/cinder-scheduler-0" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.877692 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9-scripts\") pod \"cinder-scheduler-0\" (UID: \"0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9\") " pod="openstack/cinder-scheduler-0" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.877712 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xmhd\" (UniqueName: \"kubernetes.io/projected/0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9-kube-api-access-9xmhd\") pod \"cinder-scheduler-0\" (UID: \"0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9\") " pod="openstack/cinder-scheduler-0" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.877736 4919 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9-config-data\") pod \"cinder-scheduler-0\" (UID: \"0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9\") " pod="openstack/cinder-scheduler-0" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.878316 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-jtvzp"] Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.879862 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea780d2d-30ee-485d-9077-39c6f364d5a3-ovsdbserver-nb\") pod \"dnsmasq-dns-66cdd4b5b5-x2c66\" (UID: \"ea780d2d-30ee-485d-9077-39c6f364d5a3\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-x2c66" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.880437 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75dbb546bf-jtvzp" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.880595 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ea780d2d-30ee-485d-9077-39c6f364d5a3-dns-swift-storage-0\") pod \"dnsmasq-dns-66cdd4b5b5-x2c66\" (UID: \"ea780d2d-30ee-485d-9077-39c6f364d5a3\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-x2c66" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.880989 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea780d2d-30ee-485d-9077-39c6f364d5a3-config\") pod \"dnsmasq-dns-66cdd4b5b5-x2c66\" (UID: \"ea780d2d-30ee-485d-9077-39c6f364d5a3\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-x2c66" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.882448 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea780d2d-30ee-485d-9077-39c6f364d5a3-ovsdbserver-sb\") pod \"dnsmasq-dns-66cdd4b5b5-x2c66\" (UID: \"ea780d2d-30ee-485d-9077-39c6f364d5a3\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-x2c66" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.884457 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea780d2d-30ee-485d-9077-39c6f364d5a3-dns-svc\") pod \"dnsmasq-dns-66cdd4b5b5-x2c66\" (UID: \"ea780d2d-30ee-485d-9077-39c6f364d5a3\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-x2c66" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.904699 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-jtvzp"] Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.918017 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-84c89c8f4-klmnp"] Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.919713 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5bc67fd74-frwbh" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.919865 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-84c89c8f4-klmnp" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.933199 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.936011 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xpkj\" (UniqueName: \"kubernetes.io/projected/ea780d2d-30ee-485d-9077-39c6f364d5a3-kube-api-access-5xpkj\") pod \"dnsmasq-dns-66cdd4b5b5-x2c66\" (UID: \"ea780d2d-30ee-485d-9077-39c6f364d5a3\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-x2c66" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.944091 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-84c89c8f4-klmnp"] Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.984483 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh5wr\" (UniqueName: \"kubernetes.io/projected/865625ee-ff29-4253-9398-c497da20c784-kube-api-access-vh5wr\") pod \"barbican-api-84c89c8f4-klmnp\" (UID: \"865625ee-ff29-4253-9398-c497da20c784\") " pod="openstack/barbican-api-84c89c8f4-klmnp" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.987296 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a226a0fa-ed83-40e1-933e-af4c16c363b2-ovsdbserver-sb\") pod \"dnsmasq-dns-75dbb546bf-jtvzp\" (UID: \"a226a0fa-ed83-40e1-933e-af4c16c363b2\") " pod="openstack/dnsmasq-dns-75dbb546bf-jtvzp" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.987491 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/865625ee-ff29-4253-9398-c497da20c784-combined-ca-bundle\") pod \"barbican-api-84c89c8f4-klmnp\" (UID: \"865625ee-ff29-4253-9398-c497da20c784\") " pod="openstack/barbican-api-84c89c8f4-klmnp" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.987669 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9\") " pod="openstack/cinder-scheduler-0" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.987767 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9\") " pod="openstack/cinder-scheduler-0" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.987847 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9-scripts\") pod \"cinder-scheduler-0\" (UID: \"0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9\") " pod="openstack/cinder-scheduler-0" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.991151 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a226a0fa-ed83-40e1-933e-af4c16c363b2-config\") pod \"dnsmasq-dns-75dbb546bf-jtvzp\" (UID: \"a226a0fa-ed83-40e1-933e-af4c16c363b2\") " pod="openstack/dnsmasq-dns-75dbb546bf-jtvzp" Jan 09 13:51:03 crc 
kubenswrapper[4919]: I0109 13:51:03.992874 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xmhd\" (UniqueName: \"kubernetes.io/projected/0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9-kube-api-access-9xmhd\") pod \"cinder-scheduler-0\" (UID: \"0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9\") " pod="openstack/cinder-scheduler-0" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.992969 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9-config-data\") pod \"cinder-scheduler-0\" (UID: \"0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9\") " pod="openstack/cinder-scheduler-0" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.993043 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/865625ee-ff29-4253-9398-c497da20c784-config-data\") pod \"barbican-api-84c89c8f4-klmnp\" (UID: \"865625ee-ff29-4253-9398-c497da20c784\") " pod="openstack/barbican-api-84c89c8f4-klmnp" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.993118 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a226a0fa-ed83-40e1-933e-af4c16c363b2-ovsdbserver-nb\") pod \"dnsmasq-dns-75dbb546bf-jtvzp\" (UID: \"a226a0fa-ed83-40e1-933e-af4c16c363b2\") " pod="openstack/dnsmasq-dns-75dbb546bf-jtvzp" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.993376 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdkkn\" (UniqueName: \"kubernetes.io/projected/a226a0fa-ed83-40e1-933e-af4c16c363b2-kube-api-access-wdkkn\") pod \"dnsmasq-dns-75dbb546bf-jtvzp\" (UID: \"a226a0fa-ed83-40e1-933e-af4c16c363b2\") " pod="openstack/dnsmasq-dns-75dbb546bf-jtvzp" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.993457 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a226a0fa-ed83-40e1-933e-af4c16c363b2-dns-svc\") pod \"dnsmasq-dns-75dbb546bf-jtvzp\" (UID: \"a226a0fa-ed83-40e1-933e-af4c16c363b2\") " pod="openstack/dnsmasq-dns-75dbb546bf-jtvzp" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.993612 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/865625ee-ff29-4253-9398-c497da20c784-config-data-custom\") pod \"barbican-api-84c89c8f4-klmnp\" (UID: \"865625ee-ff29-4253-9398-c497da20c784\") " pod="openstack/barbican-api-84c89c8f4-klmnp" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.993699 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9\") " pod="openstack/cinder-scheduler-0" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.993848 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/865625ee-ff29-4253-9398-c497da20c784-logs\") pod \"barbican-api-84c89c8f4-klmnp\" (UID: \"865625ee-ff29-4253-9398-c497da20c784\") " pod="openstack/barbican-api-84c89c8f4-klmnp" Jan 09 13:51:03 crc 
kubenswrapper[4919]: I0109 13:51:03.993957 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a226a0fa-ed83-40e1-933e-af4c16c363b2-dns-swift-storage-0\") pod \"dnsmasq-dns-75dbb546bf-jtvzp\" (UID: \"a226a0fa-ed83-40e1-933e-af4c16c363b2\") " pod="openstack/dnsmasq-dns-75dbb546bf-jtvzp" Jan 09 13:51:03 crc kubenswrapper[4919]: I0109 13:51:03.988178 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9\") " pod="openstack/cinder-scheduler-0" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:03.996791 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9\") " pod="openstack/cinder-scheduler-0" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:03.999852 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9-scripts\") pod \"cinder-scheduler-0\" (UID: \"0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9\") " pod="openstack/cinder-scheduler-0" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.007146 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9\") " pod="openstack/cinder-scheduler-0" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.010889 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9-config-data\") pod \"cinder-scheduler-0\" (UID: \"0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9\") " pod="openstack/cinder-scheduler-0" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.014614 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xmhd\" (UniqueName: \"kubernetes.io/projected/0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9-kube-api-access-9xmhd\") pod \"cinder-scheduler-0\" (UID: \"0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9\") " pod="openstack/cinder-scheduler-0" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.016386 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.018075 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.024642 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.044283 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.098810 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5d7884df69-vfc9g" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.101683 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/865625ee-ff29-4253-9398-c497da20c784-config-data-custom\") pod \"barbican-api-84c89c8f4-klmnp\" (UID: \"865625ee-ff29-4253-9398-c497da20c784\") " pod="openstack/barbican-api-84c89c8f4-klmnp" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.101745 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f94fda4-49d1-4ca0-b5d0-e062ce94a042-scripts\") pod \"cinder-api-0\" (UID: \"0f94fda4-49d1-4ca0-b5d0-e062ce94a042\") " pod="openstack/cinder-api-0" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.101782 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f94fda4-49d1-4ca0-b5d0-e062ce94a042-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0f94fda4-49d1-4ca0-b5d0-e062ce94a042\") " pod="openstack/cinder-api-0" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.101815 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a226a0fa-ed83-40e1-933e-af4c16c363b2-dns-swift-storage-0\") pod \"dnsmasq-dns-75dbb546bf-jtvzp\" (UID: \"a226a0fa-ed83-40e1-933e-af4c16c363b2\") " pod="openstack/dnsmasq-dns-75dbb546bf-jtvzp" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.101833 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/865625ee-ff29-4253-9398-c497da20c784-logs\") pod \"barbican-api-84c89c8f4-klmnp\" (UID: \"865625ee-ff29-4253-9398-c497da20c784\") " pod="openstack/barbican-api-84c89c8f4-klmnp" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.101855 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f94fda4-49d1-4ca0-b5d0-e062ce94a042-config-data\") pod \"cinder-api-0\" (UID: \"0f94fda4-49d1-4ca0-b5d0-e062ce94a042\") " pod="openstack/cinder-api-0" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.101889 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh5wr\" (UniqueName: \"kubernetes.io/projected/865625ee-ff29-4253-9398-c497da20c784-kube-api-access-vh5wr\") pod \"barbican-api-84c89c8f4-klmnp\" (UID: \"865625ee-ff29-4253-9398-c497da20c784\") " pod="openstack/barbican-api-84c89c8f4-klmnp" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.101911 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a226a0fa-ed83-40e1-933e-af4c16c363b2-ovsdbserver-sb\") pod \"dnsmasq-dns-75dbb546bf-jtvzp\" (UID: \"a226a0fa-ed83-40e1-933e-af4c16c363b2\") " pod="openstack/dnsmasq-dns-75dbb546bf-jtvzp" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.101928 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mctm2\" (UniqueName: \"kubernetes.io/projected/0f94fda4-49d1-4ca0-b5d0-e062ce94a042-kube-api-access-mctm2\") pod \"cinder-api-0\" (UID: \"0f94fda4-49d1-4ca0-b5d0-e062ce94a042\") " pod="openstack/cinder-api-0" Jan 09 13:51:04 crc 
kubenswrapper[4919]: I0109 13:51:04.101961 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/865625ee-ff29-4253-9398-c497da20c784-combined-ca-bundle\") pod \"barbican-api-84c89c8f4-klmnp\" (UID: \"865625ee-ff29-4253-9398-c497da20c784\") " pod="openstack/barbican-api-84c89c8f4-klmnp" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.102019 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f94fda4-49d1-4ca0-b5d0-e062ce94a042-config-data-custom\") pod \"cinder-api-0\" (UID: \"0f94fda4-49d1-4ca0-b5d0-e062ce94a042\") " pod="openstack/cinder-api-0" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.102045 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f94fda4-49d1-4ca0-b5d0-e062ce94a042-logs\") pod \"cinder-api-0\" (UID: \"0f94fda4-49d1-4ca0-b5d0-e062ce94a042\") " pod="openstack/cinder-api-0" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.102072 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0f94fda4-49d1-4ca0-b5d0-e062ce94a042-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0f94fda4-49d1-4ca0-b5d0-e062ce94a042\") " pod="openstack/cinder-api-0" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.102131 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a226a0fa-ed83-40e1-933e-af4c16c363b2-config\") pod \"dnsmasq-dns-75dbb546bf-jtvzp\" (UID: \"a226a0fa-ed83-40e1-933e-af4c16c363b2\") " pod="openstack/dnsmasq-dns-75dbb546bf-jtvzp" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.102153 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/865625ee-ff29-4253-9398-c497da20c784-config-data\") pod \"barbican-api-84c89c8f4-klmnp\" (UID: \"865625ee-ff29-4253-9398-c497da20c784\") " pod="openstack/barbican-api-84c89c8f4-klmnp" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.102176 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a226a0fa-ed83-40e1-933e-af4c16c363b2-ovsdbserver-nb\") pod \"dnsmasq-dns-75dbb546bf-jtvzp\" (UID: \"a226a0fa-ed83-40e1-933e-af4c16c363b2\") " pod="openstack/dnsmasq-dns-75dbb546bf-jtvzp" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.102203 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdkkn\" (UniqueName: \"kubernetes.io/projected/a226a0fa-ed83-40e1-933e-af4c16c363b2-kube-api-access-wdkkn\") pod \"dnsmasq-dns-75dbb546bf-jtvzp\" (UID: \"a226a0fa-ed83-40e1-933e-af4c16c363b2\") " pod="openstack/dnsmasq-dns-75dbb546bf-jtvzp" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.102238 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a226a0fa-ed83-40e1-933e-af4c16c363b2-dns-svc\") pod \"dnsmasq-dns-75dbb546bf-jtvzp\" (UID: \"a226a0fa-ed83-40e1-933e-af4c16c363b2\") " pod="openstack/dnsmasq-dns-75dbb546bf-jtvzp" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.103030 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/a226a0fa-ed83-40e1-933e-af4c16c363b2-dns-svc\") pod \"dnsmasq-dns-75dbb546bf-jtvzp\" (UID: \"a226a0fa-ed83-40e1-933e-af4c16c363b2\") " pod="openstack/dnsmasq-dns-75dbb546bf-jtvzp" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.104440 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/865625ee-ff29-4253-9398-c497da20c784-logs\") pod \"barbican-api-84c89c8f4-klmnp\" (UID: \"865625ee-ff29-4253-9398-c497da20c784\") " pod="openstack/barbican-api-84c89c8f4-klmnp" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.105094 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a226a0fa-ed83-40e1-933e-af4c16c363b2-dns-swift-storage-0\") pod \"dnsmasq-dns-75dbb546bf-jtvzp\" (UID: \"a226a0fa-ed83-40e1-933e-af4c16c363b2\") " pod="openstack/dnsmasq-dns-75dbb546bf-jtvzp" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.105681 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a226a0fa-ed83-40e1-933e-af4c16c363b2-config\") pod \"dnsmasq-dns-75dbb546bf-jtvzp\" (UID: \"a226a0fa-ed83-40e1-933e-af4c16c363b2\") " pod="openstack/dnsmasq-dns-75dbb546bf-jtvzp" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.109573 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/865625ee-ff29-4253-9398-c497da20c784-config-data-custom\") pod \"barbican-api-84c89c8f4-klmnp\" (UID: \"865625ee-ff29-4253-9398-c497da20c784\") " pod="openstack/barbican-api-84c89c8f4-klmnp" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.109845 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a226a0fa-ed83-40e1-933e-af4c16c363b2-ovsdbserver-nb\") pod \"dnsmasq-dns-75dbb546bf-jtvzp\" (UID: \"a226a0fa-ed83-40e1-933e-af4c16c363b2\") " pod="openstack/dnsmasq-dns-75dbb546bf-jtvzp" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.111545 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/865625ee-ff29-4253-9398-c497da20c784-combined-ca-bundle\") pod \"barbican-api-84c89c8f4-klmnp\" (UID: \"865625ee-ff29-4253-9398-c497da20c784\") " pod="openstack/barbican-api-84c89c8f4-klmnp" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.117320 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/865625ee-ff29-4253-9398-c497da20c784-config-data\") pod \"barbican-api-84c89c8f4-klmnp\" (UID: \"865625ee-ff29-4253-9398-c497da20c784\") " pod="openstack/barbican-api-84c89c8f4-klmnp" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.131514 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a226a0fa-ed83-40e1-933e-af4c16c363b2-ovsdbserver-sb\") pod \"dnsmasq-dns-75dbb546bf-jtvzp\" (UID: \"a226a0fa-ed83-40e1-933e-af4c16c363b2\") " pod="openstack/dnsmasq-dns-75dbb546bf-jtvzp" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.137151 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdkkn\" (UniqueName: \"kubernetes.io/projected/a226a0fa-ed83-40e1-933e-af4c16c363b2-kube-api-access-wdkkn\") pod \"dnsmasq-dns-75dbb546bf-jtvzp\" (UID: 
\"a226a0fa-ed83-40e1-933e-af4c16c363b2\") " pod="openstack/dnsmasq-dns-75dbb546bf-jtvzp" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.152206 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh5wr\" (UniqueName: \"kubernetes.io/projected/865625ee-ff29-4253-9398-c497da20c784-kube-api-access-vh5wr\") pod \"barbican-api-84c89c8f4-klmnp\" (UID: \"865625ee-ff29-4253-9398-c497da20c784\") " pod="openstack/barbican-api-84c89c8f4-klmnp" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.207879 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f94fda4-49d1-4ca0-b5d0-e062ce94a042-scripts\") pod \"cinder-api-0\" (UID: \"0f94fda4-49d1-4ca0-b5d0-e062ce94a042\") " pod="openstack/cinder-api-0" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.207921 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f94fda4-49d1-4ca0-b5d0-e062ce94a042-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0f94fda4-49d1-4ca0-b5d0-e062ce94a042\") " pod="openstack/cinder-api-0" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.207957 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f94fda4-49d1-4ca0-b5d0-e062ce94a042-config-data\") pod \"cinder-api-0\" (UID: \"0f94fda4-49d1-4ca0-b5d0-e062ce94a042\") " pod="openstack/cinder-api-0" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.207997 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mctm2\" (UniqueName: \"kubernetes.io/projected/0f94fda4-49d1-4ca0-b5d0-e062ce94a042-kube-api-access-mctm2\") pod \"cinder-api-0\" (UID: \"0f94fda4-49d1-4ca0-b5d0-e062ce94a042\") " pod="openstack/cinder-api-0" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.208228 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f94fda4-49d1-4ca0-b5d0-e062ce94a042-config-data-custom\") pod \"cinder-api-0\" (UID: \"0f94fda4-49d1-4ca0-b5d0-e062ce94a042\") " pod="openstack/cinder-api-0" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.208263 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f94fda4-49d1-4ca0-b5d0-e062ce94a042-logs\") pod \"cinder-api-0\" (UID: \"0f94fda4-49d1-4ca0-b5d0-e062ce94a042\") " pod="openstack/cinder-api-0" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.208298 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0f94fda4-49d1-4ca0-b5d0-e062ce94a042-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0f94fda4-49d1-4ca0-b5d0-e062ce94a042\") " pod="openstack/cinder-api-0" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.208637 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.212569 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f94fda4-49d1-4ca0-b5d0-e062ce94a042-logs\") pod \"cinder-api-0\" (UID: \"0f94fda4-49d1-4ca0-b5d0-e062ce94a042\") " pod="openstack/cinder-api-0" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.213097 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f94fda4-49d1-4ca0-b5d0-e062ce94a042-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0f94fda4-49d1-4ca0-b5d0-e062ce94a042\") " pod="openstack/cinder-api-0" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.213194 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0f94fda4-49d1-4ca0-b5d0-e062ce94a042-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0f94fda4-49d1-4ca0-b5d0-e062ce94a042\") " pod="openstack/cinder-api-0" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.223543 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f94fda4-49d1-4ca0-b5d0-e062ce94a042-config-data\") pod \"cinder-api-0\" (UID: \"0f94fda4-49d1-4ca0-b5d0-e062ce94a042\") " pod="openstack/cinder-api-0" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.223804 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f94fda4-49d1-4ca0-b5d0-e062ce94a042-scripts\") pod \"cinder-api-0\" (UID: \"0f94fda4-49d1-4ca0-b5d0-e062ce94a042\") " pod="openstack/cinder-api-0" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.224788 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f94fda4-49d1-4ca0-b5d0-e062ce94a042-config-data-custom\") pod \"cinder-api-0\" (UID: \"0f94fda4-49d1-4ca0-b5d0-e062ce94a042\") " pod="openstack/cinder-api-0" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.238486 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mctm2\" (UniqueName: \"kubernetes.io/projected/0f94fda4-49d1-4ca0-b5d0-e062ce94a042-kube-api-access-mctm2\") pod \"cinder-api-0\" (UID: \"0f94fda4-49d1-4ca0-b5d0-e062ce94a042\") " pod="openstack/cinder-api-0" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.300606 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75dbb546bf-jtvzp" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.341912 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-84c89c8f4-klmnp" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.355662 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.610787 4919 generic.go:334] "Generic (PLEG): container finished" podID="ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2" containerID="e9e27490ca5cceadd32c796cb2dfb1ec9b49b2b17c3d9a47c725454b662ce14f" exitCode=0 Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.611322 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66cdd4b5b5-x2c66" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.612286 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2","Type":"ContainerDied","Data":"e9e27490ca5cceadd32c796cb2dfb1ec9b49b2b17c3d9a47c725454b662ce14f"} Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.616878 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5bc67fd74-frwbh"] Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.644626 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66cdd4b5b5-x2c66" Jan 09 13:51:04 crc kubenswrapper[4919]: W0109 13:51:04.665146 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe2245b9_76ae_4599_ba6a_97e327453f95.slice/crio-24094fd6004635215054d92d5f06031a337f90c5c0f0ae6d01ea5e8330d38326 WatchSource:0}: Error finding container 24094fd6004635215054d92d5f06031a337f90c5c0f0ae6d01ea5e8330d38326: Status 404 returned error can't find the container with id 24094fd6004635215054d92d5f06031a337f90c5c0f0ae6d01ea5e8330d38326 Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.728844 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xpkj\" (UniqueName: \"kubernetes.io/projected/ea780d2d-30ee-485d-9077-39c6f364d5a3-kube-api-access-5xpkj\") pod \"ea780d2d-30ee-485d-9077-39c6f364d5a3\" (UID: \"ea780d2d-30ee-485d-9077-39c6f364d5a3\") " Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.729045 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ea780d2d-30ee-485d-9077-39c6f364d5a3-dns-swift-storage-0\") pod \"ea780d2d-30ee-485d-9077-39c6f364d5a3\" (UID: \"ea780d2d-30ee-485d-9077-39c6f364d5a3\") " Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.729078 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea780d2d-30ee-485d-9077-39c6f364d5a3-ovsdbserver-sb\") pod \"ea780d2d-30ee-485d-9077-39c6f364d5a3\" (UID: \"ea780d2d-30ee-485d-9077-39c6f364d5a3\") " Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.729115 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea780d2d-30ee-485d-9077-39c6f364d5a3-config\") pod \"ea780d2d-30ee-485d-9077-39c6f364d5a3\" (UID: \"ea780d2d-30ee-485d-9077-39c6f364d5a3\") " Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.729139 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea780d2d-30ee-485d-9077-39c6f364d5a3-ovsdbserver-nb\") pod \"ea780d2d-30ee-485d-9077-39c6f364d5a3\" (UID: \"ea780d2d-30ee-485d-9077-39c6f364d5a3\") " Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.729169 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea780d2d-30ee-485d-9077-39c6f364d5a3-dns-svc\") pod \"ea780d2d-30ee-485d-9077-39c6f364d5a3\" (UID: \"ea780d2d-30ee-485d-9077-39c6f364d5a3\") " Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.730061 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/ea780d2d-30ee-485d-9077-39c6f364d5a3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ea780d2d-30ee-485d-9077-39c6f364d5a3" (UID: "ea780d2d-30ee-485d-9077-39c6f364d5a3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.730402 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea780d2d-30ee-485d-9077-39c6f364d5a3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ea780d2d-30ee-485d-9077-39c6f364d5a3" (UID: "ea780d2d-30ee-485d-9077-39c6f364d5a3"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.731336 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea780d2d-30ee-485d-9077-39c6f364d5a3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ea780d2d-30ee-485d-9077-39c6f364d5a3" (UID: "ea780d2d-30ee-485d-9077-39c6f364d5a3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.731621 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea780d2d-30ee-485d-9077-39c6f364d5a3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ea780d2d-30ee-485d-9077-39c6f364d5a3" (UID: "ea780d2d-30ee-485d-9077-39c6f364d5a3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.731647 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea780d2d-30ee-485d-9077-39c6f364d5a3-config" (OuterVolumeSpecName: "config") pod "ea780d2d-30ee-485d-9077-39c6f364d5a3" (UID: "ea780d2d-30ee-485d-9077-39c6f364d5a3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.745620 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea780d2d-30ee-485d-9077-39c6f364d5a3-kube-api-access-5xpkj" (OuterVolumeSpecName: "kube-api-access-5xpkj") pod "ea780d2d-30ee-485d-9077-39c6f364d5a3" (UID: "ea780d2d-30ee-485d-9077-39c6f364d5a3"). InnerVolumeSpecName "kube-api-access-5xpkj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.820815 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5d7884df69-vfc9g"] Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.831944 4919 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea780d2d-30ee-485d-9077-39c6f364d5a3-config\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.831992 4919 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea780d2d-30ee-485d-9077-39c6f364d5a3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.832012 4919 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea780d2d-30ee-485d-9077-39c6f364d5a3-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.832024 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xpkj\" (UniqueName: \"kubernetes.io/projected/ea780d2d-30ee-485d-9077-39c6f364d5a3-kube-api-access-5xpkj\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.832036 4919 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ea780d2d-30ee-485d-9077-39c6f364d5a3-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.832049 4919 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea780d2d-30ee-485d-9077-39c6f364d5a3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:04 crc kubenswrapper[4919]: I0109 13:51:04.836451 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 09 13:51:05 crc kubenswrapper[4919]: I0109 13:51:05.093271 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-jtvzp"] Jan 09 13:51:05 crc kubenswrapper[4919]: I0109 13:51:05.190138 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-84c89c8f4-klmnp"] Jan 09 13:51:05 crc kubenswrapper[4919]: W0109 13:51:05.195458 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ae9b2b4_5dee_45e6_8eb4_2160ea8812b9.slice/crio-bc1eb1a7370d78a8f8852a6abe089c6b97fe388f09dfd480217b6e2a8e6a76d0 WatchSource:0}: Error finding container bc1eb1a7370d78a8f8852a6abe089c6b97fe388f09dfd480217b6e2a8e6a76d0: Status 404 returned error can't find the container with id bc1eb1a7370d78a8f8852a6abe089c6b97fe388f09dfd480217b6e2a8e6a76d0 Jan 09 13:51:05 crc kubenswrapper[4919]: I0109 13:51:05.218343 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 09 13:51:05 crc kubenswrapper[4919]: I0109 13:51:05.649867 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9","Type":"ContainerStarted","Data":"bc1eb1a7370d78a8f8852a6abe089c6b97fe388f09dfd480217b6e2a8e6a76d0"} Jan 09 13:51:05 crc kubenswrapper[4919]: I0109 13:51:05.658576 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"0f94fda4-49d1-4ca0-b5d0-e062ce94a042","Type":"ContainerStarted","Data":"5995374174e3360d29655ad85864965f959572e809fdbb591d312ffd578ee74c"} Jan 09 13:51:05 crc kubenswrapper[4919]: I0109 13:51:05.664651 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5d7884df69-vfc9g" event={"ID":"15fcc721-300d-4084-9fbe-756903a4f58b","Type":"ContainerStarted","Data":"70a46586c182f04015ec36763b22eea2917bb9e29c040181f3fea53a3a7a3637"} Jan 09 13:51:05 crc kubenswrapper[4919]: I0109 13:51:05.679408 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-84c89c8f4-klmnp" event={"ID":"865625ee-ff29-4253-9398-c497da20c784","Type":"ContainerStarted","Data":"68a6f027203bcb9adb1da5b237987a47d650ae659caaff0409dfb6b315ed2c70"} Jan 09 13:51:05 crc kubenswrapper[4919]: I0109 13:51:05.679469 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-84c89c8f4-klmnp" event={"ID":"865625ee-ff29-4253-9398-c497da20c784","Type":"ContainerStarted","Data":"cefadf531b652c8ba701e9d027038a18036b25de8369d7f404ee7bfa08d0768d"} Jan 09 13:51:05 crc kubenswrapper[4919]: I0109 13:51:05.683976 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 09 13:51:05 crc kubenswrapper[4919]: I0109 13:51:05.694878 4919 generic.go:334] "Generic (PLEG): container finished" podID="a226a0fa-ed83-40e1-933e-af4c16c363b2" containerID="d1d1a27bd8f462c61b17b0ad36f6ab28c0277aa69f5a2c1be1a558f291239aa2" exitCode=0 Jan 09 13:51:05 crc kubenswrapper[4919]: I0109 13:51:05.695004 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dbb546bf-jtvzp" event={"ID":"a226a0fa-ed83-40e1-933e-af4c16c363b2","Type":"ContainerDied","Data":"d1d1a27bd8f462c61b17b0ad36f6ab28c0277aa69f5a2c1be1a558f291239aa2"} Jan 09 13:51:05 crc kubenswrapper[4919]: I0109 13:51:05.695035 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dbb546bf-jtvzp" event={"ID":"a226a0fa-ed83-40e1-933e-af4c16c363b2","Type":"ContainerStarted","Data":"4e042c2a1a5bb0c20a6afdb4d04f58cf0a76a6defae24013861591df4ff7f82f"} Jan 09 13:51:05 crc kubenswrapper[4919]: I0109 13:51:05.703777 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5bc67fd74-frwbh" event={"ID":"be2245b9-76ae-4599-ba6a-97e327453f95","Type":"ContainerStarted","Data":"24094fd6004635215054d92d5f06031a337f90c5c0f0ae6d01ea5e8330d38326"} Jan 09 13:51:05 crc kubenswrapper[4919]: I0109 13:51:05.703811 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66cdd4b5b5-x2c66" Jan 09 13:51:05 crc kubenswrapper[4919]: I0109 13:51:05.810492 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66cdd4b5b5-x2c66"] Jan 09 13:51:05 crc kubenswrapper[4919]: I0109 13:51:05.823525 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-66cdd4b5b5-x2c66"] Jan 09 13:51:06 crc kubenswrapper[4919]: I0109 13:51:06.714448 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0f94fda4-49d1-4ca0-b5d0-e062ce94a042","Type":"ContainerStarted","Data":"911237b09a8cd216927ca1c0d2850984fa455b480b51d6735678dbfdf13f631e"} Jan 09 13:51:06 crc kubenswrapper[4919]: I0109 13:51:06.719388 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-84c89c8f4-klmnp" event={"ID":"865625ee-ff29-4253-9398-c497da20c784","Type":"ContainerStarted","Data":"1952208d789073252e6b5946b90b309d41be21ccb4e89c5b17482e2f673f6c86"} Jan 09 13:51:06 crc kubenswrapper[4919]: I0109 13:51:06.719512 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-84c89c8f4-klmnp" Jan 09 13:51:06 crc kubenswrapper[4919]: I0109 13:51:06.719543 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-84c89c8f4-klmnp" Jan 09 13:51:06 crc kubenswrapper[4919]: I0109 13:51:06.724309 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dbb546bf-jtvzp" event={"ID":"a226a0fa-ed83-40e1-933e-af4c16c363b2","Type":"ContainerStarted","Data":"644aa489382e29879af00ec8bbe5d0a2e65f378ec2dc65596c6c480e91714ac7"} Jan 09 13:51:06 crc kubenswrapper[4919]: I0109 13:51:06.724476 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75dbb546bf-jtvzp" Jan 09 13:51:06 crc kubenswrapper[4919]: I0109 13:51:06.752823 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-84c89c8f4-klmnp" podStartSLOduration=3.752806413 podStartE2EDuration="3.752806413s" podCreationTimestamp="2026-01-09 13:51:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:51:06.740721522 +0000 UTC m=+1246.288560972" watchObservedRunningTime="2026-01-09 13:51:06.752806413 +0000 UTC m=+1246.300645863" Jan 09 13:51:06 crc kubenswrapper[4919]: I0109 13:51:06.768626 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75dbb546bf-jtvzp" podStartSLOduration=3.7686082770000002 podStartE2EDuration="3.768608277s" podCreationTimestamp="2026-01-09 13:51:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:51:06.760664719 +0000 UTC m=+1246.308504179" watchObservedRunningTime="2026-01-09 13:51:06.768608277 +0000 UTC m=+1246.316447727" Jan 09 13:51:06 crc kubenswrapper[4919]: I0109 13:51:06.773476 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea780d2d-30ee-485d-9077-39c6f364d5a3" path="/var/lib/kubelet/pods/ea780d2d-30ee-485d-9077-39c6f364d5a3/volumes" Jan 09 13:51:06 crc kubenswrapper[4919]: I0109 13:51:06.924159 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 09 13:51:06 crc kubenswrapper[4919]: I0109 13:51:06.924374 4919 prober_manager.go:312] "Failed to trigger a 
manual run" probe="Readiness" Jan 09 13:51:06 crc kubenswrapper[4919]: I0109 13:51:06.927534 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 09 13:51:07 crc kubenswrapper[4919]: I0109 13:51:07.264038 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-75dd96cc4d-xnspb" Jan 09 13:51:07 crc kubenswrapper[4919]: I0109 13:51:07.268971 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7bdd978ccd-tx6fx" Jan 09 13:51:07 crc kubenswrapper[4919]: I0109 13:51:07.739921 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="0f94fda4-49d1-4ca0-b5d0-e062ce94a042" containerName="cinder-api-log" containerID="cri-o://911237b09a8cd216927ca1c0d2850984fa455b480b51d6735678dbfdf13f631e" gracePeriod=30 Jan 09 13:51:07 crc kubenswrapper[4919]: I0109 13:51:07.740355 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0f94fda4-49d1-4ca0-b5d0-e062ce94a042","Type":"ContainerStarted","Data":"48a93ada7ec79c7a3f09e526a29cd47362528c04264610763f89ec35b60abe11"} Jan 09 13:51:07 crc kubenswrapper[4919]: I0109 13:51:07.740523 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="0f94fda4-49d1-4ca0-b5d0-e062ce94a042" containerName="cinder-api" containerID="cri-o://48a93ada7ec79c7a3f09e526a29cd47362528c04264610763f89ec35b60abe11" gracePeriod=30 Jan 09 13:51:07 crc kubenswrapper[4919]: I0109 13:51:07.740847 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 09 13:51:07 crc kubenswrapper[4919]: I0109 13:51:07.769462 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.769436119 podStartE2EDuration="4.769436119s" podCreationTimestamp="2026-01-09 13:51:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:51:07.761703426 +0000 UTC m=+1247.309542886" watchObservedRunningTime="2026-01-09 13:51:07.769436119 +0000 UTC m=+1247.317275569" Jan 09 13:51:08 crc kubenswrapper[4919]: I0109 13:51:08.449353 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 09 13:51:08 crc kubenswrapper[4919]: I0109 13:51:08.552053 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f94fda4-49d1-4ca0-b5d0-e062ce94a042-config-data\") pod \"0f94fda4-49d1-4ca0-b5d0-e062ce94a042\" (UID: \"0f94fda4-49d1-4ca0-b5d0-e062ce94a042\") " Jan 09 13:51:08 crc kubenswrapper[4919]: I0109 13:51:08.552155 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f94fda4-49d1-4ca0-b5d0-e062ce94a042-combined-ca-bundle\") pod \"0f94fda4-49d1-4ca0-b5d0-e062ce94a042\" (UID: \"0f94fda4-49d1-4ca0-b5d0-e062ce94a042\") " Jan 09 13:51:08 crc kubenswrapper[4919]: I0109 13:51:08.552239 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0f94fda4-49d1-4ca0-b5d0-e062ce94a042-etc-machine-id\") pod \"0f94fda4-49d1-4ca0-b5d0-e062ce94a042\" (UID: \"0f94fda4-49d1-4ca0-b5d0-e062ce94a042\") " Jan 09 13:51:08 crc kubenswrapper[4919]: I0109 13:51:08.552287 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mctm2\" (UniqueName: \"kubernetes.io/projected/0f94fda4-49d1-4ca0-b5d0-e062ce94a042-kube-api-access-mctm2\") pod \"0f94fda4-49d1-4ca0-b5d0-e062ce94a042\" (UID: \"0f94fda4-49d1-4ca0-b5d0-e062ce94a042\") " Jan 09 13:51:08 crc kubenswrapper[4919]: I0109 13:51:08.552395 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f94fda4-49d1-4ca0-b5d0-e062ce94a042-scripts\") pod \"0f94fda4-49d1-4ca0-b5d0-e062ce94a042\" (UID: \"0f94fda4-49d1-4ca0-b5d0-e062ce94a042\") " Jan 09 13:51:08 crc kubenswrapper[4919]: I0109 13:51:08.552460 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f94fda4-49d1-4ca0-b5d0-e062ce94a042-config-data-custom\") pod \"0f94fda4-49d1-4ca0-b5d0-e062ce94a042\" (UID: \"0f94fda4-49d1-4ca0-b5d0-e062ce94a042\") " Jan 09 13:51:08 crc kubenswrapper[4919]: I0109 13:51:08.552525 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f94fda4-49d1-4ca0-b5d0-e062ce94a042-logs\") pod \"0f94fda4-49d1-4ca0-b5d0-e062ce94a042\" (UID: \"0f94fda4-49d1-4ca0-b5d0-e062ce94a042\") " Jan 09 13:51:08 crc kubenswrapper[4919]: I0109 13:51:08.553164 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f94fda4-49d1-4ca0-b5d0-e062ce94a042-logs" (OuterVolumeSpecName: "logs") pod "0f94fda4-49d1-4ca0-b5d0-e062ce94a042" (UID: "0f94fda4-49d1-4ca0-b5d0-e062ce94a042"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:51:08 crc kubenswrapper[4919]: I0109 13:51:08.553628 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f94fda4-49d1-4ca0-b5d0-e062ce94a042-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "0f94fda4-49d1-4ca0-b5d0-e062ce94a042" (UID: "0f94fda4-49d1-4ca0-b5d0-e062ce94a042"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 13:51:08 crc kubenswrapper[4919]: I0109 13:51:08.560625 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f94fda4-49d1-4ca0-b5d0-e062ce94a042-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0f94fda4-49d1-4ca0-b5d0-e062ce94a042" (UID: "0f94fda4-49d1-4ca0-b5d0-e062ce94a042"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:51:08 crc kubenswrapper[4919]: I0109 13:51:08.561734 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f94fda4-49d1-4ca0-b5d0-e062ce94a042-kube-api-access-mctm2" (OuterVolumeSpecName: "kube-api-access-mctm2") pod "0f94fda4-49d1-4ca0-b5d0-e062ce94a042" (UID: "0f94fda4-49d1-4ca0-b5d0-e062ce94a042"). InnerVolumeSpecName "kube-api-access-mctm2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:51:08 crc kubenswrapper[4919]: I0109 13:51:08.568914 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f94fda4-49d1-4ca0-b5d0-e062ce94a042-scripts" (OuterVolumeSpecName: "scripts") pod "0f94fda4-49d1-4ca0-b5d0-e062ce94a042" (UID: "0f94fda4-49d1-4ca0-b5d0-e062ce94a042"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:51:08 crc kubenswrapper[4919]: I0109 13:51:08.597759 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f94fda4-49d1-4ca0-b5d0-e062ce94a042-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0f94fda4-49d1-4ca0-b5d0-e062ce94a042" (UID: "0f94fda4-49d1-4ca0-b5d0-e062ce94a042"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:51:08 crc kubenswrapper[4919]: I0109 13:51:08.619673 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f94fda4-49d1-4ca0-b5d0-e062ce94a042-config-data" (OuterVolumeSpecName: "config-data") pod "0f94fda4-49d1-4ca0-b5d0-e062ce94a042" (UID: "0f94fda4-49d1-4ca0-b5d0-e062ce94a042"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:51:08 crc kubenswrapper[4919]: I0109 13:51:08.655599 4919 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0f94fda4-49d1-4ca0-b5d0-e062ce94a042-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:08 crc kubenswrapper[4919]: I0109 13:51:08.655649 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mctm2\" (UniqueName: \"kubernetes.io/projected/0f94fda4-49d1-4ca0-b5d0-e062ce94a042-kube-api-access-mctm2\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:08 crc kubenswrapper[4919]: I0109 13:51:08.655683 4919 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f94fda4-49d1-4ca0-b5d0-e062ce94a042-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:08 crc kubenswrapper[4919]: I0109 13:51:08.655694 4919 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f94fda4-49d1-4ca0-b5d0-e062ce94a042-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:08 crc kubenswrapper[4919]: I0109 13:51:08.655703 4919 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f94fda4-49d1-4ca0-b5d0-e062ce94a042-logs\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:08 crc kubenswrapper[4919]: I0109 13:51:08.655712 4919 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f94fda4-49d1-4ca0-b5d0-e062ce94a042-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:08 crc kubenswrapper[4919]: I0109 13:51:08.655724 4919 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f94fda4-49d1-4ca0-b5d0-e062ce94a042-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:08 crc kubenswrapper[4919]: I0109 13:51:08.803644 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6cb99dd7c6-gp5c6" Jan 09 13:51:08 crc kubenswrapper[4919]: I0109 13:51:08.805008 4919 generic.go:334] "Generic (PLEG): container finished" podID="0f94fda4-49d1-4ca0-b5d0-e062ce94a042" containerID="48a93ada7ec79c7a3f09e526a29cd47362528c04264610763f89ec35b60abe11" exitCode=0 Jan 09 13:51:08 crc kubenswrapper[4919]: I0109 13:51:08.805027 4919 generic.go:334] "Generic (PLEG): container finished" podID="0f94fda4-49d1-4ca0-b5d0-e062ce94a042" containerID="911237b09a8cd216927ca1c0d2850984fa455b480b51d6735678dbfdf13f631e" exitCode=143 Jan 09 13:51:08 crc kubenswrapper[4919]: I0109 13:51:08.805081 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0f94fda4-49d1-4ca0-b5d0-e062ce94a042","Type":"ContainerDied","Data":"48a93ada7ec79c7a3f09e526a29cd47362528c04264610763f89ec35b60abe11"} Jan 09 13:51:08 crc kubenswrapper[4919]: I0109 13:51:08.805100 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0f94fda4-49d1-4ca0-b5d0-e062ce94a042","Type":"ContainerDied","Data":"911237b09a8cd216927ca1c0d2850984fa455b480b51d6735678dbfdf13f631e"} Jan 09 13:51:08 crc kubenswrapper[4919]: I0109 13:51:08.805112 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0f94fda4-49d1-4ca0-b5d0-e062ce94a042","Type":"ContainerDied","Data":"5995374174e3360d29655ad85864965f959572e809fdbb591d312ffd578ee74c"} Jan 09 13:51:08 crc kubenswrapper[4919]: I0109 13:51:08.805129 4919 
scope.go:117] "RemoveContainer" containerID="48a93ada7ec79c7a3f09e526a29cd47362528c04264610763f89ec35b60abe11" Jan 09 13:51:08 crc kubenswrapper[4919]: I0109 13:51:08.805245 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 09 13:51:08 crc kubenswrapper[4919]: I0109 13:51:08.901792 4919 scope.go:117] "RemoveContainer" containerID="911237b09a8cd216927ca1c0d2850984fa455b480b51d6735678dbfdf13f631e" Jan 09 13:51:08 crc kubenswrapper[4919]: I0109 13:51:08.902096 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5d7884df69-vfc9g" event={"ID":"15fcc721-300d-4084-9fbe-756903a4f58b","Type":"ContainerStarted","Data":"85747ed8517039cac3b6ede4eae3292188238f70a90210ba00a9f4b431058038"} Jan 09 13:51:08 crc kubenswrapper[4919]: I0109 13:51:08.902162 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5d7884df69-vfc9g" event={"ID":"15fcc721-300d-4084-9fbe-756903a4f58b","Type":"ContainerStarted","Data":"56f767070e2efb1e418d74b73308df27c15da36bd52d1dad0545dbd9aa7322d8"} Jan 09 13:51:08 crc kubenswrapper[4919]: I0109 13:51:08.924537 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 09 13:51:08 crc kubenswrapper[4919]: I0109 13:51:08.962263 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 09 13:51:08 crc kubenswrapper[4919]: I0109 13:51:08.968378 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5bc67fd74-frwbh" event={"ID":"be2245b9-76ae-4599-ba6a-97e327453f95","Type":"ContainerStarted","Data":"0cbb4bccc072b8265374b35af405df456992f2cadd22d5f98f8099393553f92c"} Jan 09 13:51:08 crc kubenswrapper[4919]: I0109 13:51:08.968467 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5bc67fd74-frwbh" event={"ID":"be2245b9-76ae-4599-ba6a-97e327453f95","Type":"ContainerStarted","Data":"2951cd19a8756573a5092241f264e0d456e3ab46426f37ebd4cfd7e571f1fbd3"} Jan 09 13:51:09 crc kubenswrapper[4919]: I0109 13:51:09.003096 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5d7884df69-vfc9g" podStartSLOduration=3.182855663 podStartE2EDuration="6.003069302s" podCreationTimestamp="2026-01-09 13:51:03 +0000 UTC" firstStartedPulling="2026-01-09 13:51:04.878042047 +0000 UTC m=+1244.425881497" lastFinishedPulling="2026-01-09 13:51:07.698255686 +0000 UTC m=+1247.246095136" observedRunningTime="2026-01-09 13:51:08.931749196 +0000 UTC m=+1248.479588646" watchObservedRunningTime="2026-01-09 13:51:09.003069302 +0000 UTC m=+1248.550908772" Jan 09 13:51:09 crc kubenswrapper[4919]: I0109 13:51:09.003719 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 09 13:51:09 crc kubenswrapper[4919]: E0109 13:51:09.004260 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f94fda4-49d1-4ca0-b5d0-e062ce94a042" containerName="cinder-api" Jan 09 13:51:09 crc kubenswrapper[4919]: I0109 13:51:09.004339 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f94fda4-49d1-4ca0-b5d0-e062ce94a042" containerName="cinder-api" Jan 09 13:51:09 crc kubenswrapper[4919]: E0109 13:51:09.004415 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f94fda4-49d1-4ca0-b5d0-e062ce94a042" containerName="cinder-api-log" Jan 09 13:51:09 crc kubenswrapper[4919]: I0109 13:51:09.004477 4919 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0f94fda4-49d1-4ca0-b5d0-e062ce94a042" containerName="cinder-api-log" Jan 09 13:51:09 crc kubenswrapper[4919]: I0109 13:51:09.004824 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f94fda4-49d1-4ca0-b5d0-e062ce94a042" containerName="cinder-api-log" Jan 09 13:51:09 crc kubenswrapper[4919]: I0109 13:51:09.004928 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f94fda4-49d1-4ca0-b5d0-e062ce94a042" containerName="cinder-api" Jan 09 13:51:09 crc kubenswrapper[4919]: I0109 13:51:09.006023 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9","Type":"ContainerStarted","Data":"7e34acba07f14866466b2f7f61731407a12344eef285279b90d6bb474bb933c9"} Jan 09 13:51:09 crc kubenswrapper[4919]: I0109 13:51:09.006222 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 09 13:51:09 crc kubenswrapper[4919]: I0109 13:51:09.011091 4919 scope.go:117] "RemoveContainer" containerID="48a93ada7ec79c7a3f09e526a29cd47362528c04264610763f89ec35b60abe11" Jan 09 13:51:09 crc kubenswrapper[4919]: E0109 13:51:09.030479 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48a93ada7ec79c7a3f09e526a29cd47362528c04264610763f89ec35b60abe11\": container with ID starting with 48a93ada7ec79c7a3f09e526a29cd47362528c04264610763f89ec35b60abe11 not found: ID does not exist" containerID="48a93ada7ec79c7a3f09e526a29cd47362528c04264610763f89ec35b60abe11" Jan 09 13:51:09 crc kubenswrapper[4919]: I0109 13:51:09.030559 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48a93ada7ec79c7a3f09e526a29cd47362528c04264610763f89ec35b60abe11"} err="failed to get container status \"48a93ada7ec79c7a3f09e526a29cd47362528c04264610763f89ec35b60abe11\": rpc error: code = NotFound desc = could not find container \"48a93ada7ec79c7a3f09e526a29cd47362528c04264610763f89ec35b60abe11\": container with ID starting with 48a93ada7ec79c7a3f09e526a29cd47362528c04264610763f89ec35b60abe11 not found: ID does not exist" Jan 09 13:51:09 crc kubenswrapper[4919]: I0109 13:51:09.030601 4919 scope.go:117] "RemoveContainer" containerID="911237b09a8cd216927ca1c0d2850984fa455b480b51d6735678dbfdf13f631e" Jan 09 13:51:09 crc kubenswrapper[4919]: I0109 13:51:09.031872 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 09 13:51:09 crc kubenswrapper[4919]: I0109 13:51:09.032421 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 09 13:51:09 crc kubenswrapper[4919]: I0109 13:51:09.032432 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 09 13:51:09 crc kubenswrapper[4919]: E0109 13:51:09.043842 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"911237b09a8cd216927ca1c0d2850984fa455b480b51d6735678dbfdf13f631e\": container with ID starting with 911237b09a8cd216927ca1c0d2850984fa455b480b51d6735678dbfdf13f631e not found: ID does not exist" containerID="911237b09a8cd216927ca1c0d2850984fa455b480b51d6735678dbfdf13f631e" Jan 09 13:51:09 crc kubenswrapper[4919]: I0109 13:51:09.043913 4919 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"911237b09a8cd216927ca1c0d2850984fa455b480b51d6735678dbfdf13f631e"} err="failed to get container status \"911237b09a8cd216927ca1c0d2850984fa455b480b51d6735678dbfdf13f631e\": rpc error: code = NotFound desc = could not find container \"911237b09a8cd216927ca1c0d2850984fa455b480b51d6735678dbfdf13f631e\": container with ID starting with 911237b09a8cd216927ca1c0d2850984fa455b480b51d6735678dbfdf13f631e not found: ID does not exist" Jan 09 13:51:09 crc kubenswrapper[4919]: I0109 13:51:09.043957 4919 scope.go:117] "RemoveContainer" containerID="48a93ada7ec79c7a3f09e526a29cd47362528c04264610763f89ec35b60abe11" Jan 09 13:51:09 crc kubenswrapper[4919]: I0109 13:51:09.048446 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48a93ada7ec79c7a3f09e526a29cd47362528c04264610763f89ec35b60abe11"} err="failed to get container status \"48a93ada7ec79c7a3f09e526a29cd47362528c04264610763f89ec35b60abe11\": rpc error: code = NotFound desc = could not find container \"48a93ada7ec79c7a3f09e526a29cd47362528c04264610763f89ec35b60abe11\": container with ID starting with 48a93ada7ec79c7a3f09e526a29cd47362528c04264610763f89ec35b60abe11 not found: ID does not exist" Jan 09 13:51:09 crc kubenswrapper[4919]: I0109 13:51:09.049502 4919 scope.go:117] "RemoveContainer" containerID="911237b09a8cd216927ca1c0d2850984fa455b480b51d6735678dbfdf13f631e" Jan 09 13:51:09 crc kubenswrapper[4919]: I0109 13:51:09.054684 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"911237b09a8cd216927ca1c0d2850984fa455b480b51d6735678dbfdf13f631e"} err="failed to get container status \"911237b09a8cd216927ca1c0d2850984fa455b480b51d6735678dbfdf13f631e\": rpc error: code = NotFound desc = could not find container \"911237b09a8cd216927ca1c0d2850984fa455b480b51d6735678dbfdf13f631e\": container with ID starting with 911237b09a8cd216927ca1c0d2850984fa455b480b51d6735678dbfdf13f631e not found: ID does not exist" Jan 09 13:51:09 crc kubenswrapper[4919]: I0109 13:51:09.068704 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0b8d4fb5-64a0-4774-8f0f-273c476d7b81-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0b8d4fb5-64a0-4774-8f0f-273c476d7b81\") " pod="openstack/cinder-api-0" Jan 09 13:51:09 crc kubenswrapper[4919]: I0109 13:51:09.068748 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b8d4fb5-64a0-4774-8f0f-273c476d7b81-config-data\") pod \"cinder-api-0\" (UID: \"0b8d4fb5-64a0-4774-8f0f-273c476d7b81\") " pod="openstack/cinder-api-0" Jan 09 13:51:09 crc kubenswrapper[4919]: I0109 13:51:09.068768 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b8d4fb5-64a0-4774-8f0f-273c476d7b81-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"0b8d4fb5-64a0-4774-8f0f-273c476d7b81\") " pod="openstack/cinder-api-0" Jan 09 13:51:09 crc kubenswrapper[4919]: I0109 13:51:09.068799 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b8d4fb5-64a0-4774-8f0f-273c476d7b81-public-tls-certs\") pod \"cinder-api-0\" (UID: \"0b8d4fb5-64a0-4774-8f0f-273c476d7b81\") " pod="openstack/cinder-api-0" Jan 09 13:51:09 crc kubenswrapper[4919]: I0109 
13:51:09.069081 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b8d4fb5-64a0-4774-8f0f-273c476d7b81-scripts\") pod \"cinder-api-0\" (UID: \"0b8d4fb5-64a0-4774-8f0f-273c476d7b81\") " pod="openstack/cinder-api-0" Jan 09 13:51:09 crc kubenswrapper[4919]: I0109 13:51:09.069176 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk7vx\" (UniqueName: \"kubernetes.io/projected/0b8d4fb5-64a0-4774-8f0f-273c476d7b81-kube-api-access-tk7vx\") pod \"cinder-api-0\" (UID: \"0b8d4fb5-64a0-4774-8f0f-273c476d7b81\") " pod="openstack/cinder-api-0" Jan 09 13:51:09 crc kubenswrapper[4919]: I0109 13:51:09.069484 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b8d4fb5-64a0-4774-8f0f-273c476d7b81-logs\") pod \"cinder-api-0\" (UID: \"0b8d4fb5-64a0-4774-8f0f-273c476d7b81\") " pod="openstack/cinder-api-0" Jan 09 13:51:09 crc kubenswrapper[4919]: I0109 13:51:09.069515 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0b8d4fb5-64a0-4774-8f0f-273c476d7b81-config-data-custom\") pod \"cinder-api-0\" (UID: \"0b8d4fb5-64a0-4774-8f0f-273c476d7b81\") " pod="openstack/cinder-api-0" Jan 09 13:51:09 crc kubenswrapper[4919]: I0109 13:51:09.069582 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b8d4fb5-64a0-4774-8f0f-273c476d7b81-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0b8d4fb5-64a0-4774-8f0f-273c476d7b81\") " pod="openstack/cinder-api-0" Jan 09 13:51:09 crc kubenswrapper[4919]: I0109 13:51:09.108349 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 09 13:51:09 crc kubenswrapper[4919]: I0109 13:51:09.141133 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5bc67fd74-frwbh" podStartSLOduration=3.152570729 podStartE2EDuration="6.141111281s" podCreationTimestamp="2026-01-09 13:51:03 +0000 UTC" firstStartedPulling="2026-01-09 13:51:04.694982647 +0000 UTC m=+1244.242822097" lastFinishedPulling="2026-01-09 13:51:07.683523199 +0000 UTC m=+1247.231362649" observedRunningTime="2026-01-09 13:51:09.015581154 +0000 UTC m=+1248.563420604" watchObservedRunningTime="2026-01-09 13:51:09.141111281 +0000 UTC m=+1248.688950731" Jan 09 13:51:09 crc kubenswrapper[4919]: I0109 13:51:09.172189 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tk7vx\" (UniqueName: \"kubernetes.io/projected/0b8d4fb5-64a0-4774-8f0f-273c476d7b81-kube-api-access-tk7vx\") pod \"cinder-api-0\" (UID: \"0b8d4fb5-64a0-4774-8f0f-273c476d7b81\") " pod="openstack/cinder-api-0" Jan 09 13:51:09 crc kubenswrapper[4919]: I0109 13:51:09.172275 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b8d4fb5-64a0-4774-8f0f-273c476d7b81-logs\") pod \"cinder-api-0\" (UID: \"0b8d4fb5-64a0-4774-8f0f-273c476d7b81\") " pod="openstack/cinder-api-0" Jan 09 13:51:09 crc kubenswrapper[4919]: I0109 13:51:09.172306 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/0b8d4fb5-64a0-4774-8f0f-273c476d7b81-config-data-custom\") pod \"cinder-api-0\" (UID: \"0b8d4fb5-64a0-4774-8f0f-273c476d7b81\") " pod="openstack/cinder-api-0" Jan 09 13:51:09 crc kubenswrapper[4919]: I0109 13:51:09.172348 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b8d4fb5-64a0-4774-8f0f-273c476d7b81-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0b8d4fb5-64a0-4774-8f0f-273c476d7b81\") " pod="openstack/cinder-api-0" Jan 09 13:51:09 crc kubenswrapper[4919]: I0109 13:51:09.172381 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0b8d4fb5-64a0-4774-8f0f-273c476d7b81-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0b8d4fb5-64a0-4774-8f0f-273c476d7b81\") " pod="openstack/cinder-api-0" Jan 09 13:51:09 crc kubenswrapper[4919]: I0109 13:51:09.172404 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b8d4fb5-64a0-4774-8f0f-273c476d7b81-config-data\") pod \"cinder-api-0\" (UID: \"0b8d4fb5-64a0-4774-8f0f-273c476d7b81\") " pod="openstack/cinder-api-0" Jan 09 13:51:09 crc kubenswrapper[4919]: I0109 13:51:09.172425 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b8d4fb5-64a0-4774-8f0f-273c476d7b81-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"0b8d4fb5-64a0-4774-8f0f-273c476d7b81\") " pod="openstack/cinder-api-0" Jan 09 13:51:09 crc kubenswrapper[4919]: I0109 13:51:09.172456 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b8d4fb5-64a0-4774-8f0f-273c476d7b81-public-tls-certs\") pod \"cinder-api-0\" (UID: \"0b8d4fb5-64a0-4774-8f0f-273c476d7b81\") " pod="openstack/cinder-api-0" Jan 09 13:51:09 crc kubenswrapper[4919]: I0109 13:51:09.172512 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b8d4fb5-64a0-4774-8f0f-273c476d7b81-scripts\") pod \"cinder-api-0\" (UID: \"0b8d4fb5-64a0-4774-8f0f-273c476d7b81\") " pod="openstack/cinder-api-0" Jan 09 13:51:09 crc kubenswrapper[4919]: I0109 13:51:09.177188 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b8d4fb5-64a0-4774-8f0f-273c476d7b81-logs\") pod \"cinder-api-0\" (UID: \"0b8d4fb5-64a0-4774-8f0f-273c476d7b81\") " pod="openstack/cinder-api-0" Jan 09 13:51:09 crc kubenswrapper[4919]: I0109 13:51:09.182323 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0b8d4fb5-64a0-4774-8f0f-273c476d7b81-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0b8d4fb5-64a0-4774-8f0f-273c476d7b81\") " pod="openstack/cinder-api-0" Jan 09 13:51:09 crc kubenswrapper[4919]: I0109 13:51:09.195196 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0b8d4fb5-64a0-4774-8f0f-273c476d7b81-config-data-custom\") pod \"cinder-api-0\" (UID: \"0b8d4fb5-64a0-4774-8f0f-273c476d7b81\") " pod="openstack/cinder-api-0" Jan 09 13:51:09 crc kubenswrapper[4919]: I0109 13:51:09.227001 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0b8d4fb5-64a0-4774-8f0f-273c476d7b81-public-tls-certs\") pod \"cinder-api-0\" (UID: \"0b8d4fb5-64a0-4774-8f0f-273c476d7b81\") " pod="openstack/cinder-api-0" Jan 09 13:51:09 crc kubenswrapper[4919]: I0109 13:51:09.227884 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tk7vx\" (UniqueName: \"kubernetes.io/projected/0b8d4fb5-64a0-4774-8f0f-273c476d7b81-kube-api-access-tk7vx\") pod \"cinder-api-0\" (UID: \"0b8d4fb5-64a0-4774-8f0f-273c476d7b81\") " pod="openstack/cinder-api-0" Jan 09 13:51:09 crc kubenswrapper[4919]: I0109 13:51:09.228648 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b8d4fb5-64a0-4774-8f0f-273c476d7b81-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0b8d4fb5-64a0-4774-8f0f-273c476d7b81\") " pod="openstack/cinder-api-0" Jan 09 13:51:09 crc kubenswrapper[4919]: I0109 13:51:09.229602 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b8d4fb5-64a0-4774-8f0f-273c476d7b81-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"0b8d4fb5-64a0-4774-8f0f-273c476d7b81\") " pod="openstack/cinder-api-0" Jan 09 13:51:09 crc kubenswrapper[4919]: I0109 13:51:09.245197 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b8d4fb5-64a0-4774-8f0f-273c476d7b81-config-data\") pod \"cinder-api-0\" (UID: \"0b8d4fb5-64a0-4774-8f0f-273c476d7b81\") " pod="openstack/cinder-api-0" Jan 09 13:51:09 crc kubenswrapper[4919]: I0109 13:51:09.253358 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b8d4fb5-64a0-4774-8f0f-273c476d7b81-scripts\") pod \"cinder-api-0\" (UID: \"0b8d4fb5-64a0-4774-8f0f-273c476d7b81\") " pod="openstack/cinder-api-0" Jan 09 13:51:09 crc kubenswrapper[4919]: I0109 13:51:09.376863 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 09 13:51:09 crc kubenswrapper[4919]: I0109 13:51:09.891806 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7bdd978ccd-tx6fx" Jan 09 13:51:09 crc kubenswrapper[4919]: I0109 13:51:09.913799 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-75dd96cc4d-xnspb" Jan 09 13:51:09 crc kubenswrapper[4919]: I0109 13:51:09.990348 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7bdd978ccd-tx6fx"] Jan 09 13:51:10 crc kubenswrapper[4919]: I0109 13:51:10.019812 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9","Type":"ContainerStarted","Data":"d03d7fe3c39bc512b42091bf070261e0cafca490e15f8566be521e0745b9d93d"} Jan 09 13:51:10 crc kubenswrapper[4919]: I0109 13:51:10.026070 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7bdd978ccd-tx6fx" podUID="158e1b10-ad5e-4a44-a3be-630a2d45bfdc" containerName="horizon" containerID="cri-o://5c819616410e56b0be1791f6160f91f8536c75f61179a540a4f44a261b16ac64" gracePeriod=30 Jan 09 13:51:10 crc kubenswrapper[4919]: I0109 13:51:10.026023 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7bdd978ccd-tx6fx" podUID="158e1b10-ad5e-4a44-a3be-630a2d45bfdc" containerName="horizon-log" containerID="cri-o://6bc02be1c023954fa281e82eccc50a9262899736d9b2a950140c11a70d979153" gracePeriod=30 Jan 09 13:51:10 crc kubenswrapper[4919]: I0109 13:51:10.049655 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.738183413 podStartE2EDuration="7.049625885s" podCreationTimestamp="2026-01-09 13:51:03 +0000 UTC" firstStartedPulling="2026-01-09 13:51:05.199759812 +0000 UTC m=+1244.747599262" lastFinishedPulling="2026-01-09 13:51:06.511202284 +0000 UTC m=+1246.059041734" observedRunningTime="2026-01-09 13:51:10.042795325 +0000 UTC m=+1249.590634775" watchObservedRunningTime="2026-01-09 13:51:10.049625885 +0000 UTC m=+1249.597465335" Jan 09 13:51:10 crc kubenswrapper[4919]: I0109 13:51:10.216435 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 09 13:51:10 crc kubenswrapper[4919]: W0109 13:51:10.227382 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b8d4fb5_64a0_4774_8f0f_273c476d7b81.slice/crio-f213bdeec423eea2ad6dc2a6f963e4b05688ea7a4c743b8dc16fd826416d92ab WatchSource:0}: Error finding container f213bdeec423eea2ad6dc2a6f963e4b05688ea7a4c743b8dc16fd826416d92ab: Status 404 returned error can't find the container with id f213bdeec423eea2ad6dc2a6f963e4b05688ea7a4c743b8dc16fd826416d92ab Jan 09 13:51:10 crc kubenswrapper[4919]: I0109 13:51:10.444561 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-56f5497b64-ws7gk"] Jan 09 13:51:10 crc kubenswrapper[4919]: I0109 13:51:10.446938 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-56f5497b64-ws7gk" Jan 09 13:51:10 crc kubenswrapper[4919]: I0109 13:51:10.452726 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 09 13:51:10 crc kubenswrapper[4919]: I0109 13:51:10.453119 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 09 13:51:10 crc kubenswrapper[4919]: I0109 13:51:10.461464 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-56f5497b64-ws7gk"] Jan 09 13:51:10 crc kubenswrapper[4919]: I0109 13:51:10.612638 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6s7b\" (UniqueName: \"kubernetes.io/projected/f23efa08-cf06-4a61-a081-60b52efe8e8f-kube-api-access-f6s7b\") pod \"barbican-api-56f5497b64-ws7gk\" (UID: \"f23efa08-cf06-4a61-a081-60b52efe8e8f\") " pod="openstack/barbican-api-56f5497b64-ws7gk" Jan 09 13:51:10 crc kubenswrapper[4919]: I0109 13:51:10.612704 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f23efa08-cf06-4a61-a081-60b52efe8e8f-public-tls-certs\") pod \"barbican-api-56f5497b64-ws7gk\" (UID: \"f23efa08-cf06-4a61-a081-60b52efe8e8f\") " pod="openstack/barbican-api-56f5497b64-ws7gk" Jan 09 13:51:10 crc kubenswrapper[4919]: I0109 13:51:10.612737 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f23efa08-cf06-4a61-a081-60b52efe8e8f-internal-tls-certs\") pod \"barbican-api-56f5497b64-ws7gk\" (UID: \"f23efa08-cf06-4a61-a081-60b52efe8e8f\") " pod="openstack/barbican-api-56f5497b64-ws7gk" Jan 09 13:51:10 crc kubenswrapper[4919]: I0109 13:51:10.612789 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f23efa08-cf06-4a61-a081-60b52efe8e8f-config-data\") pod \"barbican-api-56f5497b64-ws7gk\" (UID: \"f23efa08-cf06-4a61-a081-60b52efe8e8f\") " pod="openstack/barbican-api-56f5497b64-ws7gk" Jan 09 13:51:10 crc kubenswrapper[4919]: I0109 13:51:10.612826 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f23efa08-cf06-4a61-a081-60b52efe8e8f-combined-ca-bundle\") pod \"barbican-api-56f5497b64-ws7gk\" (UID: \"f23efa08-cf06-4a61-a081-60b52efe8e8f\") " pod="openstack/barbican-api-56f5497b64-ws7gk" Jan 09 13:51:10 crc kubenswrapper[4919]: I0109 13:51:10.612862 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f23efa08-cf06-4a61-a081-60b52efe8e8f-logs\") pod \"barbican-api-56f5497b64-ws7gk\" (UID: \"f23efa08-cf06-4a61-a081-60b52efe8e8f\") " pod="openstack/barbican-api-56f5497b64-ws7gk" Jan 09 13:51:10 crc kubenswrapper[4919]: I0109 13:51:10.612884 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f23efa08-cf06-4a61-a081-60b52efe8e8f-config-data-custom\") pod \"barbican-api-56f5497b64-ws7gk\" (UID: \"f23efa08-cf06-4a61-a081-60b52efe8e8f\") " pod="openstack/barbican-api-56f5497b64-ws7gk" Jan 09 13:51:10 crc kubenswrapper[4919]: I0109 13:51:10.714507 4919 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f23efa08-cf06-4a61-a081-60b52efe8e8f-logs\") pod \"barbican-api-56f5497b64-ws7gk\" (UID: \"f23efa08-cf06-4a61-a081-60b52efe8e8f\") " pod="openstack/barbican-api-56f5497b64-ws7gk" Jan 09 13:51:10 crc kubenswrapper[4919]: I0109 13:51:10.714579 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f23efa08-cf06-4a61-a081-60b52efe8e8f-config-data-custom\") pod \"barbican-api-56f5497b64-ws7gk\" (UID: \"f23efa08-cf06-4a61-a081-60b52efe8e8f\") " pod="openstack/barbican-api-56f5497b64-ws7gk" Jan 09 13:51:10 crc kubenswrapper[4919]: I0109 13:51:10.714651 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6s7b\" (UniqueName: \"kubernetes.io/projected/f23efa08-cf06-4a61-a081-60b52efe8e8f-kube-api-access-f6s7b\") pod \"barbican-api-56f5497b64-ws7gk\" (UID: \"f23efa08-cf06-4a61-a081-60b52efe8e8f\") " pod="openstack/barbican-api-56f5497b64-ws7gk" Jan 09 13:51:10 crc kubenswrapper[4919]: I0109 13:51:10.714692 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f23efa08-cf06-4a61-a081-60b52efe8e8f-public-tls-certs\") pod \"barbican-api-56f5497b64-ws7gk\" (UID: \"f23efa08-cf06-4a61-a081-60b52efe8e8f\") " pod="openstack/barbican-api-56f5497b64-ws7gk" Jan 09 13:51:10 crc kubenswrapper[4919]: I0109 13:51:10.714733 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f23efa08-cf06-4a61-a081-60b52efe8e8f-internal-tls-certs\") pod \"barbican-api-56f5497b64-ws7gk\" (UID: \"f23efa08-cf06-4a61-a081-60b52efe8e8f\") " pod="openstack/barbican-api-56f5497b64-ws7gk" Jan 09 13:51:10 crc kubenswrapper[4919]: I0109 13:51:10.714787 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f23efa08-cf06-4a61-a081-60b52efe8e8f-config-data\") pod \"barbican-api-56f5497b64-ws7gk\" (UID: \"f23efa08-cf06-4a61-a081-60b52efe8e8f\") " pod="openstack/barbican-api-56f5497b64-ws7gk" Jan 09 13:51:10 crc kubenswrapper[4919]: I0109 13:51:10.714833 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f23efa08-cf06-4a61-a081-60b52efe8e8f-combined-ca-bundle\") pod \"barbican-api-56f5497b64-ws7gk\" (UID: \"f23efa08-cf06-4a61-a081-60b52efe8e8f\") " pod="openstack/barbican-api-56f5497b64-ws7gk" Jan 09 13:51:10 crc kubenswrapper[4919]: I0109 13:51:10.715916 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f23efa08-cf06-4a61-a081-60b52efe8e8f-logs\") pod \"barbican-api-56f5497b64-ws7gk\" (UID: \"f23efa08-cf06-4a61-a081-60b52efe8e8f\") " pod="openstack/barbican-api-56f5497b64-ws7gk" Jan 09 13:51:10 crc kubenswrapper[4919]: I0109 13:51:10.744252 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f23efa08-cf06-4a61-a081-60b52efe8e8f-config-data\") pod \"barbican-api-56f5497b64-ws7gk\" (UID: \"f23efa08-cf06-4a61-a081-60b52efe8e8f\") " pod="openstack/barbican-api-56f5497b64-ws7gk" Jan 09 13:51:10 crc kubenswrapper[4919]: I0109 13:51:10.744853 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f23efa08-cf06-4a61-a081-60b52efe8e8f-public-tls-certs\") pod \"barbican-api-56f5497b64-ws7gk\" (UID: \"f23efa08-cf06-4a61-a081-60b52efe8e8f\") " pod="openstack/barbican-api-56f5497b64-ws7gk" Jan 09 13:51:10 crc kubenswrapper[4919]: I0109 13:51:10.746135 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f23efa08-cf06-4a61-a081-60b52efe8e8f-config-data-custom\") pod \"barbican-api-56f5497b64-ws7gk\" (UID: \"f23efa08-cf06-4a61-a081-60b52efe8e8f\") " pod="openstack/barbican-api-56f5497b64-ws7gk" Jan 09 13:51:10 crc kubenswrapper[4919]: I0109 13:51:10.746603 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f23efa08-cf06-4a61-a081-60b52efe8e8f-internal-tls-certs\") pod \"barbican-api-56f5497b64-ws7gk\" (UID: \"f23efa08-cf06-4a61-a081-60b52efe8e8f\") " pod="openstack/barbican-api-56f5497b64-ws7gk" Jan 09 13:51:10 crc kubenswrapper[4919]: I0109 13:51:10.748990 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f23efa08-cf06-4a61-a081-60b52efe8e8f-combined-ca-bundle\") pod \"barbican-api-56f5497b64-ws7gk\" (UID: \"f23efa08-cf06-4a61-a081-60b52efe8e8f\") " pod="openstack/barbican-api-56f5497b64-ws7gk" Jan 09 13:51:10 crc kubenswrapper[4919]: I0109 13:51:10.751978 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6s7b\" (UniqueName: \"kubernetes.io/projected/f23efa08-cf06-4a61-a081-60b52efe8e8f-kube-api-access-f6s7b\") pod \"barbican-api-56f5497b64-ws7gk\" (UID: \"f23efa08-cf06-4a61-a081-60b52efe8e8f\") " pod="openstack/barbican-api-56f5497b64-ws7gk" Jan 09 13:51:10 crc kubenswrapper[4919]: I0109 13:51:10.783287 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f94fda4-49d1-4ca0-b5d0-e062ce94a042" path="/var/lib/kubelet/pods/0f94fda4-49d1-4ca0-b5d0-e062ce94a042/volumes" Jan 09 13:51:10 crc kubenswrapper[4919]: I0109 13:51:10.798990 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-56f5497b64-ws7gk" Jan 09 13:51:11 crc kubenswrapper[4919]: I0109 13:51:11.053745 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0b8d4fb5-64a0-4774-8f0f-273c476d7b81","Type":"ContainerStarted","Data":"f213bdeec423eea2ad6dc2a6f963e4b05688ea7a4c743b8dc16fd826416d92ab"} Jan 09 13:51:11 crc kubenswrapper[4919]: I0109 13:51:11.287248 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-584b4bc589-6qnkd" Jan 09 13:51:11 crc kubenswrapper[4919]: I0109 13:51:11.384987 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6cb99dd7c6-gp5c6"] Jan 09 13:51:11 crc kubenswrapper[4919]: I0109 13:51:11.385273 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6cb99dd7c6-gp5c6" podUID="1c985555-77df-4e8b-a2b0-f1127eab2886" containerName="neutron-api" containerID="cri-o://a00a2ec12e1bc3bc57fd45a25731877f0802cbccfda89e8813be30d4f8f3fa79" gracePeriod=30 Jan 09 13:51:11 crc kubenswrapper[4919]: I0109 13:51:11.385365 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6cb99dd7c6-gp5c6" podUID="1c985555-77df-4e8b-a2b0-f1127eab2886" containerName="neutron-httpd" containerID="cri-o://7bdba7b03e0f2aa797c7f5e07138ac2ca7bdf750e918fe216718bca831ab6b96" gracePeriod=30 Jan 09 13:51:11 crc kubenswrapper[4919]: I0109 13:51:11.401180 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-56f5497b64-ws7gk"] Jan 09 13:51:12 crc kubenswrapper[4919]: I0109 13:51:12.065606 4919 generic.go:334] "Generic (PLEG): container finished" podID="1c985555-77df-4e8b-a2b0-f1127eab2886" containerID="7bdba7b03e0f2aa797c7f5e07138ac2ca7bdf750e918fe216718bca831ab6b96" exitCode=0 Jan 09 13:51:12 crc kubenswrapper[4919]: I0109 13:51:12.066290 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6cb99dd7c6-gp5c6" event={"ID":"1c985555-77df-4e8b-a2b0-f1127eab2886","Type":"ContainerDied","Data":"7bdba7b03e0f2aa797c7f5e07138ac2ca7bdf750e918fe216718bca831ab6b96"} Jan 09 13:51:12 crc kubenswrapper[4919]: I0109 13:51:12.067883 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0b8d4fb5-64a0-4774-8f0f-273c476d7b81","Type":"ContainerStarted","Data":"8843b67fbd1502997dde7bbb2ea2927253fb76b02ceb970740410fda74fa71c6"} Jan 09 13:51:12 crc kubenswrapper[4919]: I0109 13:51:12.067907 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0b8d4fb5-64a0-4774-8f0f-273c476d7b81","Type":"ContainerStarted","Data":"9fce929042b6dd1b591faf3e3d2ce77753baf80f3106652976c48b9e7b13d2cf"} Jan 09 13:51:12 crc kubenswrapper[4919]: I0109 13:51:12.068913 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 09 13:51:12 crc kubenswrapper[4919]: I0109 13:51:12.074067 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-56f5497b64-ws7gk" event={"ID":"f23efa08-cf06-4a61-a081-60b52efe8e8f","Type":"ContainerStarted","Data":"44fdbad129661ce33ef66f0893197c5d4bae93fed0e2ebdc2e02282b48d21e2b"} Jan 09 13:51:12 crc kubenswrapper[4919]: I0109 13:51:12.074117 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-56f5497b64-ws7gk" event={"ID":"f23efa08-cf06-4a61-a081-60b52efe8e8f","Type":"ContainerStarted","Data":"de1805f6d5d849cbe99e643c65d9f5e79cec9db1a0f0b97221efff7b9d13b634"} Jan 09 
13:51:12 crc kubenswrapper[4919]: I0109 13:51:12.074130 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-56f5497b64-ws7gk" event={"ID":"f23efa08-cf06-4a61-a081-60b52efe8e8f","Type":"ContainerStarted","Data":"af37d1d5c30e4004f3a27a80b4100d76231b07ba8caa3223af022647e5700560"} Jan 09 13:51:12 crc kubenswrapper[4919]: I0109 13:51:12.074301 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-56f5497b64-ws7gk" Jan 09 13:51:12 crc kubenswrapper[4919]: I0109 13:51:12.074345 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-56f5497b64-ws7gk" Jan 09 13:51:12 crc kubenswrapper[4919]: I0109 13:51:12.096547 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.096529458 podStartE2EDuration="4.096529458s" podCreationTimestamp="2026-01-09 13:51:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:51:12.091424581 +0000 UTC m=+1251.639264031" watchObservedRunningTime="2026-01-09 13:51:12.096529458 +0000 UTC m=+1251.644368908" Jan 09 13:51:12 crc kubenswrapper[4919]: I0109 13:51:12.122605 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-56f5497b64-ws7gk" podStartSLOduration=2.122582517 podStartE2EDuration="2.122582517s" podCreationTimestamp="2026-01-09 13:51:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:51:12.111903751 +0000 UTC m=+1251.659743201" watchObservedRunningTime="2026-01-09 13:51:12.122582517 +0000 UTC m=+1251.670421967" Jan 09 13:51:14 crc kubenswrapper[4919]: I0109 13:51:14.100045 4919 generic.go:334] "Generic (PLEG): container finished" podID="158e1b10-ad5e-4a44-a3be-630a2d45bfdc" containerID="5c819616410e56b0be1791f6160f91f8536c75f61179a540a4f44a261b16ac64" exitCode=0 Jan 09 13:51:14 crc kubenswrapper[4919]: I0109 13:51:14.101380 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bdd978ccd-tx6fx" event={"ID":"158e1b10-ad5e-4a44-a3be-630a2d45bfdc","Type":"ContainerDied","Data":"5c819616410e56b0be1791f6160f91f8536c75f61179a540a4f44a261b16ac64"} Jan 09 13:51:14 crc kubenswrapper[4919]: I0109 13:51:14.210278 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 09 13:51:14 crc kubenswrapper[4919]: I0109 13:51:14.302405 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-75dbb546bf-jtvzp" Jan 09 13:51:14 crc kubenswrapper[4919]: I0109 13:51:14.337351 4919 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7bdd978ccd-tx6fx" podUID="158e1b10-ad5e-4a44-a3be-630a2d45bfdc" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused" Jan 09 13:51:14 crc kubenswrapper[4919]: I0109 13:51:14.373118 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-685444497c-xp6jw"] Jan 09 13:51:14 crc kubenswrapper[4919]: I0109 13:51:14.373370 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-685444497c-xp6jw" podUID="4702a56c-301a-472f-b539-aa0873b1bdd1" containerName="dnsmasq-dns" 
containerID="cri-o://3b9ff51c6109c1a844bb3b2a663511d2a48cb682c0273c30dbbdd1400699b566" gracePeriod=10 Jan 09 13:51:14 crc kubenswrapper[4919]: I0109 13:51:14.542694 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 09 13:51:15 crc kubenswrapper[4919]: I0109 13:51:15.003908 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-685444497c-xp6jw" Jan 09 13:51:15 crc kubenswrapper[4919]: I0109 13:51:15.117785 4919 generic.go:334] "Generic (PLEG): container finished" podID="4702a56c-301a-472f-b539-aa0873b1bdd1" containerID="3b9ff51c6109c1a844bb3b2a663511d2a48cb682c0273c30dbbdd1400699b566" exitCode=0 Jan 09 13:51:15 crc kubenswrapper[4919]: I0109 13:51:15.119023 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-685444497c-xp6jw" Jan 09 13:51:15 crc kubenswrapper[4919]: I0109 13:51:15.119344 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685444497c-xp6jw" event={"ID":"4702a56c-301a-472f-b539-aa0873b1bdd1","Type":"ContainerDied","Data":"3b9ff51c6109c1a844bb3b2a663511d2a48cb682c0273c30dbbdd1400699b566"} Jan 09 13:51:15 crc kubenswrapper[4919]: I0109 13:51:15.119395 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685444497c-xp6jw" event={"ID":"4702a56c-301a-472f-b539-aa0873b1bdd1","Type":"ContainerDied","Data":"f769b9bd93f2b33941dcfaede43ec01292f69ea2394de1d0d9df6cdc16919399"} Jan 09 13:51:15 crc kubenswrapper[4919]: I0109 13:51:15.119418 4919 scope.go:117] "RemoveContainer" containerID="3b9ff51c6109c1a844bb3b2a663511d2a48cb682c0273c30dbbdd1400699b566" Jan 09 13:51:15 crc kubenswrapper[4919]: I0109 13:51:15.138304 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7lkn\" (UniqueName: \"kubernetes.io/projected/4702a56c-301a-472f-b539-aa0873b1bdd1-kube-api-access-x7lkn\") pod \"4702a56c-301a-472f-b539-aa0873b1bdd1\" (UID: \"4702a56c-301a-472f-b539-aa0873b1bdd1\") " Jan 09 13:51:15 crc kubenswrapper[4919]: I0109 13:51:15.138676 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4702a56c-301a-472f-b539-aa0873b1bdd1-ovsdbserver-nb\") pod \"4702a56c-301a-472f-b539-aa0873b1bdd1\" (UID: \"4702a56c-301a-472f-b539-aa0873b1bdd1\") " Jan 09 13:51:15 crc kubenswrapper[4919]: I0109 13:51:15.138717 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4702a56c-301a-472f-b539-aa0873b1bdd1-dns-swift-storage-0\") pod \"4702a56c-301a-472f-b539-aa0873b1bdd1\" (UID: \"4702a56c-301a-472f-b539-aa0873b1bdd1\") " Jan 09 13:51:15 crc kubenswrapper[4919]: I0109 13:51:15.138777 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4702a56c-301a-472f-b539-aa0873b1bdd1-dns-svc\") pod \"4702a56c-301a-472f-b539-aa0873b1bdd1\" (UID: \"4702a56c-301a-472f-b539-aa0873b1bdd1\") " Jan 09 13:51:15 crc kubenswrapper[4919]: I0109 13:51:15.138802 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4702a56c-301a-472f-b539-aa0873b1bdd1-config\") pod \"4702a56c-301a-472f-b539-aa0873b1bdd1\" (UID: \"4702a56c-301a-472f-b539-aa0873b1bdd1\") " Jan 09 13:51:15 crc kubenswrapper[4919]: I0109 13:51:15.138858 4919 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4702a56c-301a-472f-b539-aa0873b1bdd1-ovsdbserver-sb\") pod \"4702a56c-301a-472f-b539-aa0873b1bdd1\" (UID: \"4702a56c-301a-472f-b539-aa0873b1bdd1\") " Jan 09 13:51:15 crc kubenswrapper[4919]: I0109 13:51:15.144462 4919 scope.go:117] "RemoveContainer" containerID="d6f6cf438f1dda786986716c28824e60027356bc2cba81a40d36a89ab8545349" Jan 09 13:51:15 crc kubenswrapper[4919]: I0109 13:51:15.178570 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4702a56c-301a-472f-b539-aa0873b1bdd1-kube-api-access-x7lkn" (OuterVolumeSpecName: "kube-api-access-x7lkn") pod "4702a56c-301a-472f-b539-aa0873b1bdd1" (UID: "4702a56c-301a-472f-b539-aa0873b1bdd1"). InnerVolumeSpecName "kube-api-access-x7lkn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:51:15 crc kubenswrapper[4919]: I0109 13:51:15.232013 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 09 13:51:15 crc kubenswrapper[4919]: I0109 13:51:15.240857 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7lkn\" (UniqueName: \"kubernetes.io/projected/4702a56c-301a-472f-b539-aa0873b1bdd1-kube-api-access-x7lkn\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:15 crc kubenswrapper[4919]: I0109 13:51:15.269610 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4702a56c-301a-472f-b539-aa0873b1bdd1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4702a56c-301a-472f-b539-aa0873b1bdd1" (UID: "4702a56c-301a-472f-b539-aa0873b1bdd1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:51:15 crc kubenswrapper[4919]: I0109 13:51:15.289822 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4702a56c-301a-472f-b539-aa0873b1bdd1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4702a56c-301a-472f-b539-aa0873b1bdd1" (UID: "4702a56c-301a-472f-b539-aa0873b1bdd1"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:51:15 crc kubenswrapper[4919]: I0109 13:51:15.296674 4919 scope.go:117] "RemoveContainer" containerID="3b9ff51c6109c1a844bb3b2a663511d2a48cb682c0273c30dbbdd1400699b566" Jan 09 13:51:15 crc kubenswrapper[4919]: I0109 13:51:15.303738 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4702a56c-301a-472f-b539-aa0873b1bdd1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4702a56c-301a-472f-b539-aa0873b1bdd1" (UID: "4702a56c-301a-472f-b539-aa0873b1bdd1"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:51:15 crc kubenswrapper[4919]: E0109 13:51:15.304194 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b9ff51c6109c1a844bb3b2a663511d2a48cb682c0273c30dbbdd1400699b566\": container with ID starting with 3b9ff51c6109c1a844bb3b2a663511d2a48cb682c0273c30dbbdd1400699b566 not found: ID does not exist" containerID="3b9ff51c6109c1a844bb3b2a663511d2a48cb682c0273c30dbbdd1400699b566" Jan 09 13:51:15 crc kubenswrapper[4919]: I0109 13:51:15.304364 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b9ff51c6109c1a844bb3b2a663511d2a48cb682c0273c30dbbdd1400699b566"} err="failed to get container status \"3b9ff51c6109c1a844bb3b2a663511d2a48cb682c0273c30dbbdd1400699b566\": rpc error: code = NotFound desc = could not find container \"3b9ff51c6109c1a844bb3b2a663511d2a48cb682c0273c30dbbdd1400699b566\": container with ID starting with 3b9ff51c6109c1a844bb3b2a663511d2a48cb682c0273c30dbbdd1400699b566 not found: ID does not exist" Jan 09 13:51:15 crc kubenswrapper[4919]: I0109 13:51:15.304497 4919 scope.go:117] "RemoveContainer" containerID="d6f6cf438f1dda786986716c28824e60027356bc2cba81a40d36a89ab8545349" Jan 09 13:51:15 crc kubenswrapper[4919]: I0109 13:51:15.305942 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4702a56c-301a-472f-b539-aa0873b1bdd1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4702a56c-301a-472f-b539-aa0873b1bdd1" (UID: "4702a56c-301a-472f-b539-aa0873b1bdd1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:51:15 crc kubenswrapper[4919]: E0109 13:51:15.306469 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6f6cf438f1dda786986716c28824e60027356bc2cba81a40d36a89ab8545349\": container with ID starting with d6f6cf438f1dda786986716c28824e60027356bc2cba81a40d36a89ab8545349 not found: ID does not exist" containerID="d6f6cf438f1dda786986716c28824e60027356bc2cba81a40d36a89ab8545349" Jan 09 13:51:15 crc kubenswrapper[4919]: I0109 13:51:15.306559 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6f6cf438f1dda786986716c28824e60027356bc2cba81a40d36a89ab8545349"} err="failed to get container status \"d6f6cf438f1dda786986716c28824e60027356bc2cba81a40d36a89ab8545349\": rpc error: code = NotFound desc = could not find container \"d6f6cf438f1dda786986716c28824e60027356bc2cba81a40d36a89ab8545349\": container with ID starting with d6f6cf438f1dda786986716c28824e60027356bc2cba81a40d36a89ab8545349 not found: ID does not exist" Jan 09 13:51:15 crc kubenswrapper[4919]: I0109 13:51:15.332722 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4702a56c-301a-472f-b539-aa0873b1bdd1-config" (OuterVolumeSpecName: "config") pod "4702a56c-301a-472f-b539-aa0873b1bdd1" (UID: "4702a56c-301a-472f-b539-aa0873b1bdd1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:51:15 crc kubenswrapper[4919]: I0109 13:51:15.342391 4919 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4702a56c-301a-472f-b539-aa0873b1bdd1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:15 crc kubenswrapper[4919]: I0109 13:51:15.342620 4919 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4702a56c-301a-472f-b539-aa0873b1bdd1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:15 crc kubenswrapper[4919]: I0109 13:51:15.342700 4919 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4702a56c-301a-472f-b539-aa0873b1bdd1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:15 crc kubenswrapper[4919]: I0109 13:51:15.342761 4919 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4702a56c-301a-472f-b539-aa0873b1bdd1-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:15 crc kubenswrapper[4919]: I0109 13:51:15.342831 4919 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4702a56c-301a-472f-b539-aa0873b1bdd1-config\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:15 crc kubenswrapper[4919]: I0109 13:51:15.456061 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-685444497c-xp6jw"] Jan 09 13:51:15 crc kubenswrapper[4919]: I0109 13:51:15.473171 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-685444497c-xp6jw"] Jan 09 13:51:16 crc kubenswrapper[4919]: I0109 13:51:16.137239 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9" containerName="cinder-scheduler" containerID="cri-o://7e34acba07f14866466b2f7f61731407a12344eef285279b90d6bb474bb933c9" gracePeriod=30 Jan 09 13:51:16 crc kubenswrapper[4919]: I0109 13:51:16.137525 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9" containerName="probe" containerID="cri-o://d03d7fe3c39bc512b42091bf070261e0cafca490e15f8566be521e0745b9d93d" gracePeriod=30 Jan 09 13:51:16 crc kubenswrapper[4919]: I0109 13:51:16.470733 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-84c89c8f4-klmnp" Jan 09 13:51:16 crc kubenswrapper[4919]: I0109 13:51:16.762812 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4702a56c-301a-472f-b539-aa0873b1bdd1" path="/var/lib/kubelet/pods/4702a56c-301a-472f-b539-aa0873b1bdd1/volumes" Jan 09 13:51:16 crc kubenswrapper[4919]: I0109 13:51:16.812667 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-84c89c8f4-klmnp" Jan 09 13:51:17 crc kubenswrapper[4919]: I0109 13:51:17.167385 4919 generic.go:334] "Generic (PLEG): container finished" podID="0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9" containerID="d03d7fe3c39bc512b42091bf070261e0cafca490e15f8566be521e0745b9d93d" exitCode=0 Jan 09 13:51:17 crc kubenswrapper[4919]: I0109 13:51:17.167879 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9","Type":"ContainerDied","Data":"d03d7fe3c39bc512b42091bf070261e0cafca490e15f8566be521e0745b9d93d"} Jan 09 13:51:17 crc kubenswrapper[4919]: I0109 13:51:17.179237 4919 generic.go:334] "Generic (PLEG): container finished" podID="1c985555-77df-4e8b-a2b0-f1127eab2886" containerID="a00a2ec12e1bc3bc57fd45a25731877f0802cbccfda89e8813be30d4f8f3fa79" exitCode=0 Jan 09 13:51:17 crc kubenswrapper[4919]: I0109 13:51:17.179325 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6cb99dd7c6-gp5c6" event={"ID":"1c985555-77df-4e8b-a2b0-f1127eab2886","Type":"ContainerDied","Data":"a00a2ec12e1bc3bc57fd45a25731877f0802cbccfda89e8813be30d4f8f3fa79"} Jan 09 13:51:17 crc kubenswrapper[4919]: I0109 13:51:17.358849 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-6575bd5545-2lr88" Jan 09 13:51:17 crc kubenswrapper[4919]: I0109 13:51:17.578421 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6cb99dd7c6-gp5c6" Jan 09 13:51:17 crc kubenswrapper[4919]: I0109 13:51:17.696549 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1c985555-77df-4e8b-a2b0-f1127eab2886-httpd-config\") pod \"1c985555-77df-4e8b-a2b0-f1127eab2886\" (UID: \"1c985555-77df-4e8b-a2b0-f1127eab2886\") " Jan 09 13:51:17 crc kubenswrapper[4919]: I0109 13:51:17.696653 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9b8dl\" (UniqueName: \"kubernetes.io/projected/1c985555-77df-4e8b-a2b0-f1127eab2886-kube-api-access-9b8dl\") pod \"1c985555-77df-4e8b-a2b0-f1127eab2886\" (UID: \"1c985555-77df-4e8b-a2b0-f1127eab2886\") " Jan 09 13:51:17 crc kubenswrapper[4919]: I0109 13:51:17.696826 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c985555-77df-4e8b-a2b0-f1127eab2886-ovndb-tls-certs\") pod \"1c985555-77df-4e8b-a2b0-f1127eab2886\" (UID: \"1c985555-77df-4e8b-a2b0-f1127eab2886\") " Jan 09 13:51:17 crc kubenswrapper[4919]: I0109 13:51:17.696957 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1c985555-77df-4e8b-a2b0-f1127eab2886-config\") pod \"1c985555-77df-4e8b-a2b0-f1127eab2886\" (UID: \"1c985555-77df-4e8b-a2b0-f1127eab2886\") " Jan 09 13:51:17 crc kubenswrapper[4919]: I0109 13:51:17.696977 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c985555-77df-4e8b-a2b0-f1127eab2886-combined-ca-bundle\") pod \"1c985555-77df-4e8b-a2b0-f1127eab2886\" (UID: \"1c985555-77df-4e8b-a2b0-f1127eab2886\") " Jan 09 13:51:17 crc kubenswrapper[4919]: I0109 13:51:17.703644 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c985555-77df-4e8b-a2b0-f1127eab2886-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "1c985555-77df-4e8b-a2b0-f1127eab2886" (UID: "1c985555-77df-4e8b-a2b0-f1127eab2886"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:51:17 crc kubenswrapper[4919]: I0109 13:51:17.716541 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c985555-77df-4e8b-a2b0-f1127eab2886-kube-api-access-9b8dl" (OuterVolumeSpecName: "kube-api-access-9b8dl") pod "1c985555-77df-4e8b-a2b0-f1127eab2886" (UID: "1c985555-77df-4e8b-a2b0-f1127eab2886"). InnerVolumeSpecName "kube-api-access-9b8dl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:51:17 crc kubenswrapper[4919]: I0109 13:51:17.750371 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c985555-77df-4e8b-a2b0-f1127eab2886-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1c985555-77df-4e8b-a2b0-f1127eab2886" (UID: "1c985555-77df-4e8b-a2b0-f1127eab2886"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:51:17 crc kubenswrapper[4919]: I0109 13:51:17.765703 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c985555-77df-4e8b-a2b0-f1127eab2886-config" (OuterVolumeSpecName: "config") pod "1c985555-77df-4e8b-a2b0-f1127eab2886" (UID: "1c985555-77df-4e8b-a2b0-f1127eab2886"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:51:17 crc kubenswrapper[4919]: I0109 13:51:17.793964 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c985555-77df-4e8b-a2b0-f1127eab2886-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "1c985555-77df-4e8b-a2b0-f1127eab2886" (UID: "1c985555-77df-4e8b-a2b0-f1127eab2886"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:51:17 crc kubenswrapper[4919]: I0109 13:51:17.800310 4919 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c985555-77df-4e8b-a2b0-f1127eab2886-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:17 crc kubenswrapper[4919]: I0109 13:51:17.800343 4919 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/1c985555-77df-4e8b-a2b0-f1127eab2886-config\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:17 crc kubenswrapper[4919]: I0109 13:51:17.800353 4919 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c985555-77df-4e8b-a2b0-f1127eab2886-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:17 crc kubenswrapper[4919]: I0109 13:51:17.800362 4919 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1c985555-77df-4e8b-a2b0-f1127eab2886-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:17 crc kubenswrapper[4919]: I0109 13:51:17.800372 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9b8dl\" (UniqueName: \"kubernetes.io/projected/1c985555-77df-4e8b-a2b0-f1127eab2886-kube-api-access-9b8dl\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:18 crc kubenswrapper[4919]: I0109 13:51:18.193873 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6cb99dd7c6-gp5c6" event={"ID":"1c985555-77df-4e8b-a2b0-f1127eab2886","Type":"ContainerDied","Data":"7836be13f6ebfda641eed77e9a15d703bf33173e0c9491cb4aa9b2fb4393f629"} Jan 09 13:51:18 crc kubenswrapper[4919]: I0109 13:51:18.193946 4919 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/neutron-6cb99dd7c6-gp5c6" Jan 09 13:51:18 crc kubenswrapper[4919]: I0109 13:51:18.194203 4919 scope.go:117] "RemoveContainer" containerID="7bdba7b03e0f2aa797c7f5e07138ac2ca7bdf750e918fe216718bca831ab6b96" Jan 09 13:51:18 crc kubenswrapper[4919]: I0109 13:51:18.232781 4919 scope.go:117] "RemoveContainer" containerID="a00a2ec12e1bc3bc57fd45a25731877f0802cbccfda89e8813be30d4f8f3fa79" Jan 09 13:51:18 crc kubenswrapper[4919]: I0109 13:51:18.246927 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6cb99dd7c6-gp5c6"] Jan 09 13:51:18 crc kubenswrapper[4919]: I0109 13:51:18.257882 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6cb99dd7c6-gp5c6"] Jan 09 13:51:18 crc kubenswrapper[4919]: I0109 13:51:18.765040 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c985555-77df-4e8b-a2b0-f1127eab2886" path="/var/lib/kubelet/pods/1c985555-77df-4e8b-a2b0-f1127eab2886/volumes" Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.063697 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.173662 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9-combined-ca-bundle\") pod \"0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9\" (UID: \"0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9\") " Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.174030 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9-config-data-custom\") pod \"0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9\" (UID: \"0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9\") " Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.174097 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xmhd\" (UniqueName: \"kubernetes.io/projected/0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9-kube-api-access-9xmhd\") pod \"0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9\" (UID: \"0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9\") " Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.174294 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9-config-data\") pod \"0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9\" (UID: \"0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9\") " Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.174319 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9-scripts\") pod \"0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9\" (UID: \"0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9\") " Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.174358 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9-etc-machine-id\") pod \"0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9\" (UID: \"0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9\") " Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.174795 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9-etc-machine-id" 
(OuterVolumeSpecName: "etc-machine-id") pod "0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9" (UID: "0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.181840 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9-kube-api-access-9xmhd" (OuterVolumeSpecName: "kube-api-access-9xmhd") pod "0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9" (UID: "0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9"). InnerVolumeSpecName "kube-api-access-9xmhd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.184010 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9-scripts" (OuterVolumeSpecName: "scripts") pod "0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9" (UID: "0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.198449 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9" (UID: "0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.258537 4919 generic.go:334] "Generic (PLEG): container finished" podID="0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9" containerID="7e34acba07f14866466b2f7f61731407a12344eef285279b90d6bb474bb933c9" exitCode=0 Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.258857 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9","Type":"ContainerDied","Data":"7e34acba07f14866466b2f7f61731407a12344eef285279b90d6bb474bb933c9"} Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.258976 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9","Type":"ContainerDied","Data":"bc1eb1a7370d78a8f8852a6abe089c6b97fe388f09dfd480217b6e2a8e6a76d0"} Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.259021 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.259079 4919 scope.go:117] "RemoveContainer" containerID="d03d7fe3c39bc512b42091bf070261e0cafca490e15f8566be521e0745b9d93d" Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.267407 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9" (UID: "0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.276105 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xmhd\" (UniqueName: \"kubernetes.io/projected/0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9-kube-api-access-9xmhd\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.276389 4919 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.276475 4919 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.276541 4919 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.276623 4919 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.300302 4919 scope.go:117] "RemoveContainer" containerID="7e34acba07f14866466b2f7f61731407a12344eef285279b90d6bb474bb933c9" Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.335694 4919 scope.go:117] "RemoveContainer" containerID="d03d7fe3c39bc512b42091bf070261e0cafca490e15f8566be521e0745b9d93d" Jan 09 13:51:21 crc kubenswrapper[4919]: E0109 13:51:21.336117 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d03d7fe3c39bc512b42091bf070261e0cafca490e15f8566be521e0745b9d93d\": container with ID starting with d03d7fe3c39bc512b42091bf070261e0cafca490e15f8566be521e0745b9d93d not found: ID does not exist" containerID="d03d7fe3c39bc512b42091bf070261e0cafca490e15f8566be521e0745b9d93d" Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.336172 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d03d7fe3c39bc512b42091bf070261e0cafca490e15f8566be521e0745b9d93d"} err="failed to get container status \"d03d7fe3c39bc512b42091bf070261e0cafca490e15f8566be521e0745b9d93d\": rpc error: code = NotFound desc = could not find container \"d03d7fe3c39bc512b42091bf070261e0cafca490e15f8566be521e0745b9d93d\": container with ID starting with d03d7fe3c39bc512b42091bf070261e0cafca490e15f8566be521e0745b9d93d not found: ID does not exist" Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.336202 4919 scope.go:117] "RemoveContainer" containerID="7e34acba07f14866466b2f7f61731407a12344eef285279b90d6bb474bb933c9" Jan 09 13:51:21 crc kubenswrapper[4919]: E0109 13:51:21.336523 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e34acba07f14866466b2f7f61731407a12344eef285279b90d6bb474bb933c9\": container with ID starting with 7e34acba07f14866466b2f7f61731407a12344eef285279b90d6bb474bb933c9 not found: ID does not exist" containerID="7e34acba07f14866466b2f7f61731407a12344eef285279b90d6bb474bb933c9" Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.336546 4919 
Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.364642 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9-config-data" (OuterVolumeSpecName: "config-data") pod "0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9" (UID: "0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.379571 4919 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9-config-data\") on node \"crc\" DevicePath \"\""
Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.594994 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.607222 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.628923 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 09 13:51:21 crc kubenswrapper[4919]: E0109 13:51:21.629581 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c985555-77df-4e8b-a2b0-f1127eab2886" containerName="neutron-api"
Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.629663 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c985555-77df-4e8b-a2b0-f1127eab2886" containerName="neutron-api"
Jan 09 13:51:21 crc kubenswrapper[4919]: E0109 13:51:21.629740 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c985555-77df-4e8b-a2b0-f1127eab2886" containerName="neutron-httpd"
Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.629801 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c985555-77df-4e8b-a2b0-f1127eab2886" containerName="neutron-httpd"
Jan 09 13:51:21 crc kubenswrapper[4919]: E0109 13:51:21.629891 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9" containerName="probe"
Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.629956 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9" containerName="probe"
Jan 09 13:51:21 crc kubenswrapper[4919]: E0109 13:51:21.630018 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9" containerName="cinder-scheduler"
Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.630073 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9" containerName="cinder-scheduler"
Jan 09 13:51:21 crc kubenswrapper[4919]: E0109 13:51:21.630135 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4702a56c-301a-472f-b539-aa0873b1bdd1" containerName="init"
Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.630189 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="4702a56c-301a-472f-b539-aa0873b1bdd1" containerName="init"
Jan 09 13:51:21 crc kubenswrapper[4919]: E0109 13:51:21.630288 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4702a56c-301a-472f-b539-aa0873b1bdd1" containerName="dnsmasq-dns"
Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.630353 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="4702a56c-301a-472f-b539-aa0873b1bdd1" containerName="dnsmasq-dns"
Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.630677 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9" containerName="probe"
Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.630751 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c985555-77df-4e8b-a2b0-f1127eab2886" containerName="neutron-api"
Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.630808 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="4702a56c-301a-472f-b539-aa0873b1bdd1" containerName="dnsmasq-dns"
Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.630875 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9" containerName="cinder-scheduler"
Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.630940 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c985555-77df-4e8b-a2b0-f1127eab2886" containerName="neutron-httpd"
Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.632777 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.634586 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.648974 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.725975 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.727447 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.729771 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.729987 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-bkgcb"
Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.732501 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.739152 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.786717 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9637b6f9-f7a2-4056-b9ae-87b4af7e475e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9637b6f9-f7a2-4056-b9ae-87b4af7e475e\") " pod="openstack/cinder-scheduler-0"
Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.786853 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmwj8\" (UniqueName: \"kubernetes.io/projected/9637b6f9-f7a2-4056-b9ae-87b4af7e475e-kube-api-access-lmwj8\") pod \"cinder-scheduler-0\" (UID: \"9637b6f9-f7a2-4056-b9ae-87b4af7e475e\") " pod="openstack/cinder-scheduler-0"
Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.786904 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9637b6f9-f7a2-4056-b9ae-87b4af7e475e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9637b6f9-f7a2-4056-b9ae-87b4af7e475e\") " pod="openstack/cinder-scheduler-0"
Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.786995 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9637b6f9-f7a2-4056-b9ae-87b4af7e475e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9637b6f9-f7a2-4056-b9ae-87b4af7e475e\") " pod="openstack/cinder-scheduler-0"
Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.787014 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9637b6f9-f7a2-4056-b9ae-87b4af7e475e-scripts\") pod \"cinder-scheduler-0\" (UID: \"9637b6f9-f7a2-4056-b9ae-87b4af7e475e\") " pod="openstack/cinder-scheduler-0"
Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.787077 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9637b6f9-f7a2-4056-b9ae-87b4af7e475e-config-data\") pod \"cinder-scheduler-0\" (UID: \"9637b6f9-f7a2-4056-b9ae-87b4af7e475e\") " pod="openstack/cinder-scheduler-0"
Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.888821 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9637b6f9-f7a2-4056-b9ae-87b4af7e475e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9637b6f9-f7a2-4056-b9ae-87b4af7e475e\") " pod="openstack/cinder-scheduler-0"
Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.888889 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmwj8\" (UniqueName: \"kubernetes.io/projected/9637b6f9-f7a2-4056-b9ae-87b4af7e475e-kube-api-access-lmwj8\") pod \"cinder-scheduler-0\" (UID: \"9637b6f9-f7a2-4056-b9ae-87b4af7e475e\") " pod="openstack/cinder-scheduler-0"
Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.888915 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/284d399b-7c07-4e99-9a95-32d600fab162-combined-ca-bundle\") pod \"openstackclient\" (UID: \"284d399b-7c07-4e99-9a95-32d600fab162\") " pod="openstack/openstackclient"
Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.888941 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9637b6f9-f7a2-4056-b9ae-87b4af7e475e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9637b6f9-f7a2-4056-b9ae-87b4af7e475e\") " pod="openstack/cinder-scheduler-0"
Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.888961 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/284d399b-7c07-4e99-9a95-32d600fab162-openstack-config\") pod \"openstackclient\" (UID: \"284d399b-7c07-4e99-9a95-32d600fab162\") " pod="openstack/openstackclient"
Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.888989 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/284d399b-7c07-4e99-9a95-32d600fab162-openstack-config-secret\") pod \"openstackclient\" (UID: \"284d399b-7c07-4e99-9a95-32d600fab162\") " pod="openstack/openstackclient"
Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.889016 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6qjc\" (UniqueName: \"kubernetes.io/projected/284d399b-7c07-4e99-9a95-32d600fab162-kube-api-access-p6qjc\") pod \"openstackclient\" (UID: \"284d399b-7c07-4e99-9a95-32d600fab162\") " pod="openstack/openstackclient"
Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.889093 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9637b6f9-f7a2-4056-b9ae-87b4af7e475e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9637b6f9-f7a2-4056-b9ae-87b4af7e475e\") " pod="openstack/cinder-scheduler-0"
Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.889119 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9637b6f9-f7a2-4056-b9ae-87b4af7e475e-scripts\") pod \"cinder-scheduler-0\" (UID: \"9637b6f9-f7a2-4056-b9ae-87b4af7e475e\") " pod="openstack/cinder-scheduler-0"
Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.889172 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9637b6f9-f7a2-4056-b9ae-87b4af7e475e-config-data\") pod \"cinder-scheduler-0\" (UID: \"9637b6f9-f7a2-4056-b9ae-87b4af7e475e\") " pod="openstack/cinder-scheduler-0"
Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.889864 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9637b6f9-f7a2-4056-b9ae-87b4af7e475e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9637b6f9-f7a2-4056-b9ae-87b4af7e475e\") " pod="openstack/cinder-scheduler-0"
\"cinder-scheduler-0\" (UID: \"9637b6f9-f7a2-4056-b9ae-87b4af7e475e\") " pod="openstack/cinder-scheduler-0" Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.893823 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9637b6f9-f7a2-4056-b9ae-87b4af7e475e-scripts\") pod \"cinder-scheduler-0\" (UID: \"9637b6f9-f7a2-4056-b9ae-87b4af7e475e\") " pod="openstack/cinder-scheduler-0" Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.894281 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9637b6f9-f7a2-4056-b9ae-87b4af7e475e-config-data\") pod \"cinder-scheduler-0\" (UID: \"9637b6f9-f7a2-4056-b9ae-87b4af7e475e\") " pod="openstack/cinder-scheduler-0" Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.898002 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9637b6f9-f7a2-4056-b9ae-87b4af7e475e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9637b6f9-f7a2-4056-b9ae-87b4af7e475e\") " pod="openstack/cinder-scheduler-0" Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.898123 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9637b6f9-f7a2-4056-b9ae-87b4af7e475e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9637b6f9-f7a2-4056-b9ae-87b4af7e475e\") " pod="openstack/cinder-scheduler-0" Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.914944 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmwj8\" (UniqueName: \"kubernetes.io/projected/9637b6f9-f7a2-4056-b9ae-87b4af7e475e-kube-api-access-lmwj8\") pod \"cinder-scheduler-0\" (UID: \"9637b6f9-f7a2-4056-b9ae-87b4af7e475e\") " pod="openstack/cinder-scheduler-0" Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.949846 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.951549 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.993587 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/284d399b-7c07-4e99-9a95-32d600fab162-combined-ca-bundle\") pod \"openstackclient\" (UID: \"284d399b-7c07-4e99-9a95-32d600fab162\") " pod="openstack/openstackclient" Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.993868 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/284d399b-7c07-4e99-9a95-32d600fab162-openstack-config\") pod \"openstackclient\" (UID: \"284d399b-7c07-4e99-9a95-32d600fab162\") " pod="openstack/openstackclient" Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.993896 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/284d399b-7c07-4e99-9a95-32d600fab162-openstack-config-secret\") pod \"openstackclient\" (UID: \"284d399b-7c07-4e99-9a95-32d600fab162\") " pod="openstack/openstackclient" Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.993916 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6qjc\" (UniqueName: \"kubernetes.io/projected/284d399b-7c07-4e99-9a95-32d600fab162-kube-api-access-p6qjc\") pod \"openstackclient\" (UID: \"284d399b-7c07-4e99-9a95-32d600fab162\") " pod="openstack/openstackclient" Jan 09 13:51:21 crc kubenswrapper[4919]: I0109 13:51:21.997057 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/284d399b-7c07-4e99-9a95-32d600fab162-openstack-config\") pod \"openstackclient\" (UID: \"284d399b-7c07-4e99-9a95-32d600fab162\") " pod="openstack/openstackclient" Jan 09 13:51:22 crc kubenswrapper[4919]: I0109 13:51:22.017817 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/284d399b-7c07-4e99-9a95-32d600fab162-openstack-config-secret\") pod \"openstackclient\" (UID: \"284d399b-7c07-4e99-9a95-32d600fab162\") " pod="openstack/openstackclient" Jan 09 13:51:22 crc kubenswrapper[4919]: I0109 13:51:22.020845 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/284d399b-7c07-4e99-9a95-32d600fab162-combined-ca-bundle\") pod \"openstackclient\" (UID: \"284d399b-7c07-4e99-9a95-32d600fab162\") " pod="openstack/openstackclient" Jan 09 13:51:22 crc kubenswrapper[4919]: I0109 13:51:22.038478 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6qjc\" (UniqueName: \"kubernetes.io/projected/284d399b-7c07-4e99-9a95-32d600fab162-kube-api-access-p6qjc\") pod \"openstackclient\" (UID: \"284d399b-7c07-4e99-9a95-32d600fab162\") " pod="openstack/openstackclient" Jan 09 13:51:22 crc kubenswrapper[4919]: I0109 13:51:22.042034 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 09 13:51:22 crc kubenswrapper[4919]: I0109 13:51:22.584781 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 09 13:51:22 crc kubenswrapper[4919]: W0109 13:51:22.588516 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9637b6f9_f7a2_4056_b9ae_87b4af7e475e.slice/crio-c8a933ae74c1669f7ce2bc8dfd2974c4f6bb0100c5a159e44634ca1632da9f1b WatchSource:0}: Error finding container c8a933ae74c1669f7ce2bc8dfd2974c4f6bb0100c5a159e44634ca1632da9f1b: Status 404 returned error can't find the container with id c8a933ae74c1669f7ce2bc8dfd2974c4f6bb0100c5a159e44634ca1632da9f1b Jan 09 13:51:22 crc kubenswrapper[4919]: I0109 13:51:22.716143 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 09 13:51:22 crc kubenswrapper[4919]: I0109 13:51:22.720289 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-56f5497b64-ws7gk" Jan 09 13:51:22 crc kubenswrapper[4919]: W0109 13:51:22.740359 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod284d399b_7c07_4e99_9a95_32d600fab162.slice/crio-bf7f94b1ecc622ff4356de95a6cd055179a8f742778d9712675309b371c0e7b7 WatchSource:0}: Error finding container bf7f94b1ecc622ff4356de95a6cd055179a8f742778d9712675309b371c0e7b7: Status 404 returned error can't find the container with id bf7f94b1ecc622ff4356de95a6cd055179a8f742778d9712675309b371c0e7b7 Jan 09 13:51:22 crc kubenswrapper[4919]: I0109 13:51:22.780909 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9" path="/var/lib/kubelet/pods/0ae9b2b4-5dee-45e6-8eb4-2160ea8812b9/volumes" Jan 09 13:51:22 crc kubenswrapper[4919]: I0109 13:51:22.905806 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-56f5497b64-ws7gk" Jan 09 13:51:23 crc kubenswrapper[4919]: I0109 13:51:23.001019 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-84c89c8f4-klmnp"] Jan 09 13:51:23 crc kubenswrapper[4919]: I0109 13:51:23.001309 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-84c89c8f4-klmnp" podUID="865625ee-ff29-4253-9398-c497da20c784" containerName="barbican-api-log" containerID="cri-o://68a6f027203bcb9adb1da5b237987a47d650ae659caaff0409dfb6b315ed2c70" gracePeriod=30 Jan 09 13:51:23 crc kubenswrapper[4919]: I0109 13:51:23.001763 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-84c89c8f4-klmnp" podUID="865625ee-ff29-4253-9398-c497da20c784" containerName="barbican-api" containerID="cri-o://1952208d789073252e6b5946b90b309d41be21ccb4e89c5b17482e2f673f6c86" gracePeriod=30 Jan 09 13:51:23 crc kubenswrapper[4919]: I0109 13:51:23.344602 4919 generic.go:334] "Generic (PLEG): container finished" podID="865625ee-ff29-4253-9398-c497da20c784" containerID="68a6f027203bcb9adb1da5b237987a47d650ae659caaff0409dfb6b315ed2c70" exitCode=143 Jan 09 13:51:23 crc kubenswrapper[4919]: I0109 13:51:23.344875 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-84c89c8f4-klmnp" event={"ID":"865625ee-ff29-4253-9398-c497da20c784","Type":"ContainerDied","Data":"68a6f027203bcb9adb1da5b237987a47d650ae659caaff0409dfb6b315ed2c70"} Jan 09 13:51:23 crc 
kubenswrapper[4919]: I0109 13:51:23.347857 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"284d399b-7c07-4e99-9a95-32d600fab162","Type":"ContainerStarted","Data":"bf7f94b1ecc622ff4356de95a6cd055179a8f742778d9712675309b371c0e7b7"} Jan 09 13:51:23 crc kubenswrapper[4919]: I0109 13:51:23.349315 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9637b6f9-f7a2-4056-b9ae-87b4af7e475e","Type":"ContainerStarted","Data":"c8a933ae74c1669f7ce2bc8dfd2974c4f6bb0100c5a159e44634ca1632da9f1b"} Jan 09 13:51:24 crc kubenswrapper[4919]: I0109 13:51:24.104226 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-5f95dfdc65-kz6rq"] Jan 09 13:51:24 crc kubenswrapper[4919]: I0109 13:51:24.107376 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5f95dfdc65-kz6rq" Jan 09 13:51:24 crc kubenswrapper[4919]: I0109 13:51:24.110254 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 09 13:51:24 crc kubenswrapper[4919]: I0109 13:51:24.111545 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 09 13:51:24 crc kubenswrapper[4919]: I0109 13:51:24.111788 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 09 13:51:24 crc kubenswrapper[4919]: I0109 13:51:24.121494 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5f95dfdc65-kz6rq"] Jan 09 13:51:24 crc kubenswrapper[4919]: I0109 13:51:24.283593 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e09e5f52-5a74-4a7c-bd84-079835a21fec-etc-swift\") pod \"swift-proxy-5f95dfdc65-kz6rq\" (UID: \"e09e5f52-5a74-4a7c-bd84-079835a21fec\") " pod="openstack/swift-proxy-5f95dfdc65-kz6rq" Jan 09 13:51:24 crc kubenswrapper[4919]: I0109 13:51:24.283633 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e09e5f52-5a74-4a7c-bd84-079835a21fec-run-httpd\") pod \"swift-proxy-5f95dfdc65-kz6rq\" (UID: \"e09e5f52-5a74-4a7c-bd84-079835a21fec\") " pod="openstack/swift-proxy-5f95dfdc65-kz6rq" Jan 09 13:51:24 crc kubenswrapper[4919]: I0109 13:51:24.283699 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e09e5f52-5a74-4a7c-bd84-079835a21fec-log-httpd\") pod \"swift-proxy-5f95dfdc65-kz6rq\" (UID: \"e09e5f52-5a74-4a7c-bd84-079835a21fec\") " pod="openstack/swift-proxy-5f95dfdc65-kz6rq" Jan 09 13:51:24 crc kubenswrapper[4919]: I0109 13:51:24.283734 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e09e5f52-5a74-4a7c-bd84-079835a21fec-config-data\") pod \"swift-proxy-5f95dfdc65-kz6rq\" (UID: \"e09e5f52-5a74-4a7c-bd84-079835a21fec\") " pod="openstack/swift-proxy-5f95dfdc65-kz6rq" Jan 09 13:51:24 crc kubenswrapper[4919]: I0109 13:51:24.283782 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e09e5f52-5a74-4a7c-bd84-079835a21fec-public-tls-certs\") pod \"swift-proxy-5f95dfdc65-kz6rq\" (UID: \"e09e5f52-5a74-4a7c-bd84-079835a21fec\") " 
pod="openstack/swift-proxy-5f95dfdc65-kz6rq" Jan 09 13:51:24 crc kubenswrapper[4919]: I0109 13:51:24.283823 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e09e5f52-5a74-4a7c-bd84-079835a21fec-internal-tls-certs\") pod \"swift-proxy-5f95dfdc65-kz6rq\" (UID: \"e09e5f52-5a74-4a7c-bd84-079835a21fec\") " pod="openstack/swift-proxy-5f95dfdc65-kz6rq" Jan 09 13:51:24 crc kubenswrapper[4919]: I0109 13:51:24.283839 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e09e5f52-5a74-4a7c-bd84-079835a21fec-combined-ca-bundle\") pod \"swift-proxy-5f95dfdc65-kz6rq\" (UID: \"e09e5f52-5a74-4a7c-bd84-079835a21fec\") " pod="openstack/swift-proxy-5f95dfdc65-kz6rq" Jan 09 13:51:24 crc kubenswrapper[4919]: I0109 13:51:24.283871 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjwlx\" (UniqueName: \"kubernetes.io/projected/e09e5f52-5a74-4a7c-bd84-079835a21fec-kube-api-access-vjwlx\") pod \"swift-proxy-5f95dfdc65-kz6rq\" (UID: \"e09e5f52-5a74-4a7c-bd84-079835a21fec\") " pod="openstack/swift-proxy-5f95dfdc65-kz6rq" Jan 09 13:51:24 crc kubenswrapper[4919]: I0109 13:51:24.337735 4919 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7bdd978ccd-tx6fx" podUID="158e1b10-ad5e-4a44-a3be-630a2d45bfdc" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused" Jan 09 13:51:24 crc kubenswrapper[4919]: I0109 13:51:24.368119 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9637b6f9-f7a2-4056-b9ae-87b4af7e475e","Type":"ContainerStarted","Data":"0d340999c91631056dac89e663dffb04ed348024dd0ad7912cf0f763796f723d"} Jan 09 13:51:24 crc kubenswrapper[4919]: I0109 13:51:24.385489 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e09e5f52-5a74-4a7c-bd84-079835a21fec-etc-swift\") pod \"swift-proxy-5f95dfdc65-kz6rq\" (UID: \"e09e5f52-5a74-4a7c-bd84-079835a21fec\") " pod="openstack/swift-proxy-5f95dfdc65-kz6rq" Jan 09 13:51:24 crc kubenswrapper[4919]: I0109 13:51:24.385547 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e09e5f52-5a74-4a7c-bd84-079835a21fec-run-httpd\") pod \"swift-proxy-5f95dfdc65-kz6rq\" (UID: \"e09e5f52-5a74-4a7c-bd84-079835a21fec\") " pod="openstack/swift-proxy-5f95dfdc65-kz6rq" Jan 09 13:51:24 crc kubenswrapper[4919]: I0109 13:51:24.385611 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e09e5f52-5a74-4a7c-bd84-079835a21fec-log-httpd\") pod \"swift-proxy-5f95dfdc65-kz6rq\" (UID: \"e09e5f52-5a74-4a7c-bd84-079835a21fec\") " pod="openstack/swift-proxy-5f95dfdc65-kz6rq" Jan 09 13:51:24 crc kubenswrapper[4919]: I0109 13:51:24.385644 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e09e5f52-5a74-4a7c-bd84-079835a21fec-config-data\") pod \"swift-proxy-5f95dfdc65-kz6rq\" (UID: \"e09e5f52-5a74-4a7c-bd84-079835a21fec\") " pod="openstack/swift-proxy-5f95dfdc65-kz6rq" Jan 09 13:51:24 crc kubenswrapper[4919]: I0109 
13:51:24.385690 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e09e5f52-5a74-4a7c-bd84-079835a21fec-public-tls-certs\") pod \"swift-proxy-5f95dfdc65-kz6rq\" (UID: \"e09e5f52-5a74-4a7c-bd84-079835a21fec\") " pod="openstack/swift-proxy-5f95dfdc65-kz6rq" Jan 09 13:51:24 crc kubenswrapper[4919]: I0109 13:51:24.385730 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e09e5f52-5a74-4a7c-bd84-079835a21fec-internal-tls-certs\") pod \"swift-proxy-5f95dfdc65-kz6rq\" (UID: \"e09e5f52-5a74-4a7c-bd84-079835a21fec\") " pod="openstack/swift-proxy-5f95dfdc65-kz6rq" Jan 09 13:51:24 crc kubenswrapper[4919]: I0109 13:51:24.385752 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e09e5f52-5a74-4a7c-bd84-079835a21fec-combined-ca-bundle\") pod \"swift-proxy-5f95dfdc65-kz6rq\" (UID: \"e09e5f52-5a74-4a7c-bd84-079835a21fec\") " pod="openstack/swift-proxy-5f95dfdc65-kz6rq" Jan 09 13:51:24 crc kubenswrapper[4919]: I0109 13:51:24.385782 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjwlx\" (UniqueName: \"kubernetes.io/projected/e09e5f52-5a74-4a7c-bd84-079835a21fec-kube-api-access-vjwlx\") pod \"swift-proxy-5f95dfdc65-kz6rq\" (UID: \"e09e5f52-5a74-4a7c-bd84-079835a21fec\") " pod="openstack/swift-proxy-5f95dfdc65-kz6rq" Jan 09 13:51:24 crc kubenswrapper[4919]: I0109 13:51:24.386322 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e09e5f52-5a74-4a7c-bd84-079835a21fec-run-httpd\") pod \"swift-proxy-5f95dfdc65-kz6rq\" (UID: \"e09e5f52-5a74-4a7c-bd84-079835a21fec\") " pod="openstack/swift-proxy-5f95dfdc65-kz6rq" Jan 09 13:51:24 crc kubenswrapper[4919]: I0109 13:51:24.386850 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e09e5f52-5a74-4a7c-bd84-079835a21fec-log-httpd\") pod \"swift-proxy-5f95dfdc65-kz6rq\" (UID: \"e09e5f52-5a74-4a7c-bd84-079835a21fec\") " pod="openstack/swift-proxy-5f95dfdc65-kz6rq" Jan 09 13:51:24 crc kubenswrapper[4919]: I0109 13:51:24.405246 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e09e5f52-5a74-4a7c-bd84-079835a21fec-combined-ca-bundle\") pod \"swift-proxy-5f95dfdc65-kz6rq\" (UID: \"e09e5f52-5a74-4a7c-bd84-079835a21fec\") " pod="openstack/swift-proxy-5f95dfdc65-kz6rq" Jan 09 13:51:24 crc kubenswrapper[4919]: I0109 13:51:24.415653 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e09e5f52-5a74-4a7c-bd84-079835a21fec-public-tls-certs\") pod \"swift-proxy-5f95dfdc65-kz6rq\" (UID: \"e09e5f52-5a74-4a7c-bd84-079835a21fec\") " pod="openstack/swift-proxy-5f95dfdc65-kz6rq" Jan 09 13:51:24 crc kubenswrapper[4919]: I0109 13:51:24.415906 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e09e5f52-5a74-4a7c-bd84-079835a21fec-etc-swift\") pod \"swift-proxy-5f95dfdc65-kz6rq\" (UID: \"e09e5f52-5a74-4a7c-bd84-079835a21fec\") " pod="openstack/swift-proxy-5f95dfdc65-kz6rq" Jan 09 13:51:24 crc kubenswrapper[4919]: I0109 13:51:24.419200 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e09e5f52-5a74-4a7c-bd84-079835a21fec-internal-tls-certs\") pod \"swift-proxy-5f95dfdc65-kz6rq\" (UID: \"e09e5f52-5a74-4a7c-bd84-079835a21fec\") " pod="openstack/swift-proxy-5f95dfdc65-kz6rq" Jan 09 13:51:24 crc kubenswrapper[4919]: I0109 13:51:24.426565 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e09e5f52-5a74-4a7c-bd84-079835a21fec-config-data\") pod \"swift-proxy-5f95dfdc65-kz6rq\" (UID: \"e09e5f52-5a74-4a7c-bd84-079835a21fec\") " pod="openstack/swift-proxy-5f95dfdc65-kz6rq" Jan 09 13:51:24 crc kubenswrapper[4919]: I0109 13:51:24.436658 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjwlx\" (UniqueName: \"kubernetes.io/projected/e09e5f52-5a74-4a7c-bd84-079835a21fec-kube-api-access-vjwlx\") pod \"swift-proxy-5f95dfdc65-kz6rq\" (UID: \"e09e5f52-5a74-4a7c-bd84-079835a21fec\") " pod="openstack/swift-proxy-5f95dfdc65-kz6rq" Jan 09 13:51:24 crc kubenswrapper[4919]: I0109 13:51:24.464148 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5f95dfdc65-kz6rq" Jan 09 13:51:24 crc kubenswrapper[4919]: I0109 13:51:24.550342 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-74bbf9c4b-kjq9x" Jan 09 13:51:24 crc kubenswrapper[4919]: I0109 13:51:24.553454 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-74bbf9c4b-kjq9x" Jan 09 13:51:25 crc kubenswrapper[4919]: I0109 13:51:25.099437 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5f95dfdc65-kz6rq"] Jan 09 13:51:25 crc kubenswrapper[4919]: I0109 13:51:25.401691 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9637b6f9-f7a2-4056-b9ae-87b4af7e475e","Type":"ContainerStarted","Data":"a9ef98f97813b2ab18b124dc71d858d2783965d70b5d49ab51cd92677b352984"} Jan 09 13:51:25 crc kubenswrapper[4919]: I0109 13:51:25.411163 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5f95dfdc65-kz6rq" event={"ID":"e09e5f52-5a74-4a7c-bd84-079835a21fec","Type":"ContainerStarted","Data":"7a0ad4a96cd7782696f5ed6853cb9d65885a81c94b4cf353596d62ac7086f13e"} Jan 09 13:51:25 crc kubenswrapper[4919]: I0109 13:51:25.437004 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.436978322 podStartE2EDuration="4.436978322s" podCreationTimestamp="2026-01-09 13:51:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:51:25.428714806 +0000 UTC m=+1264.976554256" watchObservedRunningTime="2026-01-09 13:51:25.436978322 +0000 UTC m=+1264.984817772" Jan 09 13:51:26 crc kubenswrapper[4919]: I0109 13:51:26.242642 4919 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-84c89c8f4-klmnp" podUID="865625ee-ff29-4253-9398-c497da20c784" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.167:9311/healthcheck\": read tcp 10.217.0.2:42996->10.217.0.167:9311: read: connection reset by peer" Jan 09 13:51:26 crc kubenswrapper[4919]: I0109 13:51:26.242682 4919 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-84c89c8f4-klmnp" podUID="865625ee-ff29-4253-9398-c497da20c784" containerName="barbican-api-log" 
probeResult="failure" output="Get \"http://10.217.0.167:9311/healthcheck\": read tcp 10.217.0.2:43012->10.217.0.167:9311: read: connection reset by peer" Jan 09 13:51:26 crc kubenswrapper[4919]: I0109 13:51:26.433101 4919 generic.go:334] "Generic (PLEG): container finished" podID="865625ee-ff29-4253-9398-c497da20c784" containerID="1952208d789073252e6b5946b90b309d41be21ccb4e89c5b17482e2f673f6c86" exitCode=0 Jan 09 13:51:26 crc kubenswrapper[4919]: I0109 13:51:26.433194 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-84c89c8f4-klmnp" event={"ID":"865625ee-ff29-4253-9398-c497da20c784","Type":"ContainerDied","Data":"1952208d789073252e6b5946b90b309d41be21ccb4e89c5b17482e2f673f6c86"} Jan 09 13:51:26 crc kubenswrapper[4919]: I0109 13:51:26.438503 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5f95dfdc65-kz6rq" event={"ID":"e09e5f52-5a74-4a7c-bd84-079835a21fec","Type":"ContainerStarted","Data":"d94c4bbe139dfe372e8ac8c3e7ea9fafaf4d1f399cf9fd220421c91a2638698e"} Jan 09 13:51:26 crc kubenswrapper[4919]: I0109 13:51:26.438579 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5f95dfdc65-kz6rq" event={"ID":"e09e5f52-5a74-4a7c-bd84-079835a21fec","Type":"ContainerStarted","Data":"afe05e5fdfd48732ee55574adf72734cb05329e31d324278881467c8c7fdd8c6"} Jan 09 13:51:26 crc kubenswrapper[4919]: I0109 13:51:26.480826 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-5f95dfdc65-kz6rq" podStartSLOduration=2.480785456 podStartE2EDuration="2.480785456s" podCreationTimestamp="2026-01-09 13:51:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:51:26.469354351 +0000 UTC m=+1266.017193801" watchObservedRunningTime="2026-01-09 13:51:26.480785456 +0000 UTC m=+1266.028624906" Jan 09 13:51:26 crc kubenswrapper[4919]: I0109 13:51:26.862422 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-84c89c8f4-klmnp" Jan 09 13:51:26 crc kubenswrapper[4919]: I0109 13:51:26.951701 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 09 13:51:26 crc kubenswrapper[4919]: I0109 13:51:26.958639 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/865625ee-ff29-4253-9398-c497da20c784-logs\") pod \"865625ee-ff29-4253-9398-c497da20c784\" (UID: \"865625ee-ff29-4253-9398-c497da20c784\") " Jan 09 13:51:26 crc kubenswrapper[4919]: I0109 13:51:26.958743 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vh5wr\" (UniqueName: \"kubernetes.io/projected/865625ee-ff29-4253-9398-c497da20c784-kube-api-access-vh5wr\") pod \"865625ee-ff29-4253-9398-c497da20c784\" (UID: \"865625ee-ff29-4253-9398-c497da20c784\") " Jan 09 13:51:26 crc kubenswrapper[4919]: I0109 13:51:26.958778 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/865625ee-ff29-4253-9398-c497da20c784-config-data\") pod \"865625ee-ff29-4253-9398-c497da20c784\" (UID: \"865625ee-ff29-4253-9398-c497da20c784\") " Jan 09 13:51:26 crc kubenswrapper[4919]: I0109 13:51:26.958809 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/865625ee-ff29-4253-9398-c497da20c784-config-data-custom\") pod \"865625ee-ff29-4253-9398-c497da20c784\" (UID: \"865625ee-ff29-4253-9398-c497da20c784\") " Jan 09 13:51:26 crc kubenswrapper[4919]: I0109 13:51:26.958980 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/865625ee-ff29-4253-9398-c497da20c784-combined-ca-bundle\") pod \"865625ee-ff29-4253-9398-c497da20c784\" (UID: \"865625ee-ff29-4253-9398-c497da20c784\") " Jan 09 13:51:26 crc kubenswrapper[4919]: I0109 13:51:26.960314 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/865625ee-ff29-4253-9398-c497da20c784-logs" (OuterVolumeSpecName: "logs") pod "865625ee-ff29-4253-9398-c497da20c784" (UID: "865625ee-ff29-4253-9398-c497da20c784"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:51:26 crc kubenswrapper[4919]: I0109 13:51:26.970077 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/865625ee-ff29-4253-9398-c497da20c784-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "865625ee-ff29-4253-9398-c497da20c784" (UID: "865625ee-ff29-4253-9398-c497da20c784"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:51:26 crc kubenswrapper[4919]: I0109 13:51:26.970510 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/865625ee-ff29-4253-9398-c497da20c784-kube-api-access-vh5wr" (OuterVolumeSpecName: "kube-api-access-vh5wr") pod "865625ee-ff29-4253-9398-c497da20c784" (UID: "865625ee-ff29-4253-9398-c497da20c784"). InnerVolumeSpecName "kube-api-access-vh5wr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:51:27 crc kubenswrapper[4919]: I0109 13:51:27.007499 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/865625ee-ff29-4253-9398-c497da20c784-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "865625ee-ff29-4253-9398-c497da20c784" (UID: "865625ee-ff29-4253-9398-c497da20c784"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:51:27 crc kubenswrapper[4919]: I0109 13:51:27.030034 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/865625ee-ff29-4253-9398-c497da20c784-config-data" (OuterVolumeSpecName: "config-data") pod "865625ee-ff29-4253-9398-c497da20c784" (UID: "865625ee-ff29-4253-9398-c497da20c784"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:51:27 crc kubenswrapper[4919]: I0109 13:51:27.061686 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vh5wr\" (UniqueName: \"kubernetes.io/projected/865625ee-ff29-4253-9398-c497da20c784-kube-api-access-vh5wr\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:27 crc kubenswrapper[4919]: I0109 13:51:27.061722 4919 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/865625ee-ff29-4253-9398-c497da20c784-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:27 crc kubenswrapper[4919]: I0109 13:51:27.061735 4919 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/865625ee-ff29-4253-9398-c497da20c784-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:27 crc kubenswrapper[4919]: I0109 13:51:27.061746 4919 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/865625ee-ff29-4253-9398-c497da20c784-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:27 crc kubenswrapper[4919]: I0109 13:51:27.061756 4919 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/865625ee-ff29-4253-9398-c497da20c784-logs\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:27 crc kubenswrapper[4919]: I0109 13:51:27.453068 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-84c89c8f4-klmnp" event={"ID":"865625ee-ff29-4253-9398-c497da20c784","Type":"ContainerDied","Data":"cefadf531b652c8ba701e9d027038a18036b25de8369d7f404ee7bfa08d0768d"} Jan 09 13:51:27 crc kubenswrapper[4919]: I0109 13:51:27.453534 4919 scope.go:117] "RemoveContainer" containerID="1952208d789073252e6b5946b90b309d41be21ccb4e89c5b17482e2f673f6c86" Jan 09 13:51:27 crc kubenswrapper[4919]: I0109 13:51:27.453133 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-84c89c8f4-klmnp" Jan 09 13:51:27 crc kubenswrapper[4919]: I0109 13:51:27.453639 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5f95dfdc65-kz6rq" Jan 09 13:51:27 crc kubenswrapper[4919]: I0109 13:51:27.453700 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5f95dfdc65-kz6rq" Jan 09 13:51:27 crc kubenswrapper[4919]: I0109 13:51:27.478680 4919 scope.go:117] "RemoveContainer" containerID="68a6f027203bcb9adb1da5b237987a47d650ae659caaff0409dfb6b315ed2c70" Jan 09 13:51:27 crc kubenswrapper[4919]: I0109 13:51:27.493458 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-84c89c8f4-klmnp"] Jan 09 13:51:27 crc kubenswrapper[4919]: I0109 13:51:27.503453 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-84c89c8f4-klmnp"] Jan 09 13:51:28 crc kubenswrapper[4919]: I0109 13:51:28.771076 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="865625ee-ff29-4253-9398-c497da20c784" path="/var/lib/kubelet/pods/865625ee-ff29-4253-9398-c497da20c784/volumes" Jan 09 13:51:32 crc kubenswrapper[4919]: I0109 13:51:32.230702 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 09 13:51:33 crc kubenswrapper[4919]: I0109 13:51:33.538512 4919 generic.go:334] "Generic (PLEG): container finished" podID="ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2" containerID="9ea28ac6289796bb315c4ba1066c6b3fbe2b9be360102a8d9c166e7e30fa123a" exitCode=137 Jan 09 13:51:33 crc kubenswrapper[4919]: I0109 13:51:33.538686 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2","Type":"ContainerDied","Data":"9ea28ac6289796bb315c4ba1066c6b3fbe2b9be360102a8d9c166e7e30fa123a"} Jan 09 13:51:34 crc kubenswrapper[4919]: I0109 13:51:34.337742 4919 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7bdd978ccd-tx6fx" podUID="158e1b10-ad5e-4a44-a3be-630a2d45bfdc" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused" Jan 09 13:51:34 crc kubenswrapper[4919]: I0109 13:51:34.337865 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7bdd978ccd-tx6fx" Jan 09 13:51:34 crc kubenswrapper[4919]: I0109 13:51:34.477195 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5f95dfdc65-kz6rq" Jan 09 13:51:34 crc kubenswrapper[4919]: I0109 13:51:34.479881 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5f95dfdc65-kz6rq" Jan 09 13:51:34 crc kubenswrapper[4919]: I0109 13:51:34.789751 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:34.921711 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2-sg-core-conf-yaml\") pod \"ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2\" (UID: \"ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2\") " Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:34.921802 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2-run-httpd\") pod \"ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2\" (UID: \"ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2\") " Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:34.921868 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qj4qx\" (UniqueName: \"kubernetes.io/projected/ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2-kube-api-access-qj4qx\") pod \"ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2\" (UID: \"ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2\") " Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:34.921898 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2-log-httpd\") pod \"ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2\" (UID: \"ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2\") " Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:34.921921 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2-config-data\") pod \"ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2\" (UID: \"ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2\") " Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:34.922020 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2-combined-ca-bundle\") pod \"ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2\" (UID: \"ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2\") " Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:34.922049 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2-scripts\") pod \"ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2\" (UID: \"ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2\") " Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:34.922713 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2" (UID: "ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:34.923117 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2" (UID: "ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:34.923596 4919 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:34.923608 4919 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:35.417376 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2-kube-api-access-qj4qx" (OuterVolumeSpecName: "kube-api-access-qj4qx") pod "ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2" (UID: "ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2"). InnerVolumeSpecName "kube-api-access-qj4qx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:35.417593 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2-scripts" (OuterVolumeSpecName: "scripts") pod "ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2" (UID: "ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:35.433760 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qj4qx\" (UniqueName: \"kubernetes.io/projected/ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2-kube-api-access-qj4qx\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:35.433807 4919 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:35.434801 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2" (UID: "ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:35.484709 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2" (UID: "ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:35.490879 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2-config-data" (OuterVolumeSpecName: "config-data") pod "ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2" (UID: "ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:35.535845 4919 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:35.536156 4919 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:35.536168 4919 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:35.586734 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"284d399b-7c07-4e99-9a95-32d600fab162","Type":"ContainerStarted","Data":"4ccb6cdc56953e22100bc95c90e9f1858eec588d90eddd0cd68c55dcdaeca9ea"} Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:35.590848 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2","Type":"ContainerDied","Data":"ffc03854ca52909d1d132a502860c7561dde167bfec904951f573369ff08f806"} Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:35.590908 4919 scope.go:117] "RemoveContainer" containerID="9ea28ac6289796bb315c4ba1066c6b3fbe2b9be360102a8d9c166e7e30fa123a" Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:35.590908 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:35.611298 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.878073127 podStartE2EDuration="14.61127214s" podCreationTimestamp="2026-01-09 13:51:21 +0000 UTC" firstStartedPulling="2026-01-09 13:51:22.755664844 +0000 UTC m=+1262.303504294" lastFinishedPulling="2026-01-09 13:51:34.488863857 +0000 UTC m=+1274.036703307" observedRunningTime="2026-01-09 13:51:35.603104446 +0000 UTC m=+1275.150943896" watchObservedRunningTime="2026-01-09 13:51:35.61127214 +0000 UTC m=+1275.159111590" Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:35.611853 4919 scope.go:117] "RemoveContainer" containerID="843d5a3226a10c1031b29bd041c3a0b80a659a9972449745f80c453e0d0dd7d3" Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:35.634300 4919 scope.go:117] "RemoveContainer" containerID="e9e27490ca5cceadd32c796cb2dfb1ec9b49b2b17c3d9a47c725454b662ce14f" Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:35.635190 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:35.645458 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:35.663347 4919 scope.go:117] "RemoveContainer" containerID="bf9b7f9a1d727c6b93dc2c2db21aad00674c0c5e4b9f563d3bec4ed53f66dab4" Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:35.666296 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 09 13:51:35 crc kubenswrapper[4919]: E0109 13:51:35.666721 4919 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2" containerName="sg-core" Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:35.666738 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2" containerName="sg-core" Jan 09 13:51:35 crc kubenswrapper[4919]: E0109 13:51:35.666756 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="865625ee-ff29-4253-9398-c497da20c784" containerName="barbican-api" Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:35.666764 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="865625ee-ff29-4253-9398-c497da20c784" containerName="barbican-api" Jan 09 13:51:35 crc kubenswrapper[4919]: E0109 13:51:35.666779 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2" containerName="ceilometer-central-agent" Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:35.666784 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2" containerName="ceilometer-central-agent" Jan 09 13:51:35 crc kubenswrapper[4919]: E0109 13:51:35.666798 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="865625ee-ff29-4253-9398-c497da20c784" containerName="barbican-api-log" Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:35.666804 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="865625ee-ff29-4253-9398-c497da20c784" containerName="barbican-api-log" Jan 09 13:51:35 crc kubenswrapper[4919]: E0109 13:51:35.666820 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2" containerName="proxy-httpd" Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:35.666826 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2" containerName="proxy-httpd" Jan 09 13:51:35 crc kubenswrapper[4919]: E0109 13:51:35.666836 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2" containerName="ceilometer-notification-agent" Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:35.666841 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2" containerName="ceilometer-notification-agent" Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:35.667032 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2" containerName="ceilometer-central-agent" Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:35.667047 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="865625ee-ff29-4253-9398-c497da20c784" containerName="barbican-api-log" Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:35.667055 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2" containerName="sg-core" Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:35.667064 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2" containerName="ceilometer-notification-agent" Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:35.667074 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2" containerName="proxy-httpd" Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:35.667084 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="865625ee-ff29-4253-9398-c497da20c784" containerName="barbican-api" Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 
13:51:35.668980 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:35.671640 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:35.672280 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:35.704352 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:35.740502 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca85ff2c-1d91-4e4b-9030-4bfda0c05206-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ca85ff2c-1d91-4e4b-9030-4bfda0c05206\") " pod="openstack/ceilometer-0" Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:35.740553 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca85ff2c-1d91-4e4b-9030-4bfda0c05206-scripts\") pod \"ceilometer-0\" (UID: \"ca85ff2c-1d91-4e4b-9030-4bfda0c05206\") " pod="openstack/ceilometer-0" Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:35.740580 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bl5jl\" (UniqueName: \"kubernetes.io/projected/ca85ff2c-1d91-4e4b-9030-4bfda0c05206-kube-api-access-bl5jl\") pod \"ceilometer-0\" (UID: \"ca85ff2c-1d91-4e4b-9030-4bfda0c05206\") " pod="openstack/ceilometer-0" Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:35.740632 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca85ff2c-1d91-4e4b-9030-4bfda0c05206-config-data\") pod \"ceilometer-0\" (UID: \"ca85ff2c-1d91-4e4b-9030-4bfda0c05206\") " pod="openstack/ceilometer-0" Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:35.740653 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca85ff2c-1d91-4e4b-9030-4bfda0c05206-log-httpd\") pod \"ceilometer-0\" (UID: \"ca85ff2c-1d91-4e4b-9030-4bfda0c05206\") " pod="openstack/ceilometer-0" Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:35.740688 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca85ff2c-1d91-4e4b-9030-4bfda0c05206-run-httpd\") pod \"ceilometer-0\" (UID: \"ca85ff2c-1d91-4e4b-9030-4bfda0c05206\") " pod="openstack/ceilometer-0" Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:35.740789 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ca85ff2c-1d91-4e4b-9030-4bfda0c05206-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ca85ff2c-1d91-4e4b-9030-4bfda0c05206\") " pod="openstack/ceilometer-0" Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:35.842367 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca85ff2c-1d91-4e4b-9030-4bfda0c05206-run-httpd\") pod \"ceilometer-0\" (UID: \"ca85ff2c-1d91-4e4b-9030-4bfda0c05206\") " pod="openstack/ceilometer-0" 
Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:35.842474 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ca85ff2c-1d91-4e4b-9030-4bfda0c05206-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ca85ff2c-1d91-4e4b-9030-4bfda0c05206\") " pod="openstack/ceilometer-0"
Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:35.842561 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca85ff2c-1d91-4e4b-9030-4bfda0c05206-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ca85ff2c-1d91-4e4b-9030-4bfda0c05206\") " pod="openstack/ceilometer-0"
Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:35.842581 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca85ff2c-1d91-4e4b-9030-4bfda0c05206-scripts\") pod \"ceilometer-0\" (UID: \"ca85ff2c-1d91-4e4b-9030-4bfda0c05206\") " pod="openstack/ceilometer-0"
Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:35.842599 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bl5jl\" (UniqueName: \"kubernetes.io/projected/ca85ff2c-1d91-4e4b-9030-4bfda0c05206-kube-api-access-bl5jl\") pod \"ceilometer-0\" (UID: \"ca85ff2c-1d91-4e4b-9030-4bfda0c05206\") " pod="openstack/ceilometer-0"
Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:35.842654 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca85ff2c-1d91-4e4b-9030-4bfda0c05206-config-data\") pod \"ceilometer-0\" (UID: \"ca85ff2c-1d91-4e4b-9030-4bfda0c05206\") " pod="openstack/ceilometer-0"
Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:35.842679 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca85ff2c-1d91-4e4b-9030-4bfda0c05206-log-httpd\") pod \"ceilometer-0\" (UID: \"ca85ff2c-1d91-4e4b-9030-4bfda0c05206\") " pod="openstack/ceilometer-0"
Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:35.843385 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca85ff2c-1d91-4e4b-9030-4bfda0c05206-log-httpd\") pod \"ceilometer-0\" (UID: \"ca85ff2c-1d91-4e4b-9030-4bfda0c05206\") " pod="openstack/ceilometer-0"
Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:35.843611 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca85ff2c-1d91-4e4b-9030-4bfda0c05206-run-httpd\") pod \"ceilometer-0\" (UID: \"ca85ff2c-1d91-4e4b-9030-4bfda0c05206\") " pod="openstack/ceilometer-0"
Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:35.847138 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ca85ff2c-1d91-4e4b-9030-4bfda0c05206-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ca85ff2c-1d91-4e4b-9030-4bfda0c05206\") " pod="openstack/ceilometer-0"
Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:35.848073 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca85ff2c-1d91-4e4b-9030-4bfda0c05206-config-data\") pod \"ceilometer-0\" (UID: \"ca85ff2c-1d91-4e4b-9030-4bfda0c05206\") " pod="openstack/ceilometer-0"
Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:35.864650 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca85ff2c-1d91-4e4b-9030-4bfda0c05206-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ca85ff2c-1d91-4e4b-9030-4bfda0c05206\") " pod="openstack/ceilometer-0"
Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:35.866692 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca85ff2c-1d91-4e4b-9030-4bfda0c05206-scripts\") pod \"ceilometer-0\" (UID: \"ca85ff2c-1d91-4e4b-9030-4bfda0c05206\") " pod="openstack/ceilometer-0"
Jan 09 13:51:35 crc kubenswrapper[4919]: I0109 13:51:35.867043 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bl5jl\" (UniqueName: \"kubernetes.io/projected/ca85ff2c-1d91-4e4b-9030-4bfda0c05206-kube-api-access-bl5jl\") pod \"ceilometer-0\" (UID: \"ca85ff2c-1d91-4e4b-9030-4bfda0c05206\") " pod="openstack/ceilometer-0"
Jan 09 13:51:36 crc kubenswrapper[4919]: I0109 13:51:36.000465 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 09 13:51:36 crc kubenswrapper[4919]: I0109 13:51:36.476597 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 09 13:51:36 crc kubenswrapper[4919]: W0109 13:51:36.482192 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca85ff2c_1d91_4e4b_9030_4bfda0c05206.slice/crio-47a943e4f1ec50195eb7cc1c5abfee4ba8b03ac98fa046ec3e6de1258879717c WatchSource:0}: Error finding container 47a943e4f1ec50195eb7cc1c5abfee4ba8b03ac98fa046ec3e6de1258879717c: Status 404 returned error can't find the container with id 47a943e4f1ec50195eb7cc1c5abfee4ba8b03ac98fa046ec3e6de1258879717c
Jan 09 13:51:36 crc kubenswrapper[4919]: I0109 13:51:36.602989 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca85ff2c-1d91-4e4b-9030-4bfda0c05206","Type":"ContainerStarted","Data":"47a943e4f1ec50195eb7cc1c5abfee4ba8b03ac98fa046ec3e6de1258879717c"}
Jan 09 13:51:36 crc kubenswrapper[4919]: I0109 13:51:36.763988 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2" path="/var/lib/kubelet/pods/ed4e3e1c-be8d-4aa2-b493-4fa64a6a1fd2/volumes"
Jan 09 13:51:37 crc kubenswrapper[4919]: I0109 13:51:37.611848 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca85ff2c-1d91-4e4b-9030-4bfda0c05206","Type":"ContainerStarted","Data":"d144e2f6aeac25655f18ee8db70b66149f117887c556890899bc6a84232b3289"}
Jan 09 13:51:38 crc kubenswrapper[4919]: I0109 13:51:38.620988 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca85ff2c-1d91-4e4b-9030-4bfda0c05206","Type":"ContainerStarted","Data":"fe632bc6eb7848f1f2114fbcaac7e7633be6abc89038650c2da027e7846e8600"}
Jan 09 13:51:38 crc kubenswrapper[4919]: I0109 13:51:38.851524 4919 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod0f94fda4-49d1-4ca0-b5d0-e062ce94a042"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod0f94fda4-49d1-4ca0-b5d0-e062ce94a042] : Timed out while waiting for systemd to remove kubepods-besteffort-pod0f94fda4_49d1_4ca0_b5d0_e062ce94a042.slice"
Jan 09 13:51:39 crc kubenswrapper[4919]: I0109 13:51:39.141875 4919 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 09 13:51:39 crc kubenswrapper[4919]: I0109 13:51:39.630310 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca85ff2c-1d91-4e4b-9030-4bfda0c05206","Type":"ContainerStarted","Data":"2e2ffb17d6e90152a71c772aada2d5fdce41de196767028fa5b710de99048775"}
Jan 09 13:51:40 crc kubenswrapper[4919]: E0109 13:51:40.150890 4919 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod158e1b10_ad5e_4a44_a3be_630a2d45bfdc.slice/crio-conmon-6bc02be1c023954fa281e82eccc50a9262899736d9b2a950140c11a70d979153.scope\": RecentStats: unable to find data in memory cache]"
Jan 09 13:51:40 crc kubenswrapper[4919]: I0109 13:51:40.476685 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7bdd978ccd-tx6fx"
Jan 09 13:51:40 crc kubenswrapper[4919]: I0109 13:51:40.541231 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/158e1b10-ad5e-4a44-a3be-630a2d45bfdc-combined-ca-bundle\") pod \"158e1b10-ad5e-4a44-a3be-630a2d45bfdc\" (UID: \"158e1b10-ad5e-4a44-a3be-630a2d45bfdc\") "
Jan 09 13:51:40 crc kubenswrapper[4919]: I0109 13:51:40.541337 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/158e1b10-ad5e-4a44-a3be-630a2d45bfdc-horizon-secret-key\") pod \"158e1b10-ad5e-4a44-a3be-630a2d45bfdc\" (UID: \"158e1b10-ad5e-4a44-a3be-630a2d45bfdc\") "
Jan 09 13:51:40 crc kubenswrapper[4919]: I0109 13:51:40.541371 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/158e1b10-ad5e-4a44-a3be-630a2d45bfdc-logs\") pod \"158e1b10-ad5e-4a44-a3be-630a2d45bfdc\" (UID: \"158e1b10-ad5e-4a44-a3be-630a2d45bfdc\") "
Jan 09 13:51:40 crc kubenswrapper[4919]: I0109 13:51:40.541405 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/158e1b10-ad5e-4a44-a3be-630a2d45bfdc-horizon-tls-certs\") pod \"158e1b10-ad5e-4a44-a3be-630a2d45bfdc\" (UID: \"158e1b10-ad5e-4a44-a3be-630a2d45bfdc\") "
Jan 09 13:51:40 crc kubenswrapper[4919]: I0109 13:51:40.541471 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p45p5\" (UniqueName: \"kubernetes.io/projected/158e1b10-ad5e-4a44-a3be-630a2d45bfdc-kube-api-access-p45p5\") pod \"158e1b10-ad5e-4a44-a3be-630a2d45bfdc\" (UID: \"158e1b10-ad5e-4a44-a3be-630a2d45bfdc\") "
Jan 09 13:51:40 crc kubenswrapper[4919]: I0109 13:51:40.541635 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/158e1b10-ad5e-4a44-a3be-630a2d45bfdc-config-data\") pod \"158e1b10-ad5e-4a44-a3be-630a2d45bfdc\" (UID: \"158e1b10-ad5e-4a44-a3be-630a2d45bfdc\") "
Jan 09 13:51:40 crc kubenswrapper[4919]: I0109 13:51:40.541657 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/158e1b10-ad5e-4a44-a3be-630a2d45bfdc-scripts\") pod \"158e1b10-ad5e-4a44-a3be-630a2d45bfdc\" (UID: \"158e1b10-ad5e-4a44-a3be-630a2d45bfdc\") "
Jan 09 13:51:40 crc kubenswrapper[4919]: I0109 13:51:40.542259 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/158e1b10-ad5e-4a44-a3be-630a2d45bfdc-logs" (OuterVolumeSpecName: "logs") pod "158e1b10-ad5e-4a44-a3be-630a2d45bfdc" (UID: "158e1b10-ad5e-4a44-a3be-630a2d45bfdc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 13:51:40 crc kubenswrapper[4919]: I0109 13:51:40.552138 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/158e1b10-ad5e-4a44-a3be-630a2d45bfdc-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "158e1b10-ad5e-4a44-a3be-630a2d45bfdc" (UID: "158e1b10-ad5e-4a44-a3be-630a2d45bfdc"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 13:51:40 crc kubenswrapper[4919]: I0109 13:51:40.553383 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/158e1b10-ad5e-4a44-a3be-630a2d45bfdc-kube-api-access-p45p5" (OuterVolumeSpecName: "kube-api-access-p45p5") pod "158e1b10-ad5e-4a44-a3be-630a2d45bfdc" (UID: "158e1b10-ad5e-4a44-a3be-630a2d45bfdc"). InnerVolumeSpecName "kube-api-access-p45p5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 13:51:40 crc kubenswrapper[4919]: I0109 13:51:40.621953 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/158e1b10-ad5e-4a44-a3be-630a2d45bfdc-config-data" (OuterVolumeSpecName: "config-data") pod "158e1b10-ad5e-4a44-a3be-630a2d45bfdc" (UID: "158e1b10-ad5e-4a44-a3be-630a2d45bfdc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 13:51:40 crc kubenswrapper[4919]: I0109 13:51:40.622981 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/158e1b10-ad5e-4a44-a3be-630a2d45bfdc-scripts" (OuterVolumeSpecName: "scripts") pod "158e1b10-ad5e-4a44-a3be-630a2d45bfdc" (UID: "158e1b10-ad5e-4a44-a3be-630a2d45bfdc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 13:51:40 crc kubenswrapper[4919]: I0109 13:51:40.625495 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/158e1b10-ad5e-4a44-a3be-630a2d45bfdc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "158e1b10-ad5e-4a44-a3be-630a2d45bfdc" (UID: "158e1b10-ad5e-4a44-a3be-630a2d45bfdc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 13:51:40 crc kubenswrapper[4919]: I0109 13:51:40.653414 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p45p5\" (UniqueName: \"kubernetes.io/projected/158e1b10-ad5e-4a44-a3be-630a2d45bfdc-kube-api-access-p45p5\") on node \"crc\" DevicePath \"\""
Jan 09 13:51:40 crc kubenswrapper[4919]: I0109 13:51:40.653444 4919 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/158e1b10-ad5e-4a44-a3be-630a2d45bfdc-config-data\") on node \"crc\" DevicePath \"\""
Jan 09 13:51:40 crc kubenswrapper[4919]: I0109 13:51:40.653459 4919 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/158e1b10-ad5e-4a44-a3be-630a2d45bfdc-scripts\") on node \"crc\" DevicePath \"\""
Jan 09 13:51:40 crc kubenswrapper[4919]: I0109 13:51:40.653468 4919 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/158e1b10-ad5e-4a44-a3be-630a2d45bfdc-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 09 13:51:40 crc kubenswrapper[4919]: I0109 13:51:40.653476 4919 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/158e1b10-ad5e-4a44-a3be-630a2d45bfdc-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Jan 09 13:51:40 crc kubenswrapper[4919]: I0109 13:51:40.653484 4919 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/158e1b10-ad5e-4a44-a3be-630a2d45bfdc-logs\") on node \"crc\" DevicePath \"\""
Jan 09 13:51:40 crc kubenswrapper[4919]: I0109 13:51:40.657078 4919 generic.go:334] "Generic (PLEG): container finished" podID="158e1b10-ad5e-4a44-a3be-630a2d45bfdc" containerID="6bc02be1c023954fa281e82eccc50a9262899736d9b2a950140c11a70d979153" exitCode=137
Jan 09 13:51:40 crc kubenswrapper[4919]: I0109 13:51:40.657169 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bdd978ccd-tx6fx" event={"ID":"158e1b10-ad5e-4a44-a3be-630a2d45bfdc","Type":"ContainerDied","Data":"6bc02be1c023954fa281e82eccc50a9262899736d9b2a950140c11a70d979153"}
Jan 09 13:51:40 crc kubenswrapper[4919]: I0109 13:51:40.657317 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bdd978ccd-tx6fx" event={"ID":"158e1b10-ad5e-4a44-a3be-630a2d45bfdc","Type":"ContainerDied","Data":"d2ad3ec5faeacbb4096485b8e60aaf5e2eebbfd348c48bca815480941d61b092"}
Jan 09 13:51:40 crc kubenswrapper[4919]: I0109 13:51:40.657364 4919 scope.go:117] "RemoveContainer" containerID="5c819616410e56b0be1791f6160f91f8536c75f61179a540a4f44a261b16ac64"
Jan 09 13:51:40 crc kubenswrapper[4919]: I0109 13:51:40.657527 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7bdd978ccd-tx6fx"
Jan 09 13:51:40 crc kubenswrapper[4919]: I0109 13:51:40.668694 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 09 13:51:40 crc kubenswrapper[4919]: I0109 13:51:40.681922 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/158e1b10-ad5e-4a44-a3be-630a2d45bfdc-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "158e1b10-ad5e-4a44-a3be-630a2d45bfdc" (UID: "158e1b10-ad5e-4a44-a3be-630a2d45bfdc"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 13:51:40 crc kubenswrapper[4919]: I0109 13:51:40.699647 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.892290322 podStartE2EDuration="5.699620203s" podCreationTimestamp="2026-01-09 13:51:35 +0000 UTC" firstStartedPulling="2026-01-09 13:51:36.484867572 +0000 UTC m=+1276.032707022" lastFinishedPulling="2026-01-09 13:51:40.292197453 +0000 UTC m=+1279.840036903" observedRunningTime="2026-01-09 13:51:40.692565547 +0000 UTC m=+1280.240404997" watchObservedRunningTime="2026-01-09 13:51:40.699620203 +0000 UTC m=+1280.247459653"
Jan 09 13:51:40 crc kubenswrapper[4919]: I0109 13:51:40.755259 4919 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/158e1b10-ad5e-4a44-a3be-630a2d45bfdc-horizon-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 09 13:51:40 crc kubenswrapper[4919]: I0109 13:51:40.848935 4919 scope.go:117] "RemoveContainer" containerID="6bc02be1c023954fa281e82eccc50a9262899736d9b2a950140c11a70d979153"
Jan 09 13:51:40 crc kubenswrapper[4919]: I0109 13:51:40.884239 4919 scope.go:117] "RemoveContainer" containerID="5c819616410e56b0be1791f6160f91f8536c75f61179a540a4f44a261b16ac64"
Jan 09 13:51:40 crc kubenswrapper[4919]: E0109 13:51:40.884931 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c819616410e56b0be1791f6160f91f8536c75f61179a540a4f44a261b16ac64\": container with ID starting with 5c819616410e56b0be1791f6160f91f8536c75f61179a540a4f44a261b16ac64 not found: ID does not exist" containerID="5c819616410e56b0be1791f6160f91f8536c75f61179a540a4f44a261b16ac64"
Jan 09 13:51:40 crc kubenswrapper[4919]: I0109 13:51:40.884986 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c819616410e56b0be1791f6160f91f8536c75f61179a540a4f44a261b16ac64"} err="failed to get container status \"5c819616410e56b0be1791f6160f91f8536c75f61179a540a4f44a261b16ac64\": rpc error: code = NotFound desc = could not find container \"5c819616410e56b0be1791f6160f91f8536c75f61179a540a4f44a261b16ac64\": container with ID starting with 5c819616410e56b0be1791f6160f91f8536c75f61179a540a4f44a261b16ac64 not found: ID does not exist"
Jan 09 13:51:40 crc kubenswrapper[4919]: I0109 13:51:40.885023 4919 scope.go:117] "RemoveContainer" containerID="6bc02be1c023954fa281e82eccc50a9262899736d9b2a950140c11a70d979153"
Jan 09 13:51:40 crc kubenswrapper[4919]: E0109 13:51:40.885451 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bc02be1c023954fa281e82eccc50a9262899736d9b2a950140c11a70d979153\": container with ID starting with 6bc02be1c023954fa281e82eccc50a9262899736d9b2a950140c11a70d979153 not found: ID does not exist" containerID="6bc02be1c023954fa281e82eccc50a9262899736d9b2a950140c11a70d979153"
Jan 09 13:51:40 crc kubenswrapper[4919]: I0109 13:51:40.885484 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bc02be1c023954fa281e82eccc50a9262899736d9b2a950140c11a70d979153"} err="failed to get container status \"6bc02be1c023954fa281e82eccc50a9262899736d9b2a950140c11a70d979153\": rpc error: code = NotFound desc = could not find container \"6bc02be1c023954fa281e82eccc50a9262899736d9b2a950140c11a70d979153\": container with ID starting with 6bc02be1c023954fa281e82eccc50a9262899736d9b2a950140c11a70d979153 not found: ID does not exist"
Jan 09 13:51:40 crc kubenswrapper[4919]: I0109 13:51:40.987306 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7bdd978ccd-tx6fx"]
Jan 09 13:51:40 crc kubenswrapper[4919]: I0109 13:51:40.994668 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7bdd978ccd-tx6fx"]
Jan 09 13:51:41 crc kubenswrapper[4919]: I0109 13:51:41.678894 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca85ff2c-1d91-4e4b-9030-4bfda0c05206","Type":"ContainerStarted","Data":"5424dac0999699ce39a2afd0dae7100f45f7b68759e885e6081d8a2ad65b4859"}
Jan 09 13:51:42 crc kubenswrapper[4919]: I0109 13:51:42.762832 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="158e1b10-ad5e-4a44-a3be-630a2d45bfdc" path="/var/lib/kubelet/pods/158e1b10-ad5e-4a44-a3be-630a2d45bfdc/volumes"
Jan 09 13:51:43 crc kubenswrapper[4919]: I0109 13:51:43.458388 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-td64x"]
Jan 09 13:51:43 crc kubenswrapper[4919]: E0109 13:51:43.459021 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="158e1b10-ad5e-4a44-a3be-630a2d45bfdc" containerName="horizon"
Jan 09 13:51:43 crc kubenswrapper[4919]: I0109 13:51:43.459038 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="158e1b10-ad5e-4a44-a3be-630a2d45bfdc" containerName="horizon"
Jan 09 13:51:43 crc kubenswrapper[4919]: E0109 13:51:43.459071 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="158e1b10-ad5e-4a44-a3be-630a2d45bfdc" containerName="horizon-log"
Jan 09 13:51:43 crc kubenswrapper[4919]: I0109 13:51:43.459078 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="158e1b10-ad5e-4a44-a3be-630a2d45bfdc" containerName="horizon-log"
Jan 09 13:51:43 crc kubenswrapper[4919]: I0109 13:51:43.459277 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="158e1b10-ad5e-4a44-a3be-630a2d45bfdc" containerName="horizon-log"
Jan 09 13:51:43 crc kubenswrapper[4919]: I0109 13:51:43.459303 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="158e1b10-ad5e-4a44-a3be-630a2d45bfdc" containerName="horizon"
Jan 09 13:51:43 crc kubenswrapper[4919]: I0109 13:51:43.459888 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-td64x"
Jan 09 13:51:43 crc kubenswrapper[4919]: I0109 13:51:43.471608 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-td64x"]
Jan 09 13:51:43 crc kubenswrapper[4919]: I0109 13:51:43.545711 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-dxcpg"]
Jan 09 13:51:43 crc kubenswrapper[4919]: I0109 13:51:43.546980 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-dxcpg"
Jan 09 13:51:43 crc kubenswrapper[4919]: I0109 13:51:43.574284 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-dxcpg"]
Jan 09 13:51:43 crc kubenswrapper[4919]: I0109 13:51:43.629641 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntk8z\" (UniqueName: \"kubernetes.io/projected/2f76563c-d515-4fdf-9011-6612ff2b5665-kube-api-access-ntk8z\") pod \"nova-api-db-create-td64x\" (UID: \"2f76563c-d515-4fdf-9011-6612ff2b5665\") " pod="openstack/nova-api-db-create-td64x"
Jan 09 13:51:43 crc kubenswrapper[4919]: I0109 13:51:43.629705 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0d18dc7-6c07-4aab-b06f-91137d1809b0-operator-scripts\") pod \"nova-cell0-db-create-dxcpg\" (UID: \"f0d18dc7-6c07-4aab-b06f-91137d1809b0\") " pod="openstack/nova-cell0-db-create-dxcpg"
Jan 09 13:51:43 crc kubenswrapper[4919]: I0109 13:51:43.629749 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtjht\" (UniqueName: \"kubernetes.io/projected/f0d18dc7-6c07-4aab-b06f-91137d1809b0-kube-api-access-mtjht\") pod \"nova-cell0-db-create-dxcpg\" (UID: \"f0d18dc7-6c07-4aab-b06f-91137d1809b0\") " pod="openstack/nova-cell0-db-create-dxcpg"
Jan 09 13:51:43 crc kubenswrapper[4919]: I0109 13:51:43.629822 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f76563c-d515-4fdf-9011-6612ff2b5665-operator-scripts\") pod \"nova-api-db-create-td64x\" (UID: \"2f76563c-d515-4fdf-9011-6612ff2b5665\") " pod="openstack/nova-api-db-create-td64x"
Jan 09 13:51:43 crc kubenswrapper[4919]: I0109 13:51:43.650190 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-bsnzf"]
Jan 09 13:51:43 crc kubenswrapper[4919]: I0109 13:51:43.651461 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-bsnzf"
Jan 09 13:51:43 crc kubenswrapper[4919]: I0109 13:51:43.660254 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-1600-account-create-update-hvtzf"]
Jan 09 13:51:43 crc kubenswrapper[4919]: I0109 13:51:43.661632 4919 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-api-1600-account-create-update-hvtzf" Jan 09 13:51:43 crc kubenswrapper[4919]: I0109 13:51:43.664350 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 09 13:51:43 crc kubenswrapper[4919]: I0109 13:51:43.670041 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-bsnzf"] Jan 09 13:51:43 crc kubenswrapper[4919]: I0109 13:51:43.688016 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-1600-account-create-update-hvtzf"] Jan 09 13:51:43 crc kubenswrapper[4919]: I0109 13:51:43.733268 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntk8z\" (UniqueName: \"kubernetes.io/projected/2f76563c-d515-4fdf-9011-6612ff2b5665-kube-api-access-ntk8z\") pod \"nova-api-db-create-td64x\" (UID: \"2f76563c-d515-4fdf-9011-6612ff2b5665\") " pod="openstack/nova-api-db-create-td64x" Jan 09 13:51:43 crc kubenswrapper[4919]: I0109 13:51:43.733340 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0d18dc7-6c07-4aab-b06f-91137d1809b0-operator-scripts\") pod \"nova-cell0-db-create-dxcpg\" (UID: \"f0d18dc7-6c07-4aab-b06f-91137d1809b0\") " pod="openstack/nova-cell0-db-create-dxcpg" Jan 09 13:51:43 crc kubenswrapper[4919]: I0109 13:51:43.733379 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtjht\" (UniqueName: \"kubernetes.io/projected/f0d18dc7-6c07-4aab-b06f-91137d1809b0-kube-api-access-mtjht\") pod \"nova-cell0-db-create-dxcpg\" (UID: \"f0d18dc7-6c07-4aab-b06f-91137d1809b0\") " pod="openstack/nova-cell0-db-create-dxcpg" Jan 09 13:51:43 crc kubenswrapper[4919]: I0109 13:51:43.733451 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f76563c-d515-4fdf-9011-6612ff2b5665-operator-scripts\") pod \"nova-api-db-create-td64x\" (UID: \"2f76563c-d515-4fdf-9011-6612ff2b5665\") " pod="openstack/nova-api-db-create-td64x" Jan 09 13:51:43 crc kubenswrapper[4919]: I0109 13:51:43.734551 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0d18dc7-6c07-4aab-b06f-91137d1809b0-operator-scripts\") pod \"nova-cell0-db-create-dxcpg\" (UID: \"f0d18dc7-6c07-4aab-b06f-91137d1809b0\") " pod="openstack/nova-cell0-db-create-dxcpg" Jan 09 13:51:43 crc kubenswrapper[4919]: I0109 13:51:43.737823 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f76563c-d515-4fdf-9011-6612ff2b5665-operator-scripts\") pod \"nova-api-db-create-td64x\" (UID: \"2f76563c-d515-4fdf-9011-6612ff2b5665\") " pod="openstack/nova-api-db-create-td64x" Jan 09 13:51:43 crc kubenswrapper[4919]: I0109 13:51:43.754836 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtjht\" (UniqueName: \"kubernetes.io/projected/f0d18dc7-6c07-4aab-b06f-91137d1809b0-kube-api-access-mtjht\") pod \"nova-cell0-db-create-dxcpg\" (UID: \"f0d18dc7-6c07-4aab-b06f-91137d1809b0\") " pod="openstack/nova-cell0-db-create-dxcpg" Jan 09 13:51:43 crc kubenswrapper[4919]: I0109 13:51:43.756895 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntk8z\" (UniqueName: 
\"kubernetes.io/projected/2f76563c-d515-4fdf-9011-6612ff2b5665-kube-api-access-ntk8z\") pod \"nova-api-db-create-td64x\" (UID: \"2f76563c-d515-4fdf-9011-6612ff2b5665\") " pod="openstack/nova-api-db-create-td64x" Jan 09 13:51:43 crc kubenswrapper[4919]: I0109 13:51:43.828884 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-td64x" Jan 09 13:51:43 crc kubenswrapper[4919]: I0109 13:51:43.837791 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpjk4\" (UniqueName: \"kubernetes.io/projected/ea5378b8-a527-4f7f-b55a-48590aae7ff1-kube-api-access-zpjk4\") pod \"nova-cell1-db-create-bsnzf\" (UID: \"ea5378b8-a527-4f7f-b55a-48590aae7ff1\") " pod="openstack/nova-cell1-db-create-bsnzf" Jan 09 13:51:43 crc kubenswrapper[4919]: I0109 13:51:43.837859 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea5378b8-a527-4f7f-b55a-48590aae7ff1-operator-scripts\") pod \"nova-cell1-db-create-bsnzf\" (UID: \"ea5378b8-a527-4f7f-b55a-48590aae7ff1\") " pod="openstack/nova-cell1-db-create-bsnzf" Jan 09 13:51:43 crc kubenswrapper[4919]: I0109 13:51:43.837975 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fba162e0-e000-4b80-8a7f-94699ad1c121-operator-scripts\") pod \"nova-api-1600-account-create-update-hvtzf\" (UID: \"fba162e0-e000-4b80-8a7f-94699ad1c121\") " pod="openstack/nova-api-1600-account-create-update-hvtzf" Jan 09 13:51:43 crc kubenswrapper[4919]: I0109 13:51:43.838012 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfnbb\" (UniqueName: \"kubernetes.io/projected/fba162e0-e000-4b80-8a7f-94699ad1c121-kube-api-access-sfnbb\") pod \"nova-api-1600-account-create-update-hvtzf\" (UID: \"fba162e0-e000-4b80-8a7f-94699ad1c121\") " pod="openstack/nova-api-1600-account-create-update-hvtzf" Jan 09 13:51:43 crc kubenswrapper[4919]: I0109 13:51:43.864592 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-dxcpg" Jan 09 13:51:43 crc kubenswrapper[4919]: I0109 13:51:43.868396 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-b660-account-create-update-b5bxk"] Jan 09 13:51:43 crc kubenswrapper[4919]: I0109 13:51:43.870089 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-b660-account-create-update-b5bxk" Jan 09 13:51:43 crc kubenswrapper[4919]: I0109 13:51:43.877801 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 09 13:51:43 crc kubenswrapper[4919]: I0109 13:51:43.900450 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-b660-account-create-update-b5bxk"] Jan 09 13:51:43 crc kubenswrapper[4919]: I0109 13:51:43.939765 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfnbb\" (UniqueName: \"kubernetes.io/projected/fba162e0-e000-4b80-8a7f-94699ad1c121-kube-api-access-sfnbb\") pod \"nova-api-1600-account-create-update-hvtzf\" (UID: \"fba162e0-e000-4b80-8a7f-94699ad1c121\") " pod="openstack/nova-api-1600-account-create-update-hvtzf" Jan 09 13:51:43 crc kubenswrapper[4919]: I0109 13:51:43.940080 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpjk4\" (UniqueName: \"kubernetes.io/projected/ea5378b8-a527-4f7f-b55a-48590aae7ff1-kube-api-access-zpjk4\") pod \"nova-cell1-db-create-bsnzf\" (UID: \"ea5378b8-a527-4f7f-b55a-48590aae7ff1\") " pod="openstack/nova-cell1-db-create-bsnzf" Jan 09 13:51:43 crc kubenswrapper[4919]: I0109 13:51:43.940129 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea5378b8-a527-4f7f-b55a-48590aae7ff1-operator-scripts\") pod \"nova-cell1-db-create-bsnzf\" (UID: \"ea5378b8-a527-4f7f-b55a-48590aae7ff1\") " pod="openstack/nova-cell1-db-create-bsnzf" Jan 09 13:51:43 crc kubenswrapper[4919]: I0109 13:51:43.940191 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fba162e0-e000-4b80-8a7f-94699ad1c121-operator-scripts\") pod \"nova-api-1600-account-create-update-hvtzf\" (UID: \"fba162e0-e000-4b80-8a7f-94699ad1c121\") " pod="openstack/nova-api-1600-account-create-update-hvtzf" Jan 09 13:51:43 crc kubenswrapper[4919]: I0109 13:51:43.941015 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fba162e0-e000-4b80-8a7f-94699ad1c121-operator-scripts\") pod \"nova-api-1600-account-create-update-hvtzf\" (UID: \"fba162e0-e000-4b80-8a7f-94699ad1c121\") " pod="openstack/nova-api-1600-account-create-update-hvtzf" Jan 09 13:51:43 crc kubenswrapper[4919]: I0109 13:51:43.941900 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea5378b8-a527-4f7f-b55a-48590aae7ff1-operator-scripts\") pod \"nova-cell1-db-create-bsnzf\" (UID: \"ea5378b8-a527-4f7f-b55a-48590aae7ff1\") " pod="openstack/nova-cell1-db-create-bsnzf" Jan 09 13:51:43 crc kubenswrapper[4919]: I0109 13:51:43.971091 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfnbb\" (UniqueName: \"kubernetes.io/projected/fba162e0-e000-4b80-8a7f-94699ad1c121-kube-api-access-sfnbb\") pod \"nova-api-1600-account-create-update-hvtzf\" (UID: \"fba162e0-e000-4b80-8a7f-94699ad1c121\") " pod="openstack/nova-api-1600-account-create-update-hvtzf" Jan 09 13:51:43 crc kubenswrapper[4919]: I0109 13:51:43.975929 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpjk4\" (UniqueName: \"kubernetes.io/projected/ea5378b8-a527-4f7f-b55a-48590aae7ff1-kube-api-access-zpjk4\") pod 
\"nova-cell1-db-create-bsnzf\" (UID: \"ea5378b8-a527-4f7f-b55a-48590aae7ff1\") " pod="openstack/nova-cell1-db-create-bsnzf" Jan 09 13:51:44 crc kubenswrapper[4919]: I0109 13:51:44.002492 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1600-account-create-update-hvtzf" Jan 09 13:51:44 crc kubenswrapper[4919]: I0109 13:51:44.047408 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2tbt\" (UniqueName: \"kubernetes.io/projected/e0aa76ff-ed23-4978-8fe0-c0144d775a7a-kube-api-access-d2tbt\") pod \"nova-cell0-b660-account-create-update-b5bxk\" (UID: \"e0aa76ff-ed23-4978-8fe0-c0144d775a7a\") " pod="openstack/nova-cell0-b660-account-create-update-b5bxk" Jan 09 13:51:44 crc kubenswrapper[4919]: I0109 13:51:44.047602 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0aa76ff-ed23-4978-8fe0-c0144d775a7a-operator-scripts\") pod \"nova-cell0-b660-account-create-update-b5bxk\" (UID: \"e0aa76ff-ed23-4978-8fe0-c0144d775a7a\") " pod="openstack/nova-cell0-b660-account-create-update-b5bxk" Jan 09 13:51:44 crc kubenswrapper[4919]: I0109 13:51:44.065291 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-490e-account-create-update-lwv2k"] Jan 09 13:51:44 crc kubenswrapper[4919]: I0109 13:51:44.066510 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-490e-account-create-update-lwv2k" Jan 09 13:51:44 crc kubenswrapper[4919]: I0109 13:51:44.076843 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 09 13:51:44 crc kubenswrapper[4919]: I0109 13:51:44.076942 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-490e-account-create-update-lwv2k"] Jan 09 13:51:44 crc kubenswrapper[4919]: I0109 13:51:44.152458 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2tbt\" (UniqueName: \"kubernetes.io/projected/e0aa76ff-ed23-4978-8fe0-c0144d775a7a-kube-api-access-d2tbt\") pod \"nova-cell0-b660-account-create-update-b5bxk\" (UID: \"e0aa76ff-ed23-4978-8fe0-c0144d775a7a\") " pod="openstack/nova-cell0-b660-account-create-update-b5bxk" Jan 09 13:51:44 crc kubenswrapper[4919]: I0109 13:51:44.152678 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0aa76ff-ed23-4978-8fe0-c0144d775a7a-operator-scripts\") pod \"nova-cell0-b660-account-create-update-b5bxk\" (UID: \"e0aa76ff-ed23-4978-8fe0-c0144d775a7a\") " pod="openstack/nova-cell0-b660-account-create-update-b5bxk" Jan 09 13:51:44 crc kubenswrapper[4919]: I0109 13:51:44.154182 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0aa76ff-ed23-4978-8fe0-c0144d775a7a-operator-scripts\") pod \"nova-cell0-b660-account-create-update-b5bxk\" (UID: \"e0aa76ff-ed23-4978-8fe0-c0144d775a7a\") " pod="openstack/nova-cell0-b660-account-create-update-b5bxk" Jan 09 13:51:44 crc kubenswrapper[4919]: I0109 13:51:44.176578 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2tbt\" (UniqueName: \"kubernetes.io/projected/e0aa76ff-ed23-4978-8fe0-c0144d775a7a-kube-api-access-d2tbt\") pod \"nova-cell0-b660-account-create-update-b5bxk\" (UID: 
\"e0aa76ff-ed23-4978-8fe0-c0144d775a7a\") " pod="openstack/nova-cell0-b660-account-create-update-b5bxk" Jan 09 13:51:44 crc kubenswrapper[4919]: I0109 13:51:44.254663 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5333c11-0798-492a-862a-d6c9076a5fe6-operator-scripts\") pod \"nova-cell1-490e-account-create-update-lwv2k\" (UID: \"d5333c11-0798-492a-862a-d6c9076a5fe6\") " pod="openstack/nova-cell1-490e-account-create-update-lwv2k" Jan 09 13:51:44 crc kubenswrapper[4919]: I0109 13:51:44.254744 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjbkq\" (UniqueName: \"kubernetes.io/projected/d5333c11-0798-492a-862a-d6c9076a5fe6-kube-api-access-vjbkq\") pod \"nova-cell1-490e-account-create-update-lwv2k\" (UID: \"d5333c11-0798-492a-862a-d6c9076a5fe6\") " pod="openstack/nova-cell1-490e-account-create-update-lwv2k" Jan 09 13:51:44 crc kubenswrapper[4919]: I0109 13:51:44.268599 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-bsnzf" Jan 09 13:51:44 crc kubenswrapper[4919]: I0109 13:51:44.306634 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-b660-account-create-update-b5bxk" Jan 09 13:51:44 crc kubenswrapper[4919]: I0109 13:51:44.356462 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjbkq\" (UniqueName: \"kubernetes.io/projected/d5333c11-0798-492a-862a-d6c9076a5fe6-kube-api-access-vjbkq\") pod \"nova-cell1-490e-account-create-update-lwv2k\" (UID: \"d5333c11-0798-492a-862a-d6c9076a5fe6\") " pod="openstack/nova-cell1-490e-account-create-update-lwv2k" Jan 09 13:51:44 crc kubenswrapper[4919]: I0109 13:51:44.356644 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5333c11-0798-492a-862a-d6c9076a5fe6-operator-scripts\") pod \"nova-cell1-490e-account-create-update-lwv2k\" (UID: \"d5333c11-0798-492a-862a-d6c9076a5fe6\") " pod="openstack/nova-cell1-490e-account-create-update-lwv2k" Jan 09 13:51:44 crc kubenswrapper[4919]: I0109 13:51:44.357416 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5333c11-0798-492a-862a-d6c9076a5fe6-operator-scripts\") pod \"nova-cell1-490e-account-create-update-lwv2k\" (UID: \"d5333c11-0798-492a-862a-d6c9076a5fe6\") " pod="openstack/nova-cell1-490e-account-create-update-lwv2k" Jan 09 13:51:44 crc kubenswrapper[4919]: I0109 13:51:44.377594 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjbkq\" (UniqueName: \"kubernetes.io/projected/d5333c11-0798-492a-862a-d6c9076a5fe6-kube-api-access-vjbkq\") pod \"nova-cell1-490e-account-create-update-lwv2k\" (UID: \"d5333c11-0798-492a-862a-d6c9076a5fe6\") " pod="openstack/nova-cell1-490e-account-create-update-lwv2k" Jan 09 13:51:44 crc kubenswrapper[4919]: I0109 13:51:44.395469 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-490e-account-create-update-lwv2k" Jan 09 13:51:44 crc kubenswrapper[4919]: I0109 13:51:44.435648 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-td64x"] Jan 09 13:51:44 crc kubenswrapper[4919]: I0109 13:51:44.534091 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-dxcpg"] Jan 09 13:51:44 crc kubenswrapper[4919]: I0109 13:51:44.654628 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-1600-account-create-update-hvtzf"] Jan 09 13:51:44 crc kubenswrapper[4919]: I0109 13:51:44.764769 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1600-account-create-update-hvtzf" event={"ID":"fba162e0-e000-4b80-8a7f-94699ad1c121","Type":"ContainerStarted","Data":"ea58a4d7aaa0a8e4d49c7ef0b214ecfd9ffa65e37fb2b41a2ed037c06cbbb3b4"} Jan 09 13:51:44 crc kubenswrapper[4919]: I0109 13:51:44.764822 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-td64x" event={"ID":"2f76563c-d515-4fdf-9011-6612ff2b5665","Type":"ContainerStarted","Data":"28ae6cd5d087fd4b46ce4a9226d5cfe9b5d5d17314cb9a2428aa12639357a92e"} Jan 09 13:51:44 crc kubenswrapper[4919]: I0109 13:51:44.764838 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-dxcpg" event={"ID":"f0d18dc7-6c07-4aab-b06f-91137d1809b0","Type":"ContainerStarted","Data":"d32436e7870a96bffeee2a25b9231bc33d8247c4e015a1a67087f5645714979b"} Jan 09 13:51:44 crc kubenswrapper[4919]: I0109 13:51:44.831149 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-bsnzf"] Jan 09 13:51:44 crc kubenswrapper[4919]: W0109 13:51:44.844732 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea5378b8_a527_4f7f_b55a_48590aae7ff1.slice/crio-852b01ab26aecc94a5058911931f58dc8153b6ab592067b2e038e02ef647573b WatchSource:0}: Error finding container 852b01ab26aecc94a5058911931f58dc8153b6ab592067b2e038e02ef647573b: Status 404 returned error can't find the container with id 852b01ab26aecc94a5058911931f58dc8153b6ab592067b2e038e02ef647573b Jan 09 13:51:45 crc kubenswrapper[4919]: I0109 13:51:45.018830 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-b660-account-create-update-b5bxk"] Jan 09 13:51:45 crc kubenswrapper[4919]: W0109 13:51:45.033968 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0aa76ff_ed23_4978_8fe0_c0144d775a7a.slice/crio-97cc1cae57c24c37ca5b436774b94bd6c4251b5dfa352f8a4142c5a230ff597a WatchSource:0}: Error finding container 97cc1cae57c24c37ca5b436774b94bd6c4251b5dfa352f8a4142c5a230ff597a: Status 404 returned error can't find the container with id 97cc1cae57c24c37ca5b436774b94bd6c4251b5dfa352f8a4142c5a230ff597a Jan 09 13:51:45 crc kubenswrapper[4919]: W0109 13:51:45.182892 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5333c11_0798_492a_862a_d6c9076a5fe6.slice/crio-8ecffa6b7bc236035874167b805e3717c3b1355eeaa1e2043e3bc11ad9b54dc6 WatchSource:0}: Error finding container 8ecffa6b7bc236035874167b805e3717c3b1355eeaa1e2043e3bc11ad9b54dc6: Status 404 returned error can't find the container with id 8ecffa6b7bc236035874167b805e3717c3b1355eeaa1e2043e3bc11ad9b54dc6 Jan 09 13:51:45 crc kubenswrapper[4919]: I0109 
13:51:45.182907 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-490e-account-create-update-lwv2k"] Jan 09 13:51:45 crc kubenswrapper[4919]: I0109 13:51:45.773407 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-490e-account-create-update-lwv2k" event={"ID":"d5333c11-0798-492a-862a-d6c9076a5fe6","Type":"ContainerStarted","Data":"8ecffa6b7bc236035874167b805e3717c3b1355eeaa1e2043e3bc11ad9b54dc6"} Jan 09 13:51:45 crc kubenswrapper[4919]: I0109 13:51:45.775720 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-b660-account-create-update-b5bxk" event={"ID":"e0aa76ff-ed23-4978-8fe0-c0144d775a7a","Type":"ContainerStarted","Data":"97cc1cae57c24c37ca5b436774b94bd6c4251b5dfa352f8a4142c5a230ff597a"} Jan 09 13:51:45 crc kubenswrapper[4919]: I0109 13:51:45.777294 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-bsnzf" event={"ID":"ea5378b8-a527-4f7f-b55a-48590aae7ff1","Type":"ContainerStarted","Data":"852b01ab26aecc94a5058911931f58dc8153b6ab592067b2e038e02ef647573b"} Jan 09 13:51:46 crc kubenswrapper[4919]: I0109 13:51:46.799349 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 09 13:51:46 crc kubenswrapper[4919]: I0109 13:51:46.799885 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ca85ff2c-1d91-4e4b-9030-4bfda0c05206" containerName="ceilometer-central-agent" containerID="cri-o://d144e2f6aeac25655f18ee8db70b66149f117887c556890899bc6a84232b3289" gracePeriod=30 Jan 09 13:51:46 crc kubenswrapper[4919]: I0109 13:51:46.800332 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ca85ff2c-1d91-4e4b-9030-4bfda0c05206" containerName="proxy-httpd" containerID="cri-o://5424dac0999699ce39a2afd0dae7100f45f7b68759e885e6081d8a2ad65b4859" gracePeriod=30 Jan 09 13:51:46 crc kubenswrapper[4919]: I0109 13:51:46.800456 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ca85ff2c-1d91-4e4b-9030-4bfda0c05206" containerName="sg-core" containerID="cri-o://2e2ffb17d6e90152a71c772aada2d5fdce41de196767028fa5b710de99048775" gracePeriod=30 Jan 09 13:51:46 crc kubenswrapper[4919]: I0109 13:51:46.800452 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ca85ff2c-1d91-4e4b-9030-4bfda0c05206" containerName="ceilometer-notification-agent" containerID="cri-o://fe632bc6eb7848f1f2114fbcaac7e7633be6abc89038650c2da027e7846e8600" gracePeriod=30 Jan 09 13:51:46 crc kubenswrapper[4919]: I0109 13:51:46.824887 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-b660-account-create-update-b5bxk" event={"ID":"e0aa76ff-ed23-4978-8fe0-c0144d775a7a","Type":"ContainerStarted","Data":"0c6be3d93024838ec9d2200c3eb1dcb89b5da60928d8ff7910e89c5ddebd5334"} Jan 09 13:51:46 crc kubenswrapper[4919]: I0109 13:51:46.838099 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1600-account-create-update-hvtzf" event={"ID":"fba162e0-e000-4b80-8a7f-94699ad1c121","Type":"ContainerStarted","Data":"e641d4986d457062131c03c85466779c9a0d0deeab44195975d59efb0b697668"} Jan 09 13:51:46 crc kubenswrapper[4919]: I0109 13:51:46.842696 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 13:51:46 crc kubenswrapper[4919]: I0109 13:51:46.846764 4919 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="0d3d016b-608b-4a81-aeae-7b1e4c75d893" containerName="glance-log" containerID="cri-o://eb51259beedb45deb5ca0242a533d41756213c64df04e453bf556b670c3c7c68" gracePeriod=30 Jan 09 13:51:46 crc kubenswrapper[4919]: I0109 13:51:46.847260 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="0d3d016b-608b-4a81-aeae-7b1e4c75d893" containerName="glance-httpd" containerID="cri-o://1b753bd12d8bf0c44d1d07bd89c93fd795406af0825f1813759a2d127f695b90" gracePeriod=30 Jan 09 13:51:46 crc kubenswrapper[4919]: I0109 13:51:46.850458 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-td64x" event={"ID":"2f76563c-d515-4fdf-9011-6612ff2b5665","Type":"ContainerStarted","Data":"2832a878eac12229938b3d5f9d5d660a40ae2a6dbe1ed905d39e74eed5bd3d35"} Jan 09 13:51:46 crc kubenswrapper[4919]: I0109 13:51:46.854565 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-b660-account-create-update-b5bxk" podStartSLOduration=3.854543927 podStartE2EDuration="3.854543927s" podCreationTimestamp="2026-01-09 13:51:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:51:46.846589639 +0000 UTC m=+1286.394429079" watchObservedRunningTime="2026-01-09 13:51:46.854543927 +0000 UTC m=+1286.402383377" Jan 09 13:51:46 crc kubenswrapper[4919]: I0109 13:51:46.863828 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-dxcpg" event={"ID":"f0d18dc7-6c07-4aab-b06f-91137d1809b0","Type":"ContainerStarted","Data":"d97fa5b1b0175126fee6569134ed337b4981072484c0fff138e95d0067cbd0c0"} Jan 09 13:51:46 crc kubenswrapper[4919]: I0109 13:51:46.870842 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-1600-account-create-update-hvtzf" podStartSLOduration=3.870825012 podStartE2EDuration="3.870825012s" podCreationTimestamp="2026-01-09 13:51:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:51:46.869615472 +0000 UTC m=+1286.417454922" watchObservedRunningTime="2026-01-09 13:51:46.870825012 +0000 UTC m=+1286.418664462" Jan 09 13:51:46 crc kubenswrapper[4919]: I0109 13:51:46.874573 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-bsnzf" event={"ID":"ea5378b8-a527-4f7f-b55a-48590aae7ff1","Type":"ContainerStarted","Data":"9e77fda6b192837f6b1440b756268f5eca3b103d38e78ae8709ffd42ec4cf6f4"} Jan 09 13:51:46 crc kubenswrapper[4919]: I0109 13:51:46.879639 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-490e-account-create-update-lwv2k" event={"ID":"d5333c11-0798-492a-862a-d6c9076a5fe6","Type":"ContainerStarted","Data":"302c4ffbe73f0b7e012dcf6e0e5022508d22d1c34d7d71ba6171b353a4aaa517"} Jan 09 13:51:46 crc kubenswrapper[4919]: I0109 13:51:46.903397 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-bsnzf" podStartSLOduration=3.903373933 podStartE2EDuration="3.903373933s" podCreationTimestamp="2026-01-09 13:51:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:51:46.900696627 +0000 UTC 
m=+1286.448536077" watchObservedRunningTime="2026-01-09 13:51:46.903373933 +0000 UTC m=+1286.451213383" Jan 09 13:51:46 crc kubenswrapper[4919]: I0109 13:51:46.930511 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-dxcpg" podStartSLOduration=3.930488669 podStartE2EDuration="3.930488669s" podCreationTimestamp="2026-01-09 13:51:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:51:46.922322665 +0000 UTC m=+1286.470162115" watchObservedRunningTime="2026-01-09 13:51:46.930488669 +0000 UTC m=+1286.478328119" Jan 09 13:51:46 crc kubenswrapper[4919]: I0109 13:51:46.946070 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-td64x" podStartSLOduration=3.946054397 podStartE2EDuration="3.946054397s" podCreationTimestamp="2026-01-09 13:51:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:51:46.943678327 +0000 UTC m=+1286.491517767" watchObservedRunningTime="2026-01-09 13:51:46.946054397 +0000 UTC m=+1286.493893847" Jan 09 13:51:46 crc kubenswrapper[4919]: I0109 13:51:46.971949 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-490e-account-create-update-lwv2k" podStartSLOduration=2.971924591 podStartE2EDuration="2.971924591s" podCreationTimestamp="2026-01-09 13:51:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:51:46.969581003 +0000 UTC m=+1286.517420453" watchObservedRunningTime="2026-01-09 13:51:46.971924591 +0000 UTC m=+1286.519764041" Jan 09 13:51:47 crc kubenswrapper[4919]: I0109 13:51:47.745686 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 13:51:47 crc kubenswrapper[4919]: I0109 13:51:47.746313 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="fce892da-35ae-4435-a61a-1ee629ddb17e" containerName="glance-log" containerID="cri-o://fe8c4fc7fb1fad73e1ffb857ad738222ef55320b96c9df8a004ade44ddebb4b0" gracePeriod=30 Jan 09 13:51:47 crc kubenswrapper[4919]: I0109 13:51:47.746478 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="fce892da-35ae-4435-a61a-1ee629ddb17e" containerName="glance-httpd" containerID="cri-o://85f656d977b0742903f11a97fd156d9f202368df9b13502fac1357b55b6a390a" gracePeriod=30 Jan 09 13:51:47 crc kubenswrapper[4919]: I0109 13:51:47.890563 4919 generic.go:334] "Generic (PLEG): container finished" podID="0d3d016b-608b-4a81-aeae-7b1e4c75d893" containerID="eb51259beedb45deb5ca0242a533d41756213c64df04e453bf556b670c3c7c68" exitCode=143 Jan 09 13:51:47 crc kubenswrapper[4919]: I0109 13:51:47.890640 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0d3d016b-608b-4a81-aeae-7b1e4c75d893","Type":"ContainerDied","Data":"eb51259beedb45deb5ca0242a533d41756213c64df04e453bf556b670c3c7c68"} Jan 09 13:51:47 crc kubenswrapper[4919]: I0109 13:51:47.892786 4919 generic.go:334] "Generic (PLEG): container finished" podID="f0d18dc7-6c07-4aab-b06f-91137d1809b0" containerID="d97fa5b1b0175126fee6569134ed337b4981072484c0fff138e95d0067cbd0c0" exitCode=0 Jan 09 13:51:47 crc 
kubenswrapper[4919]: I0109 13:51:47.893280 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-dxcpg" event={"ID":"f0d18dc7-6c07-4aab-b06f-91137d1809b0","Type":"ContainerDied","Data":"d97fa5b1b0175126fee6569134ed337b4981072484c0fff138e95d0067cbd0c0"} Jan 09 13:51:47 crc kubenswrapper[4919]: I0109 13:51:47.895445 4919 generic.go:334] "Generic (PLEG): container finished" podID="ca85ff2c-1d91-4e4b-9030-4bfda0c05206" containerID="5424dac0999699ce39a2afd0dae7100f45f7b68759e885e6081d8a2ad65b4859" exitCode=0 Jan 09 13:51:47 crc kubenswrapper[4919]: I0109 13:51:47.895475 4919 generic.go:334] "Generic (PLEG): container finished" podID="ca85ff2c-1d91-4e4b-9030-4bfda0c05206" containerID="2e2ffb17d6e90152a71c772aada2d5fdce41de196767028fa5b710de99048775" exitCode=2 Jan 09 13:51:47 crc kubenswrapper[4919]: I0109 13:51:47.895484 4919 generic.go:334] "Generic (PLEG): container finished" podID="ca85ff2c-1d91-4e4b-9030-4bfda0c05206" containerID="fe632bc6eb7848f1f2114fbcaac7e7633be6abc89038650c2da027e7846e8600" exitCode=0 Jan 09 13:51:47 crc kubenswrapper[4919]: I0109 13:51:47.895517 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca85ff2c-1d91-4e4b-9030-4bfda0c05206","Type":"ContainerDied","Data":"5424dac0999699ce39a2afd0dae7100f45f7b68759e885e6081d8a2ad65b4859"} Jan 09 13:51:47 crc kubenswrapper[4919]: I0109 13:51:47.895533 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca85ff2c-1d91-4e4b-9030-4bfda0c05206","Type":"ContainerDied","Data":"2e2ffb17d6e90152a71c772aada2d5fdce41de196767028fa5b710de99048775"} Jan 09 13:51:47 crc kubenswrapper[4919]: I0109 13:51:47.895542 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca85ff2c-1d91-4e4b-9030-4bfda0c05206","Type":"ContainerDied","Data":"fe632bc6eb7848f1f2114fbcaac7e7633be6abc89038650c2da027e7846e8600"} Jan 09 13:51:47 crc kubenswrapper[4919]: I0109 13:51:47.900625 4919 generic.go:334] "Generic (PLEG): container finished" podID="ea5378b8-a527-4f7f-b55a-48590aae7ff1" containerID="9e77fda6b192837f6b1440b756268f5eca3b103d38e78ae8709ffd42ec4cf6f4" exitCode=0 Jan 09 13:51:47 crc kubenswrapper[4919]: I0109 13:51:47.900683 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-bsnzf" event={"ID":"ea5378b8-a527-4f7f-b55a-48590aae7ff1","Type":"ContainerDied","Data":"9e77fda6b192837f6b1440b756268f5eca3b103d38e78ae8709ffd42ec4cf6f4"} Jan 09 13:51:47 crc kubenswrapper[4919]: I0109 13:51:47.902994 4919 generic.go:334] "Generic (PLEG): container finished" podID="fce892da-35ae-4435-a61a-1ee629ddb17e" containerID="fe8c4fc7fb1fad73e1ffb857ad738222ef55320b96c9df8a004ade44ddebb4b0" exitCode=143 Jan 09 13:51:47 crc kubenswrapper[4919]: I0109 13:51:47.903079 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fce892da-35ae-4435-a61a-1ee629ddb17e","Type":"ContainerDied","Data":"fe8c4fc7fb1fad73e1ffb857ad738222ef55320b96c9df8a004ade44ddebb4b0"} Jan 09 13:51:47 crc kubenswrapper[4919]: I0109 13:51:47.904349 4919 generic.go:334] "Generic (PLEG): container finished" podID="d5333c11-0798-492a-862a-d6c9076a5fe6" containerID="302c4ffbe73f0b7e012dcf6e0e5022508d22d1c34d7d71ba6171b353a4aaa517" exitCode=0 Jan 09 13:51:47 crc kubenswrapper[4919]: I0109 13:51:47.904452 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-490e-account-create-update-lwv2k" 
event={"ID":"d5333c11-0798-492a-862a-d6c9076a5fe6","Type":"ContainerDied","Data":"302c4ffbe73f0b7e012dcf6e0e5022508d22d1c34d7d71ba6171b353a4aaa517"} Jan 09 13:51:47 crc kubenswrapper[4919]: I0109 13:51:47.906630 4919 generic.go:334] "Generic (PLEG): container finished" podID="e0aa76ff-ed23-4978-8fe0-c0144d775a7a" containerID="0c6be3d93024838ec9d2200c3eb1dcb89b5da60928d8ff7910e89c5ddebd5334" exitCode=0 Jan 09 13:51:47 crc kubenswrapper[4919]: I0109 13:51:47.906695 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-b660-account-create-update-b5bxk" event={"ID":"e0aa76ff-ed23-4978-8fe0-c0144d775a7a","Type":"ContainerDied","Data":"0c6be3d93024838ec9d2200c3eb1dcb89b5da60928d8ff7910e89c5ddebd5334"} Jan 09 13:51:47 crc kubenswrapper[4919]: I0109 13:51:47.911779 4919 generic.go:334] "Generic (PLEG): container finished" podID="2f76563c-d515-4fdf-9011-6612ff2b5665" containerID="2832a878eac12229938b3d5f9d5d660a40ae2a6dbe1ed905d39e74eed5bd3d35" exitCode=0 Jan 09 13:51:47 crc kubenswrapper[4919]: I0109 13:51:47.911890 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-td64x" event={"ID":"2f76563c-d515-4fdf-9011-6612ff2b5665","Type":"ContainerDied","Data":"2832a878eac12229938b3d5f9d5d660a40ae2a6dbe1ed905d39e74eed5bd3d35"} Jan 09 13:51:47 crc kubenswrapper[4919]: I0109 13:51:47.915158 4919 generic.go:334] "Generic (PLEG): container finished" podID="fba162e0-e000-4b80-8a7f-94699ad1c121" containerID="e641d4986d457062131c03c85466779c9a0d0deeab44195975d59efb0b697668" exitCode=0 Jan 09 13:51:47 crc kubenswrapper[4919]: I0109 13:51:47.915227 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1600-account-create-update-hvtzf" event={"ID":"fba162e0-e000-4b80-8a7f-94699ad1c121","Type":"ContainerDied","Data":"e641d4986d457062131c03c85466779c9a0d0deeab44195975d59efb0b697668"} Jan 09 13:51:49 crc kubenswrapper[4919]: I0109 13:51:49.457464 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-dxcpg" Jan 09 13:51:49 crc kubenswrapper[4919]: I0109 13:51:49.585181 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtjht\" (UniqueName: \"kubernetes.io/projected/f0d18dc7-6c07-4aab-b06f-91137d1809b0-kube-api-access-mtjht\") pod \"f0d18dc7-6c07-4aab-b06f-91137d1809b0\" (UID: \"f0d18dc7-6c07-4aab-b06f-91137d1809b0\") " Jan 09 13:51:49 crc kubenswrapper[4919]: I0109 13:51:49.585288 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0d18dc7-6c07-4aab-b06f-91137d1809b0-operator-scripts\") pod \"f0d18dc7-6c07-4aab-b06f-91137d1809b0\" (UID: \"f0d18dc7-6c07-4aab-b06f-91137d1809b0\") " Jan 09 13:51:49 crc kubenswrapper[4919]: I0109 13:51:49.591505 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0d18dc7-6c07-4aab-b06f-91137d1809b0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f0d18dc7-6c07-4aab-b06f-91137d1809b0" (UID: "f0d18dc7-6c07-4aab-b06f-91137d1809b0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:51:49 crc kubenswrapper[4919]: I0109 13:51:49.599511 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0d18dc7-6c07-4aab-b06f-91137d1809b0-kube-api-access-mtjht" (OuterVolumeSpecName: "kube-api-access-mtjht") pod "f0d18dc7-6c07-4aab-b06f-91137d1809b0" (UID: "f0d18dc7-6c07-4aab-b06f-91137d1809b0"). InnerVolumeSpecName "kube-api-access-mtjht". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:51:49 crc kubenswrapper[4919]: I0109 13:51:49.674655 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1600-account-create-update-hvtzf" Jan 09 13:51:49 crc kubenswrapper[4919]: I0109 13:51:49.687527 4919 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0d18dc7-6c07-4aab-b06f-91137d1809b0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:49 crc kubenswrapper[4919]: I0109 13:51:49.687752 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtjht\" (UniqueName: \"kubernetes.io/projected/f0d18dc7-6c07-4aab-b06f-91137d1809b0-kube-api-access-mtjht\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:49 crc kubenswrapper[4919]: I0109 13:51:49.689626 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-490e-account-create-update-lwv2k" Jan 09 13:51:49 crc kubenswrapper[4919]: I0109 13:51:49.690927 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-b660-account-create-update-b5bxk" Jan 09 13:51:49 crc kubenswrapper[4919]: I0109 13:51:49.703900 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-bsnzf" Jan 09 13:51:49 crc kubenswrapper[4919]: I0109 13:51:49.709639 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-td64x" Jan 09 13:51:49 crc kubenswrapper[4919]: I0109 13:51:49.788934 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjbkq\" (UniqueName: \"kubernetes.io/projected/d5333c11-0798-492a-862a-d6c9076a5fe6-kube-api-access-vjbkq\") pod \"d5333c11-0798-492a-862a-d6c9076a5fe6\" (UID: \"d5333c11-0798-492a-862a-d6c9076a5fe6\") " Jan 09 13:51:49 crc kubenswrapper[4919]: I0109 13:51:49.789168 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5333c11-0798-492a-862a-d6c9076a5fe6-operator-scripts\") pod \"d5333c11-0798-492a-862a-d6c9076a5fe6\" (UID: \"d5333c11-0798-492a-862a-d6c9076a5fe6\") " Jan 09 13:51:49 crc kubenswrapper[4919]: I0109 13:51:49.789268 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fba162e0-e000-4b80-8a7f-94699ad1c121-operator-scripts\") pod \"fba162e0-e000-4b80-8a7f-94699ad1c121\" (UID: \"fba162e0-e000-4b80-8a7f-94699ad1c121\") " Jan 09 13:51:49 crc kubenswrapper[4919]: I0109 13:51:49.789301 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfnbb\" (UniqueName: \"kubernetes.io/projected/fba162e0-e000-4b80-8a7f-94699ad1c121-kube-api-access-sfnbb\") pod \"fba162e0-e000-4b80-8a7f-94699ad1c121\" (UID: \"fba162e0-e000-4b80-8a7f-94699ad1c121\") " Jan 09 13:51:49 crc kubenswrapper[4919]: I0109 13:51:49.789342 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0aa76ff-ed23-4978-8fe0-c0144d775a7a-operator-scripts\") pod \"e0aa76ff-ed23-4978-8fe0-c0144d775a7a\" (UID: \"e0aa76ff-ed23-4978-8fe0-c0144d775a7a\") " Jan 09 13:51:49 crc kubenswrapper[4919]: I0109 13:51:49.789362 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2tbt\" (UniqueName: \"kubernetes.io/projected/e0aa76ff-ed23-4978-8fe0-c0144d775a7a-kube-api-access-d2tbt\") pod \"e0aa76ff-ed23-4978-8fe0-c0144d775a7a\" (UID: \"e0aa76ff-ed23-4978-8fe0-c0144d775a7a\") " Jan 09 13:51:49 crc kubenswrapper[4919]: I0109 13:51:49.791307 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fba162e0-e000-4b80-8a7f-94699ad1c121-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fba162e0-e000-4b80-8a7f-94699ad1c121" (UID: "fba162e0-e000-4b80-8a7f-94699ad1c121"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:51:49 crc kubenswrapper[4919]: I0109 13:51:49.791876 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5333c11-0798-492a-862a-d6c9076a5fe6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d5333c11-0798-492a-862a-d6c9076a5fe6" (UID: "d5333c11-0798-492a-862a-d6c9076a5fe6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:51:49 crc kubenswrapper[4919]: I0109 13:51:49.792727 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0aa76ff-ed23-4978-8fe0-c0144d775a7a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e0aa76ff-ed23-4978-8fe0-c0144d775a7a" (UID: "e0aa76ff-ed23-4978-8fe0-c0144d775a7a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:51:49 crc kubenswrapper[4919]: I0109 13:51:49.795679 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5333c11-0798-492a-862a-d6c9076a5fe6-kube-api-access-vjbkq" (OuterVolumeSpecName: "kube-api-access-vjbkq") pod "d5333c11-0798-492a-862a-d6c9076a5fe6" (UID: "d5333c11-0798-492a-862a-d6c9076a5fe6"). InnerVolumeSpecName "kube-api-access-vjbkq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:51:49 crc kubenswrapper[4919]: I0109 13:51:49.797511 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0aa76ff-ed23-4978-8fe0-c0144d775a7a-kube-api-access-d2tbt" (OuterVolumeSpecName: "kube-api-access-d2tbt") pod "e0aa76ff-ed23-4978-8fe0-c0144d775a7a" (UID: "e0aa76ff-ed23-4978-8fe0-c0144d775a7a"). InnerVolumeSpecName "kube-api-access-d2tbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:51:49 crc kubenswrapper[4919]: I0109 13:51:49.798527 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fba162e0-e000-4b80-8a7f-94699ad1c121-kube-api-access-sfnbb" (OuterVolumeSpecName: "kube-api-access-sfnbb") pod "fba162e0-e000-4b80-8a7f-94699ad1c121" (UID: "fba162e0-e000-4b80-8a7f-94699ad1c121"). InnerVolumeSpecName "kube-api-access-sfnbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:51:49 crc kubenswrapper[4919]: I0109 13:51:49.890618 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpjk4\" (UniqueName: \"kubernetes.io/projected/ea5378b8-a527-4f7f-b55a-48590aae7ff1-kube-api-access-zpjk4\") pod \"ea5378b8-a527-4f7f-b55a-48590aae7ff1\" (UID: \"ea5378b8-a527-4f7f-b55a-48590aae7ff1\") " Jan 09 13:51:49 crc kubenswrapper[4919]: I0109 13:51:49.890810 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntk8z\" (UniqueName: \"kubernetes.io/projected/2f76563c-d515-4fdf-9011-6612ff2b5665-kube-api-access-ntk8z\") pod \"2f76563c-d515-4fdf-9011-6612ff2b5665\" (UID: \"2f76563c-d515-4fdf-9011-6612ff2b5665\") " Jan 09 13:51:49 crc kubenswrapper[4919]: I0109 13:51:49.891018 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f76563c-d515-4fdf-9011-6612ff2b5665-operator-scripts\") pod \"2f76563c-d515-4fdf-9011-6612ff2b5665\" (UID: \"2f76563c-d515-4fdf-9011-6612ff2b5665\") " Jan 09 13:51:49 crc kubenswrapper[4919]: I0109 13:51:49.891073 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea5378b8-a527-4f7f-b55a-48590aae7ff1-operator-scripts\") pod \"ea5378b8-a527-4f7f-b55a-48590aae7ff1\" (UID: \"ea5378b8-a527-4f7f-b55a-48590aae7ff1\") " Jan 09 13:51:49 crc kubenswrapper[4919]: I0109 13:51:49.891788 4919 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5333c11-0798-492a-862a-d6c9076a5fe6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:49 crc kubenswrapper[4919]: I0109 13:51:49.891813 4919 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fba162e0-e000-4b80-8a7f-94699ad1c121-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:49 crc kubenswrapper[4919]: I0109 13:51:49.891825 4919 reconciler_common.go:293] "Volume detached 
for volume \"kube-api-access-sfnbb\" (UniqueName: \"kubernetes.io/projected/fba162e0-e000-4b80-8a7f-94699ad1c121-kube-api-access-sfnbb\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:49 crc kubenswrapper[4919]: I0109 13:51:49.891839 4919 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0aa76ff-ed23-4978-8fe0-c0144d775a7a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:49 crc kubenswrapper[4919]: I0109 13:51:49.891851 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2tbt\" (UniqueName: \"kubernetes.io/projected/e0aa76ff-ed23-4978-8fe0-c0144d775a7a-kube-api-access-d2tbt\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:49 crc kubenswrapper[4919]: I0109 13:51:49.891863 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjbkq\" (UniqueName: \"kubernetes.io/projected/d5333c11-0798-492a-862a-d6c9076a5fe6-kube-api-access-vjbkq\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:49 crc kubenswrapper[4919]: I0109 13:51:49.893802 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea5378b8-a527-4f7f-b55a-48590aae7ff1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ea5378b8-a527-4f7f-b55a-48590aae7ff1" (UID: "ea5378b8-a527-4f7f-b55a-48590aae7ff1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:51:49 crc kubenswrapper[4919]: I0109 13:51:49.893998 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f76563c-d515-4fdf-9011-6612ff2b5665-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2f76563c-d515-4fdf-9011-6612ff2b5665" (UID: "2f76563c-d515-4fdf-9011-6612ff2b5665"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:51:49 crc kubenswrapper[4919]: I0109 13:51:49.894283 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea5378b8-a527-4f7f-b55a-48590aae7ff1-kube-api-access-zpjk4" (OuterVolumeSpecName: "kube-api-access-zpjk4") pod "ea5378b8-a527-4f7f-b55a-48590aae7ff1" (UID: "ea5378b8-a527-4f7f-b55a-48590aae7ff1"). InnerVolumeSpecName "kube-api-access-zpjk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:51:49 crc kubenswrapper[4919]: I0109 13:51:49.895726 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f76563c-d515-4fdf-9011-6612ff2b5665-kube-api-access-ntk8z" (OuterVolumeSpecName: "kube-api-access-ntk8z") pod "2f76563c-d515-4fdf-9011-6612ff2b5665" (UID: "2f76563c-d515-4fdf-9011-6612ff2b5665"). InnerVolumeSpecName "kube-api-access-ntk8z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:51:49 crc kubenswrapper[4919]: I0109 13:51:49.937468 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-bsnzf" event={"ID":"ea5378b8-a527-4f7f-b55a-48590aae7ff1","Type":"ContainerDied","Data":"852b01ab26aecc94a5058911931f58dc8153b6ab592067b2e038e02ef647573b"} Jan 09 13:51:49 crc kubenswrapper[4919]: I0109 13:51:49.937502 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-bsnzf" Jan 09 13:51:49 crc kubenswrapper[4919]: I0109 13:51:49.937523 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="852b01ab26aecc94a5058911931f58dc8153b6ab592067b2e038e02ef647573b" Jan 09 13:51:49 crc kubenswrapper[4919]: I0109 13:51:49.939139 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-490e-account-create-update-lwv2k" event={"ID":"d5333c11-0798-492a-862a-d6c9076a5fe6","Type":"ContainerDied","Data":"8ecffa6b7bc236035874167b805e3717c3b1355eeaa1e2043e3bc11ad9b54dc6"} Jan 09 13:51:49 crc kubenswrapper[4919]: I0109 13:51:49.939174 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ecffa6b7bc236035874167b805e3717c3b1355eeaa1e2043e3bc11ad9b54dc6" Jan 09 13:51:49 crc kubenswrapper[4919]: I0109 13:51:49.939265 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-490e-account-create-update-lwv2k" Jan 09 13:51:49 crc kubenswrapper[4919]: I0109 13:51:49.942137 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-b660-account-create-update-b5bxk" event={"ID":"e0aa76ff-ed23-4978-8fe0-c0144d775a7a","Type":"ContainerDied","Data":"97cc1cae57c24c37ca5b436774b94bd6c4251b5dfa352f8a4142c5a230ff597a"} Jan 09 13:51:49 crc kubenswrapper[4919]: I0109 13:51:49.942182 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97cc1cae57c24c37ca5b436774b94bd6c4251b5dfa352f8a4142c5a230ff597a" Jan 09 13:51:49 crc kubenswrapper[4919]: I0109 13:51:49.942274 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-b660-account-create-update-b5bxk" Jan 09 13:51:49 crc kubenswrapper[4919]: I0109 13:51:49.944622 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1600-account-create-update-hvtzf" event={"ID":"fba162e0-e000-4b80-8a7f-94699ad1c121","Type":"ContainerDied","Data":"ea58a4d7aaa0a8e4d49c7ef0b214ecfd9ffa65e37fb2b41a2ed037c06cbbb3b4"} Jan 09 13:51:49 crc kubenswrapper[4919]: I0109 13:51:49.944914 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea58a4d7aaa0a8e4d49c7ef0b214ecfd9ffa65e37fb2b41a2ed037c06cbbb3b4" Jan 09 13:51:49 crc kubenswrapper[4919]: I0109 13:51:49.944885 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1600-account-create-update-hvtzf" Jan 09 13:51:49 crc kubenswrapper[4919]: I0109 13:51:49.957045 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-td64x" event={"ID":"2f76563c-d515-4fdf-9011-6612ff2b5665","Type":"ContainerDied","Data":"28ae6cd5d087fd4b46ce4a9226d5cfe9b5d5d17314cb9a2428aa12639357a92e"} Jan 09 13:51:49 crc kubenswrapper[4919]: I0109 13:51:49.957336 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28ae6cd5d087fd4b46ce4a9226d5cfe9b5d5d17314cb9a2428aa12639357a92e" Jan 09 13:51:49 crc kubenswrapper[4919]: I0109 13:51:49.957491 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-td64x" Jan 09 13:51:49 crc kubenswrapper[4919]: I0109 13:51:49.961462 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-dxcpg" event={"ID":"f0d18dc7-6c07-4aab-b06f-91137d1809b0","Type":"ContainerDied","Data":"d32436e7870a96bffeee2a25b9231bc33d8247c4e015a1a67087f5645714979b"} Jan 09 13:51:49 crc kubenswrapper[4919]: I0109 13:51:49.961936 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d32436e7870a96bffeee2a25b9231bc33d8247c4e015a1a67087f5645714979b" Jan 09 13:51:49 crc kubenswrapper[4919]: I0109 13:51:49.961691 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-dxcpg" Jan 09 13:51:50 crc kubenswrapper[4919]: I0109 13:51:50.010751 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntk8z\" (UniqueName: \"kubernetes.io/projected/2f76563c-d515-4fdf-9011-6612ff2b5665-kube-api-access-ntk8z\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:50 crc kubenswrapper[4919]: I0109 13:51:50.011328 4919 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f76563c-d515-4fdf-9011-6612ff2b5665-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:50 crc kubenswrapper[4919]: I0109 13:51:50.018288 4919 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea5378b8-a527-4f7f-b55a-48590aae7ff1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:50 crc kubenswrapper[4919]: I0109 13:51:50.018327 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpjk4\" (UniqueName: \"kubernetes.io/projected/ea5378b8-a527-4f7f-b55a-48590aae7ff1-kube-api-access-zpjk4\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:50 crc kubenswrapper[4919]: I0109 13:51:50.829694 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 09 13:51:50 crc kubenswrapper[4919]: I0109 13:51:50.936274 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4n5mz\" (UniqueName: \"kubernetes.io/projected/0d3d016b-608b-4a81-aeae-7b1e4c75d893-kube-api-access-4n5mz\") pod \"0d3d016b-608b-4a81-aeae-7b1e4c75d893\" (UID: \"0d3d016b-608b-4a81-aeae-7b1e4c75d893\") " Jan 09 13:51:50 crc kubenswrapper[4919]: I0109 13:51:50.936337 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d3d016b-608b-4a81-aeae-7b1e4c75d893-combined-ca-bundle\") pod \"0d3d016b-608b-4a81-aeae-7b1e4c75d893\" (UID: \"0d3d016b-608b-4a81-aeae-7b1e4c75d893\") " Jan 09 13:51:50 crc kubenswrapper[4919]: I0109 13:51:50.936437 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d3d016b-608b-4a81-aeae-7b1e4c75d893-scripts\") pod \"0d3d016b-608b-4a81-aeae-7b1e4c75d893\" (UID: \"0d3d016b-608b-4a81-aeae-7b1e4c75d893\") " Jan 09 13:51:50 crc kubenswrapper[4919]: I0109 13:51:50.936463 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d3d016b-608b-4a81-aeae-7b1e4c75d893-config-data\") pod \"0d3d016b-608b-4a81-aeae-7b1e4c75d893\" (UID: \"0d3d016b-608b-4a81-aeae-7b1e4c75d893\") " Jan 09 13:51:50 crc kubenswrapper[4919]: I0109 13:51:50.936491 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"0d3d016b-608b-4a81-aeae-7b1e4c75d893\" (UID: \"0d3d016b-608b-4a81-aeae-7b1e4c75d893\") " Jan 09 13:51:50 crc kubenswrapper[4919]: I0109 13:51:50.936532 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0d3d016b-608b-4a81-aeae-7b1e4c75d893-httpd-run\") pod \"0d3d016b-608b-4a81-aeae-7b1e4c75d893\" (UID: \"0d3d016b-608b-4a81-aeae-7b1e4c75d893\") " Jan 09 13:51:50 crc kubenswrapper[4919]: I0109 13:51:50.936582 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d3d016b-608b-4a81-aeae-7b1e4c75d893-logs\") pod \"0d3d016b-608b-4a81-aeae-7b1e4c75d893\" (UID: \"0d3d016b-608b-4a81-aeae-7b1e4c75d893\") " Jan 09 13:51:50 crc kubenswrapper[4919]: I0109 13:51:50.937448 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d3d016b-608b-4a81-aeae-7b1e4c75d893-public-tls-certs\") pod \"0d3d016b-608b-4a81-aeae-7b1e4c75d893\" (UID: \"0d3d016b-608b-4a81-aeae-7b1e4c75d893\") " Jan 09 13:51:50 crc kubenswrapper[4919]: I0109 13:51:50.938410 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d3d016b-608b-4a81-aeae-7b1e4c75d893-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "0d3d016b-608b-4a81-aeae-7b1e4c75d893" (UID: "0d3d016b-608b-4a81-aeae-7b1e4c75d893"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:51:50 crc kubenswrapper[4919]: I0109 13:51:50.938578 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d3d016b-608b-4a81-aeae-7b1e4c75d893-logs" (OuterVolumeSpecName: "logs") pod "0d3d016b-608b-4a81-aeae-7b1e4c75d893" (UID: "0d3d016b-608b-4a81-aeae-7b1e4c75d893"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:51:50 crc kubenswrapper[4919]: I0109 13:51:50.938657 4919 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0d3d016b-608b-4a81-aeae-7b1e4c75d893-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:50 crc kubenswrapper[4919]: I0109 13:51:50.942575 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d3d016b-608b-4a81-aeae-7b1e4c75d893-kube-api-access-4n5mz" (OuterVolumeSpecName: "kube-api-access-4n5mz") pod "0d3d016b-608b-4a81-aeae-7b1e4c75d893" (UID: "0d3d016b-608b-4a81-aeae-7b1e4c75d893"). InnerVolumeSpecName "kube-api-access-4n5mz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:51:50 crc kubenswrapper[4919]: I0109 13:51:50.945604 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "0d3d016b-608b-4a81-aeae-7b1e4c75d893" (UID: "0d3d016b-608b-4a81-aeae-7b1e4c75d893"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 09 13:51:50 crc kubenswrapper[4919]: I0109 13:51:50.964475 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d3d016b-608b-4a81-aeae-7b1e4c75d893-scripts" (OuterVolumeSpecName: "scripts") pod "0d3d016b-608b-4a81-aeae-7b1e4c75d893" (UID: "0d3d016b-608b-4a81-aeae-7b1e4c75d893"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:51:50 crc kubenswrapper[4919]: I0109 13:51:50.996025 4919 generic.go:334] "Generic (PLEG): container finished" podID="ca85ff2c-1d91-4e4b-9030-4bfda0c05206" containerID="d144e2f6aeac25655f18ee8db70b66149f117887c556890899bc6a84232b3289" exitCode=0 Jan 09 13:51:50 crc kubenswrapper[4919]: I0109 13:51:50.996055 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d3d016b-608b-4a81-aeae-7b1e4c75d893-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "0d3d016b-608b-4a81-aeae-7b1e4c75d893" (UID: "0d3d016b-608b-4a81-aeae-7b1e4c75d893"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:51:50 crc kubenswrapper[4919]: I0109 13:51:50.996128 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca85ff2c-1d91-4e4b-9030-4bfda0c05206","Type":"ContainerDied","Data":"d144e2f6aeac25655f18ee8db70b66149f117887c556890899bc6a84232b3289"} Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.017379 4919 generic.go:334] "Generic (PLEG): container finished" podID="0d3d016b-608b-4a81-aeae-7b1e4c75d893" containerID="1b753bd12d8bf0c44d1d07bd89c93fd795406af0825f1813759a2d127f695b90" exitCode=0 Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.017427 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0d3d016b-608b-4a81-aeae-7b1e4c75d893","Type":"ContainerDied","Data":"1b753bd12d8bf0c44d1d07bd89c93fd795406af0825f1813759a2d127f695b90"} Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.017457 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0d3d016b-608b-4a81-aeae-7b1e4c75d893","Type":"ContainerDied","Data":"c97efa3ef4fc1716baf90c8bc69bea0e368b28aca0b217281ba6b6849c81ab3e"} Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.017478 4919 scope.go:117] "RemoveContainer" containerID="1b753bd12d8bf0c44d1d07bd89c93fd795406af0825f1813759a2d127f695b90" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.017475 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.028545 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d3d016b-608b-4a81-aeae-7b1e4c75d893-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0d3d016b-608b-4a81-aeae-7b1e4c75d893" (UID: "0d3d016b-608b-4a81-aeae-7b1e4c75d893"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.036463 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d3d016b-608b-4a81-aeae-7b1e4c75d893-config-data" (OuterVolumeSpecName: "config-data") pod "0d3d016b-608b-4a81-aeae-7b1e4c75d893" (UID: "0d3d016b-608b-4a81-aeae-7b1e4c75d893"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.068449 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4n5mz\" (UniqueName: \"kubernetes.io/projected/0d3d016b-608b-4a81-aeae-7b1e4c75d893-kube-api-access-4n5mz\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.068483 4919 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d3d016b-608b-4a81-aeae-7b1e4c75d893-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.068495 4919 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d3d016b-608b-4a81-aeae-7b1e4c75d893-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.068506 4919 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d3d016b-608b-4a81-aeae-7b1e4c75d893-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.068537 4919 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.068568 4919 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d3d016b-608b-4a81-aeae-7b1e4c75d893-logs\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.068579 4919 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d3d016b-608b-4a81-aeae-7b1e4c75d893-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.091849 4919 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.149873 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.159370 4919 scope.go:117] "RemoveContainer" containerID="eb51259beedb45deb5ca0242a533d41756213c64df04e453bf556b670c3c7c68" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.171342 4919 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.192726 4919 scope.go:117] "RemoveContainer" containerID="1b753bd12d8bf0c44d1d07bd89c93fd795406af0825f1813759a2d127f695b90" Jan 09 13:51:51 crc kubenswrapper[4919]: E0109 13:51:51.193388 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b753bd12d8bf0c44d1d07bd89c93fd795406af0825f1813759a2d127f695b90\": container with ID starting with 1b753bd12d8bf0c44d1d07bd89c93fd795406af0825f1813759a2d127f695b90 not found: ID does not exist" containerID="1b753bd12d8bf0c44d1d07bd89c93fd795406af0825f1813759a2d127f695b90" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.193448 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b753bd12d8bf0c44d1d07bd89c93fd795406af0825f1813759a2d127f695b90"} err="failed to get container status \"1b753bd12d8bf0c44d1d07bd89c93fd795406af0825f1813759a2d127f695b90\": rpc error: code = NotFound desc = could not find container \"1b753bd12d8bf0c44d1d07bd89c93fd795406af0825f1813759a2d127f695b90\": container with ID starting with 1b753bd12d8bf0c44d1d07bd89c93fd795406af0825f1813759a2d127f695b90 not found: ID does not exist" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.193493 4919 scope.go:117] "RemoveContainer" containerID="eb51259beedb45deb5ca0242a533d41756213c64df04e453bf556b670c3c7c68" Jan 09 13:51:51 crc kubenswrapper[4919]: E0109 13:51:51.194699 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb51259beedb45deb5ca0242a533d41756213c64df04e453bf556b670c3c7c68\": container with ID starting with eb51259beedb45deb5ca0242a533d41756213c64df04e453bf556b670c3c7c68 not found: ID does not exist" containerID="eb51259beedb45deb5ca0242a533d41756213c64df04e453bf556b670c3c7c68" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.194729 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb51259beedb45deb5ca0242a533d41756213c64df04e453bf556b670c3c7c68"} err="failed to get container status \"eb51259beedb45deb5ca0242a533d41756213c64df04e453bf556b670c3c7c68\": rpc error: code = NotFound desc = could not find container \"eb51259beedb45deb5ca0242a533d41756213c64df04e453bf556b670c3c7c68\": container with ID starting with eb51259beedb45deb5ca0242a533d41756213c64df04e453bf556b670c3c7c68 not found: ID does not exist" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.272607 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ca85ff2c-1d91-4e4b-9030-4bfda0c05206-sg-core-conf-yaml\") pod \"ca85ff2c-1d91-4e4b-9030-4bfda0c05206\" (UID: \"ca85ff2c-1d91-4e4b-9030-4bfda0c05206\") " Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.272680 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ca85ff2c-1d91-4e4b-9030-4bfda0c05206-combined-ca-bundle\") pod \"ca85ff2c-1d91-4e4b-9030-4bfda0c05206\" (UID: \"ca85ff2c-1d91-4e4b-9030-4bfda0c05206\") " Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.272724 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca85ff2c-1d91-4e4b-9030-4bfda0c05206-config-data\") pod \"ca85ff2c-1d91-4e4b-9030-4bfda0c05206\" (UID: \"ca85ff2c-1d91-4e4b-9030-4bfda0c05206\") " Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.272784 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bl5jl\" (UniqueName: \"kubernetes.io/projected/ca85ff2c-1d91-4e4b-9030-4bfda0c05206-kube-api-access-bl5jl\") pod \"ca85ff2c-1d91-4e4b-9030-4bfda0c05206\" (UID: \"ca85ff2c-1d91-4e4b-9030-4bfda0c05206\") " Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.272853 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca85ff2c-1d91-4e4b-9030-4bfda0c05206-log-httpd\") pod \"ca85ff2c-1d91-4e4b-9030-4bfda0c05206\" (UID: \"ca85ff2c-1d91-4e4b-9030-4bfda0c05206\") " Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.273012 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca85ff2c-1d91-4e4b-9030-4bfda0c05206-scripts\") pod \"ca85ff2c-1d91-4e4b-9030-4bfda0c05206\" (UID: \"ca85ff2c-1d91-4e4b-9030-4bfda0c05206\") " Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.273551 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca85ff2c-1d91-4e4b-9030-4bfda0c05206-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ca85ff2c-1d91-4e4b-9030-4bfda0c05206" (UID: "ca85ff2c-1d91-4e4b-9030-4bfda0c05206"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.273740 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca85ff2c-1d91-4e4b-9030-4bfda0c05206-run-httpd\") pod \"ca85ff2c-1d91-4e4b-9030-4bfda0c05206\" (UID: \"ca85ff2c-1d91-4e4b-9030-4bfda0c05206\") " Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.273994 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca85ff2c-1d91-4e4b-9030-4bfda0c05206-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ca85ff2c-1d91-4e4b-9030-4bfda0c05206" (UID: "ca85ff2c-1d91-4e4b-9030-4bfda0c05206"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.274403 4919 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca85ff2c-1d91-4e4b-9030-4bfda0c05206-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.274427 4919 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca85ff2c-1d91-4e4b-9030-4bfda0c05206-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.282982 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca85ff2c-1d91-4e4b-9030-4bfda0c05206-scripts" (OuterVolumeSpecName: "scripts") pod "ca85ff2c-1d91-4e4b-9030-4bfda0c05206" (UID: "ca85ff2c-1d91-4e4b-9030-4bfda0c05206"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.288887 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca85ff2c-1d91-4e4b-9030-4bfda0c05206-kube-api-access-bl5jl" (OuterVolumeSpecName: "kube-api-access-bl5jl") pod "ca85ff2c-1d91-4e4b-9030-4bfda0c05206" (UID: "ca85ff2c-1d91-4e4b-9030-4bfda0c05206"). InnerVolumeSpecName "kube-api-access-bl5jl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.333343 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca85ff2c-1d91-4e4b-9030-4bfda0c05206-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ca85ff2c-1d91-4e4b-9030-4bfda0c05206" (UID: "ca85ff2c-1d91-4e4b-9030-4bfda0c05206"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.376166 4919 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca85ff2c-1d91-4e4b-9030-4bfda0c05206-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.376422 4919 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ca85ff2c-1d91-4e4b-9030-4bfda0c05206-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.376509 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bl5jl\" (UniqueName: \"kubernetes.io/projected/ca85ff2c-1d91-4e4b-9030-4bfda0c05206-kube-api-access-bl5jl\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.376600 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca85ff2c-1d91-4e4b-9030-4bfda0c05206-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ca85ff2c-1d91-4e4b-9030-4bfda0c05206" (UID: "ca85ff2c-1d91-4e4b-9030-4bfda0c05206"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.404934 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca85ff2c-1d91-4e4b-9030-4bfda0c05206-config-data" (OuterVolumeSpecName: "config-data") pod "ca85ff2c-1d91-4e4b-9030-4bfda0c05206" (UID: "ca85ff2c-1d91-4e4b-9030-4bfda0c05206"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.482853 4919 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca85ff2c-1d91-4e4b-9030-4bfda0c05206-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.482890 4919 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca85ff2c-1d91-4e4b-9030-4bfda0c05206-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.485045 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.506600 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.516474 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.548818 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 13:51:51 crc kubenswrapper[4919]: E0109 13:51:51.549174 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d3d016b-608b-4a81-aeae-7b1e4c75d893" containerName="glance-httpd" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.549193 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d3d016b-608b-4a81-aeae-7b1e4c75d893" containerName="glance-httpd" Jan 09 13:51:51 crc kubenswrapper[4919]: E0109 13:51:51.549204 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fce892da-35ae-4435-a61a-1ee629ddb17e" containerName="glance-log" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.549230 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="fce892da-35ae-4435-a61a-1ee629ddb17e" containerName="glance-log" Jan 09 13:51:51 crc kubenswrapper[4919]: E0109 13:51:51.549249 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d3d016b-608b-4a81-aeae-7b1e4c75d893" containerName="glance-log" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.549258 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d3d016b-608b-4a81-aeae-7b1e4c75d893" containerName="glance-log" Jan 09 13:51:51 crc kubenswrapper[4919]: E0109 13:51:51.549268 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fce892da-35ae-4435-a61a-1ee629ddb17e" containerName="glance-httpd" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.549274 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="fce892da-35ae-4435-a61a-1ee629ddb17e" containerName="glance-httpd" Jan 09 13:51:51 crc kubenswrapper[4919]: E0109 13:51:51.549284 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea5378b8-a527-4f7f-b55a-48590aae7ff1" containerName="mariadb-database-create" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.549290 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea5378b8-a527-4f7f-b55a-48590aae7ff1" containerName="mariadb-database-create" Jan 09 13:51:51 crc kubenswrapper[4919]: E0109 13:51:51.549301 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca85ff2c-1d91-4e4b-9030-4bfda0c05206" containerName="proxy-httpd" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.549307 4919 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ca85ff2c-1d91-4e4b-9030-4bfda0c05206" containerName="proxy-httpd" Jan 09 13:51:51 crc kubenswrapper[4919]: E0109 13:51:51.549315 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f76563c-d515-4fdf-9011-6612ff2b5665" containerName="mariadb-database-create" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.549321 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f76563c-d515-4fdf-9011-6612ff2b5665" containerName="mariadb-database-create" Jan 09 13:51:51 crc kubenswrapper[4919]: E0109 13:51:51.549337 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0aa76ff-ed23-4978-8fe0-c0144d775a7a" containerName="mariadb-account-create-update" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.549343 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0aa76ff-ed23-4978-8fe0-c0144d775a7a" containerName="mariadb-account-create-update" Jan 09 13:51:51 crc kubenswrapper[4919]: E0109 13:51:51.549356 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5333c11-0798-492a-862a-d6c9076a5fe6" containerName="mariadb-account-create-update" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.549361 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5333c11-0798-492a-862a-d6c9076a5fe6" containerName="mariadb-account-create-update" Jan 09 13:51:51 crc kubenswrapper[4919]: E0109 13:51:51.549373 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca85ff2c-1d91-4e4b-9030-4bfda0c05206" containerName="ceilometer-central-agent" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.549380 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca85ff2c-1d91-4e4b-9030-4bfda0c05206" containerName="ceilometer-central-agent" Jan 09 13:51:51 crc kubenswrapper[4919]: E0109 13:51:51.549394 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0d18dc7-6c07-4aab-b06f-91137d1809b0" containerName="mariadb-database-create" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.549401 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0d18dc7-6c07-4aab-b06f-91137d1809b0" containerName="mariadb-database-create" Jan 09 13:51:51 crc kubenswrapper[4919]: E0109 13:51:51.549414 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca85ff2c-1d91-4e4b-9030-4bfda0c05206" containerName="sg-core" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.549421 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca85ff2c-1d91-4e4b-9030-4bfda0c05206" containerName="sg-core" Jan 09 13:51:51 crc kubenswrapper[4919]: E0109 13:51:51.549432 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fba162e0-e000-4b80-8a7f-94699ad1c121" containerName="mariadb-account-create-update" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.549439 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="fba162e0-e000-4b80-8a7f-94699ad1c121" containerName="mariadb-account-create-update" Jan 09 13:51:51 crc kubenswrapper[4919]: E0109 13:51:51.549452 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca85ff2c-1d91-4e4b-9030-4bfda0c05206" containerName="ceilometer-notification-agent" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.549459 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca85ff2c-1d91-4e4b-9030-4bfda0c05206" containerName="ceilometer-notification-agent" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.549651 4919 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="fba162e0-e000-4b80-8a7f-94699ad1c121" containerName="mariadb-account-create-update" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.549672 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea5378b8-a527-4f7f-b55a-48590aae7ff1" containerName="mariadb-database-create" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.549685 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca85ff2c-1d91-4e4b-9030-4bfda0c05206" containerName="sg-core" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.549697 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca85ff2c-1d91-4e4b-9030-4bfda0c05206" containerName="ceilometer-notification-agent" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.549707 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d3d016b-608b-4a81-aeae-7b1e4c75d893" containerName="glance-httpd" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.549716 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="fce892da-35ae-4435-a61a-1ee629ddb17e" containerName="glance-log" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.549724 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0aa76ff-ed23-4978-8fe0-c0144d775a7a" containerName="mariadb-account-create-update" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.549735 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca85ff2c-1d91-4e4b-9030-4bfda0c05206" containerName="ceilometer-central-agent" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.549744 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="fce892da-35ae-4435-a61a-1ee629ddb17e" containerName="glance-httpd" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.549757 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5333c11-0798-492a-862a-d6c9076a5fe6" containerName="mariadb-account-create-update" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.549772 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d3d016b-608b-4a81-aeae-7b1e4c75d893" containerName="glance-log" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.549783 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0d18dc7-6c07-4aab-b06f-91137d1809b0" containerName="mariadb-database-create" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.549795 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca85ff2c-1d91-4e4b-9030-4bfda0c05206" containerName="proxy-httpd" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.549806 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f76563c-d515-4fdf-9011-6612ff2b5665" containerName="mariadb-database-create" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.551066 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.557859 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.558093 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.570808 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.583685 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fce892da-35ae-4435-a61a-1ee629ddb17e-logs\") pod \"fce892da-35ae-4435-a61a-1ee629ddb17e\" (UID: \"fce892da-35ae-4435-a61a-1ee629ddb17e\") " Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.584007 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fce892da-35ae-4435-a61a-1ee629ddb17e-combined-ca-bundle\") pod \"fce892da-35ae-4435-a61a-1ee629ddb17e\" (UID: \"fce892da-35ae-4435-a61a-1ee629ddb17e\") " Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.584186 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fce892da-35ae-4435-a61a-1ee629ddb17e-internal-tls-certs\") pod \"fce892da-35ae-4435-a61a-1ee629ddb17e\" (UID: \"fce892da-35ae-4435-a61a-1ee629ddb17e\") " Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.584412 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fce892da-35ae-4435-a61a-1ee629ddb17e-logs" (OuterVolumeSpecName: "logs") pod "fce892da-35ae-4435-a61a-1ee629ddb17e" (UID: "fce892da-35ae-4435-a61a-1ee629ddb17e"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.584702 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"fce892da-35ae-4435-a61a-1ee629ddb17e\" (UID: \"fce892da-35ae-4435-a61a-1ee629ddb17e\") " Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.584904 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fce892da-35ae-4435-a61a-1ee629ddb17e-scripts\") pod \"fce892da-35ae-4435-a61a-1ee629ddb17e\" (UID: \"fce892da-35ae-4435-a61a-1ee629ddb17e\") " Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.585060 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b77qm\" (UniqueName: \"kubernetes.io/projected/fce892da-35ae-4435-a61a-1ee629ddb17e-kube-api-access-b77qm\") pod \"fce892da-35ae-4435-a61a-1ee629ddb17e\" (UID: \"fce892da-35ae-4435-a61a-1ee629ddb17e\") " Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.585230 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fce892da-35ae-4435-a61a-1ee629ddb17e-httpd-run\") pod \"fce892da-35ae-4435-a61a-1ee629ddb17e\" (UID: \"fce892da-35ae-4435-a61a-1ee629ddb17e\") " Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.585388 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fce892da-35ae-4435-a61a-1ee629ddb17e-config-data\") pod \"fce892da-35ae-4435-a61a-1ee629ddb17e\" (UID: \"fce892da-35ae-4435-a61a-1ee629ddb17e\") " Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.586376 4919 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fce892da-35ae-4435-a61a-1ee629ddb17e-logs\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.590678 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fce892da-35ae-4435-a61a-1ee629ddb17e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "fce892da-35ae-4435-a61a-1ee629ddb17e" (UID: "fce892da-35ae-4435-a61a-1ee629ddb17e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.593870 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fce892da-35ae-4435-a61a-1ee629ddb17e-scripts" (OuterVolumeSpecName: "scripts") pod "fce892da-35ae-4435-a61a-1ee629ddb17e" (UID: "fce892da-35ae-4435-a61a-1ee629ddb17e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.603420 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fce892da-35ae-4435-a61a-1ee629ddb17e-kube-api-access-b77qm" (OuterVolumeSpecName: "kube-api-access-b77qm") pod "fce892da-35ae-4435-a61a-1ee629ddb17e" (UID: "fce892da-35ae-4435-a61a-1ee629ddb17e"). InnerVolumeSpecName "kube-api-access-b77qm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.623331 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fce892da-35ae-4435-a61a-1ee629ddb17e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fce892da-35ae-4435-a61a-1ee629ddb17e" (UID: "fce892da-35ae-4435-a61a-1ee629ddb17e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.627527 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "fce892da-35ae-4435-a61a-1ee629ddb17e" (UID: "fce892da-35ae-4435-a61a-1ee629ddb17e"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.653795 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fce892da-35ae-4435-a61a-1ee629ddb17e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "fce892da-35ae-4435-a61a-1ee629ddb17e" (UID: "fce892da-35ae-4435-a61a-1ee629ddb17e"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.660442 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fce892da-35ae-4435-a61a-1ee629ddb17e-config-data" (OuterVolumeSpecName: "config-data") pod "fce892da-35ae-4435-a61a-1ee629ddb17e" (UID: "fce892da-35ae-4435-a61a-1ee629ddb17e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.688061 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58571fe0-89fb-41ed-a3eb-b04d6224dd1d-config-data\") pod \"glance-default-external-api-0\" (UID: \"58571fe0-89fb-41ed-a3eb-b04d6224dd1d\") " pod="openstack/glance-default-external-api-0" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.688131 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58571fe0-89fb-41ed-a3eb-b04d6224dd1d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"58571fe0-89fb-41ed-a3eb-b04d6224dd1d\") " pod="openstack/glance-default-external-api-0" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.688159 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58571fe0-89fb-41ed-a3eb-b04d6224dd1d-logs\") pod \"glance-default-external-api-0\" (UID: \"58571fe0-89fb-41ed-a3eb-b04d6224dd1d\") " pod="openstack/glance-default-external-api-0" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.688248 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pg8sb\" (UniqueName: \"kubernetes.io/projected/58571fe0-89fb-41ed-a3eb-b04d6224dd1d-kube-api-access-pg8sb\") pod \"glance-default-external-api-0\" (UID: \"58571fe0-89fb-41ed-a3eb-b04d6224dd1d\") " pod="openstack/glance-default-external-api-0" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.688354 4919 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/58571fe0-89fb-41ed-a3eb-b04d6224dd1d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"58571fe0-89fb-41ed-a3eb-b04d6224dd1d\") " pod="openstack/glance-default-external-api-0" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.688500 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/58571fe0-89fb-41ed-a3eb-b04d6224dd1d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"58571fe0-89fb-41ed-a3eb-b04d6224dd1d\") " pod="openstack/glance-default-external-api-0" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.688595 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"58571fe0-89fb-41ed-a3eb-b04d6224dd1d\") " pod="openstack/glance-default-external-api-0" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.688616 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58571fe0-89fb-41ed-a3eb-b04d6224dd1d-scripts\") pod \"glance-default-external-api-0\" (UID: \"58571fe0-89fb-41ed-a3eb-b04d6224dd1d\") " pod="openstack/glance-default-external-api-0" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.688735 4919 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.688823 4919 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fce892da-35ae-4435-a61a-1ee629ddb17e-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.688879 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b77qm\" (UniqueName: \"kubernetes.io/projected/fce892da-35ae-4435-a61a-1ee629ddb17e-kube-api-access-b77qm\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.688904 4919 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fce892da-35ae-4435-a61a-1ee629ddb17e-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.688921 4919 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fce892da-35ae-4435-a61a-1ee629ddb17e-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.688936 4919 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fce892da-35ae-4435-a61a-1ee629ddb17e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.688946 4919 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fce892da-35ae-4435-a61a-1ee629ddb17e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.709418 4919 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: 
"kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.791093 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58571fe0-89fb-41ed-a3eb-b04d6224dd1d-config-data\") pod \"glance-default-external-api-0\" (UID: \"58571fe0-89fb-41ed-a3eb-b04d6224dd1d\") " pod="openstack/glance-default-external-api-0" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.791181 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58571fe0-89fb-41ed-a3eb-b04d6224dd1d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"58571fe0-89fb-41ed-a3eb-b04d6224dd1d\") " pod="openstack/glance-default-external-api-0" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.791238 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58571fe0-89fb-41ed-a3eb-b04d6224dd1d-logs\") pod \"glance-default-external-api-0\" (UID: \"58571fe0-89fb-41ed-a3eb-b04d6224dd1d\") " pod="openstack/glance-default-external-api-0" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.791266 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pg8sb\" (UniqueName: \"kubernetes.io/projected/58571fe0-89fb-41ed-a3eb-b04d6224dd1d-kube-api-access-pg8sb\") pod \"glance-default-external-api-0\" (UID: \"58571fe0-89fb-41ed-a3eb-b04d6224dd1d\") " pod="openstack/glance-default-external-api-0" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.791298 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/58571fe0-89fb-41ed-a3eb-b04d6224dd1d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"58571fe0-89fb-41ed-a3eb-b04d6224dd1d\") " pod="openstack/glance-default-external-api-0" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.791358 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/58571fe0-89fb-41ed-a3eb-b04d6224dd1d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"58571fe0-89fb-41ed-a3eb-b04d6224dd1d\") " pod="openstack/glance-default-external-api-0" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.791417 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"58571fe0-89fb-41ed-a3eb-b04d6224dd1d\") " pod="openstack/glance-default-external-api-0" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.791446 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58571fe0-89fb-41ed-a3eb-b04d6224dd1d-scripts\") pod \"glance-default-external-api-0\" (UID: \"58571fe0-89fb-41ed-a3eb-b04d6224dd1d\") " pod="openstack/glance-default-external-api-0" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.791524 4919 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.796012 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/58571fe0-89fb-41ed-a3eb-b04d6224dd1d-scripts\") pod \"glance-default-external-api-0\" (UID: \"58571fe0-89fb-41ed-a3eb-b04d6224dd1d\") " pod="openstack/glance-default-external-api-0" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.796618 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/58571fe0-89fb-41ed-a3eb-b04d6224dd1d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"58571fe0-89fb-41ed-a3eb-b04d6224dd1d\") " pod="openstack/glance-default-external-api-0" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.798533 4919 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"58571fe0-89fb-41ed-a3eb-b04d6224dd1d\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.798678 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58571fe0-89fb-41ed-a3eb-b04d6224dd1d-logs\") pod \"glance-default-external-api-0\" (UID: \"58571fe0-89fb-41ed-a3eb-b04d6224dd1d\") " pod="openstack/glance-default-external-api-0" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.801233 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58571fe0-89fb-41ed-a3eb-b04d6224dd1d-config-data\") pod \"glance-default-external-api-0\" (UID: \"58571fe0-89fb-41ed-a3eb-b04d6224dd1d\") " pod="openstack/glance-default-external-api-0" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.802543 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58571fe0-89fb-41ed-a3eb-b04d6224dd1d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"58571fe0-89fb-41ed-a3eb-b04d6224dd1d\") " pod="openstack/glance-default-external-api-0" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.806939 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/58571fe0-89fb-41ed-a3eb-b04d6224dd1d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"58571fe0-89fb-41ed-a3eb-b04d6224dd1d\") " pod="openstack/glance-default-external-api-0" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.821470 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pg8sb\" (UniqueName: \"kubernetes.io/projected/58571fe0-89fb-41ed-a3eb-b04d6224dd1d-kube-api-access-pg8sb\") pod \"glance-default-external-api-0\" (UID: \"58571fe0-89fb-41ed-a3eb-b04d6224dd1d\") " pod="openstack/glance-default-external-api-0" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.845634 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"58571fe0-89fb-41ed-a3eb-b04d6224dd1d\") " pod="openstack/glance-default-external-api-0" Jan 09 13:51:51 crc kubenswrapper[4919]: I0109 13:51:51.880565 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.031089 4919 generic.go:334] "Generic (PLEG): container finished" podID="fce892da-35ae-4435-a61a-1ee629ddb17e" containerID="85f656d977b0742903f11a97fd156d9f202368df9b13502fac1357b55b6a390a" exitCode=0 Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.031181 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fce892da-35ae-4435-a61a-1ee629ddb17e","Type":"ContainerDied","Data":"85f656d977b0742903f11a97fd156d9f202368df9b13502fac1357b55b6a390a"} Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.031238 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fce892da-35ae-4435-a61a-1ee629ddb17e","Type":"ContainerDied","Data":"96625b7a7e4723f3f48f07ea5b9479f12d3635edcc195c0dc153596f4276cf81"} Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.031188 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.031260 4919 scope.go:117] "RemoveContainer" containerID="85f656d977b0742903f11a97fd156d9f202368df9b13502fac1357b55b6a390a" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.040162 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca85ff2c-1d91-4e4b-9030-4bfda0c05206","Type":"ContainerDied","Data":"47a943e4f1ec50195eb7cc1c5abfee4ba8b03ac98fa046ec3e6de1258879717c"} Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.040324 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.088603 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.098812 4919 scope.go:117] "RemoveContainer" containerID="fe8c4fc7fb1fad73e1ffb857ad738222ef55320b96c9df8a004ade44ddebb4b0" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.120088 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.135432 4919 scope.go:117] "RemoveContainer" containerID="85f656d977b0742903f11a97fd156d9f202368df9b13502fac1357b55b6a390a" Jan 09 13:51:52 crc kubenswrapper[4919]: E0109 13:51:52.138488 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85f656d977b0742903f11a97fd156d9f202368df9b13502fac1357b55b6a390a\": container with ID starting with 85f656d977b0742903f11a97fd156d9f202368df9b13502fac1357b55b6a390a not found: ID does not exist" containerID="85f656d977b0742903f11a97fd156d9f202368df9b13502fac1357b55b6a390a" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.138546 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85f656d977b0742903f11a97fd156d9f202368df9b13502fac1357b55b6a390a"} err="failed to get container status \"85f656d977b0742903f11a97fd156d9f202368df9b13502fac1357b55b6a390a\": rpc error: code = NotFound desc = could not find container \"85f656d977b0742903f11a97fd156d9f202368df9b13502fac1357b55b6a390a\": container with ID starting with 85f656d977b0742903f11a97fd156d9f202368df9b13502fac1357b55b6a390a not found: ID does not exist" Jan 09 13:51:52 
crc kubenswrapper[4919]: I0109 13:51:52.138581 4919 scope.go:117] "RemoveContainer" containerID="fe8c4fc7fb1fad73e1ffb857ad738222ef55320b96c9df8a004ade44ddebb4b0" Jan 09 13:51:52 crc kubenswrapper[4919]: E0109 13:51:52.138974 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe8c4fc7fb1fad73e1ffb857ad738222ef55320b96c9df8a004ade44ddebb4b0\": container with ID starting with fe8c4fc7fb1fad73e1ffb857ad738222ef55320b96c9df8a004ade44ddebb4b0 not found: ID does not exist" containerID="fe8c4fc7fb1fad73e1ffb857ad738222ef55320b96c9df8a004ade44ddebb4b0" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.139003 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe8c4fc7fb1fad73e1ffb857ad738222ef55320b96c9df8a004ade44ddebb4b0"} err="failed to get container status \"fe8c4fc7fb1fad73e1ffb857ad738222ef55320b96c9df8a004ade44ddebb4b0\": rpc error: code = NotFound desc = could not find container \"fe8c4fc7fb1fad73e1ffb857ad738222ef55320b96c9df8a004ade44ddebb4b0\": container with ID starting with fe8c4fc7fb1fad73e1ffb857ad738222ef55320b96c9df8a004ade44ddebb4b0 not found: ID does not exist" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.139022 4919 scope.go:117] "RemoveContainer" containerID="5424dac0999699ce39a2afd0dae7100f45f7b68759e885e6081d8a2ad65b4859" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.145546 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.160991 4919 scope.go:117] "RemoveContainer" containerID="2e2ffb17d6e90152a71c772aada2d5fdce41de196767028fa5b710de99048775" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.164366 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.182891 4919 scope.go:117] "RemoveContainer" containerID="fe632bc6eb7848f1f2114fbcaac7e7633be6abc89038650c2da027e7846e8600" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.205291 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.207360 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.210484 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.212344 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.222031 4919 scope.go:117] "RemoveContainer" containerID="d144e2f6aeac25655f18ee8db70b66149f117887c556890899bc6a84232b3289" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.223655 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.253655 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.256681 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.259261 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.264790 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.269778 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.302110 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06e40248-357e-4534-a244-dad9a65b0fe7-scripts\") pod \"ceilometer-0\" (UID: \"06e40248-357e-4534-a244-dad9a65b0fe7\") " pod="openstack/ceilometer-0" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.302170 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06e40248-357e-4534-a244-dad9a65b0fe7-config-data\") pod \"ceilometer-0\" (UID: \"06e40248-357e-4534-a244-dad9a65b0fe7\") " pod="openstack/ceilometer-0" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.302202 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm8tm\" (UniqueName: \"kubernetes.io/projected/735040be-a013-45ef-a590-2819585ea47c-kube-api-access-gm8tm\") pod \"glance-default-internal-api-0\" (UID: \"735040be-a013-45ef-a590-2819585ea47c\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.302258 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v62hk\" (UniqueName: \"kubernetes.io/projected/06e40248-357e-4534-a244-dad9a65b0fe7-kube-api-access-v62hk\") pod \"ceilometer-0\" (UID: \"06e40248-357e-4534-a244-dad9a65b0fe7\") " pod="openstack/ceilometer-0" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.302334 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/735040be-a013-45ef-a590-2819585ea47c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"735040be-a013-45ef-a590-2819585ea47c\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.302402 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06e40248-357e-4534-a244-dad9a65b0fe7-log-httpd\") pod \"ceilometer-0\" (UID: \"06e40248-357e-4534-a244-dad9a65b0fe7\") " pod="openstack/ceilometer-0" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.302448 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/735040be-a013-45ef-a590-2819585ea47c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"735040be-a013-45ef-a590-2819585ea47c\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.302497 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: 
\"735040be-a013-45ef-a590-2819585ea47c\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.302528 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/735040be-a013-45ef-a590-2819585ea47c-logs\") pod \"glance-default-internal-api-0\" (UID: \"735040be-a013-45ef-a590-2819585ea47c\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.302572 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/735040be-a013-45ef-a590-2819585ea47c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"735040be-a013-45ef-a590-2819585ea47c\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.302593 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06e40248-357e-4534-a244-dad9a65b0fe7-run-httpd\") pod \"ceilometer-0\" (UID: \"06e40248-357e-4534-a244-dad9a65b0fe7\") " pod="openstack/ceilometer-0" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.302614 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/735040be-a013-45ef-a590-2819585ea47c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"735040be-a013-45ef-a590-2819585ea47c\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.302641 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06e40248-357e-4534-a244-dad9a65b0fe7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"06e40248-357e-4534-a244-dad9a65b0fe7\") " pod="openstack/ceilometer-0" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.302664 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/06e40248-357e-4534-a244-dad9a65b0fe7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"06e40248-357e-4534-a244-dad9a65b0fe7\") " pod="openstack/ceilometer-0" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.302683 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/735040be-a013-45ef-a590-2819585ea47c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"735040be-a013-45ef-a590-2819585ea47c\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.404022 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"735040be-a013-45ef-a590-2819585ea47c\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.404073 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/735040be-a013-45ef-a590-2819585ea47c-logs\") pod \"glance-default-internal-api-0\" (UID: \"735040be-a013-45ef-a590-2819585ea47c\") " pod="openstack/glance-default-internal-api-0" 
Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.404120 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/735040be-a013-45ef-a590-2819585ea47c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"735040be-a013-45ef-a590-2819585ea47c\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.404146 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06e40248-357e-4534-a244-dad9a65b0fe7-run-httpd\") pod \"ceilometer-0\" (UID: \"06e40248-357e-4534-a244-dad9a65b0fe7\") " pod="openstack/ceilometer-0" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.404168 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/735040be-a013-45ef-a590-2819585ea47c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"735040be-a013-45ef-a590-2819585ea47c\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.404185 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06e40248-357e-4534-a244-dad9a65b0fe7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"06e40248-357e-4534-a244-dad9a65b0fe7\") " pod="openstack/ceilometer-0" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.404218 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/735040be-a013-45ef-a590-2819585ea47c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"735040be-a013-45ef-a590-2819585ea47c\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.404235 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/06e40248-357e-4534-a244-dad9a65b0fe7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"06e40248-357e-4534-a244-dad9a65b0fe7\") " pod="openstack/ceilometer-0" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.404256 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06e40248-357e-4534-a244-dad9a65b0fe7-scripts\") pod \"ceilometer-0\" (UID: \"06e40248-357e-4534-a244-dad9a65b0fe7\") " pod="openstack/ceilometer-0" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.404275 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06e40248-357e-4534-a244-dad9a65b0fe7-config-data\") pod \"ceilometer-0\" (UID: \"06e40248-357e-4534-a244-dad9a65b0fe7\") " pod="openstack/ceilometer-0" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.404294 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gm8tm\" (UniqueName: \"kubernetes.io/projected/735040be-a013-45ef-a590-2819585ea47c-kube-api-access-gm8tm\") pod \"glance-default-internal-api-0\" (UID: \"735040be-a013-45ef-a590-2819585ea47c\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.404320 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v62hk\" (UniqueName: 
\"kubernetes.io/projected/06e40248-357e-4534-a244-dad9a65b0fe7-kube-api-access-v62hk\") pod \"ceilometer-0\" (UID: \"06e40248-357e-4534-a244-dad9a65b0fe7\") " pod="openstack/ceilometer-0" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.404359 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/735040be-a013-45ef-a590-2819585ea47c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"735040be-a013-45ef-a590-2819585ea47c\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.404376 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06e40248-357e-4534-a244-dad9a65b0fe7-log-httpd\") pod \"ceilometer-0\" (UID: \"06e40248-357e-4534-a244-dad9a65b0fe7\") " pod="openstack/ceilometer-0" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.404402 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/735040be-a013-45ef-a590-2819585ea47c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"735040be-a013-45ef-a590-2819585ea47c\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.404846 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/735040be-a013-45ef-a590-2819585ea47c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"735040be-a013-45ef-a590-2819585ea47c\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.405169 4919 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"735040be-a013-45ef-a590-2819585ea47c\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.411145 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06e40248-357e-4534-a244-dad9a65b0fe7-log-httpd\") pod \"ceilometer-0\" (UID: \"06e40248-357e-4534-a244-dad9a65b0fe7\") " pod="openstack/ceilometer-0" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.411594 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06e40248-357e-4534-a244-dad9a65b0fe7-run-httpd\") pod \"ceilometer-0\" (UID: \"06e40248-357e-4534-a244-dad9a65b0fe7\") " pod="openstack/ceilometer-0" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.411758 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/735040be-a013-45ef-a590-2819585ea47c-logs\") pod \"glance-default-internal-api-0\" (UID: \"735040be-a013-45ef-a590-2819585ea47c\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.413814 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/06e40248-357e-4534-a244-dad9a65b0fe7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"06e40248-357e-4534-a244-dad9a65b0fe7\") " pod="openstack/ceilometer-0" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.416005 4919 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06e40248-357e-4534-a244-dad9a65b0fe7-scripts\") pod \"ceilometer-0\" (UID: \"06e40248-357e-4534-a244-dad9a65b0fe7\") " pod="openstack/ceilometer-0" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.418872 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/735040be-a013-45ef-a590-2819585ea47c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"735040be-a013-45ef-a590-2819585ea47c\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.425867 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06e40248-357e-4534-a244-dad9a65b0fe7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"06e40248-357e-4534-a244-dad9a65b0fe7\") " pod="openstack/ceilometer-0" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.427806 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/735040be-a013-45ef-a590-2819585ea47c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"735040be-a013-45ef-a590-2819585ea47c\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.427867 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06e40248-357e-4534-a244-dad9a65b0fe7-config-data\") pod \"ceilometer-0\" (UID: \"06e40248-357e-4534-a244-dad9a65b0fe7\") " pod="openstack/ceilometer-0" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.431324 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/735040be-a013-45ef-a590-2819585ea47c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"735040be-a013-45ef-a590-2819585ea47c\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.432242 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/735040be-a013-45ef-a590-2819585ea47c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"735040be-a013-45ef-a590-2819585ea47c\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.434575 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gm8tm\" (UniqueName: \"kubernetes.io/projected/735040be-a013-45ef-a590-2819585ea47c-kube-api-access-gm8tm\") pod \"glance-default-internal-api-0\" (UID: \"735040be-a013-45ef-a590-2819585ea47c\") " pod="openstack/glance-default-internal-api-0" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.437058 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v62hk\" (UniqueName: \"kubernetes.io/projected/06e40248-357e-4534-a244-dad9a65b0fe7-kube-api-access-v62hk\") pod \"ceilometer-0\" (UID: \"06e40248-357e-4534-a244-dad9a65b0fe7\") " pod="openstack/ceilometer-0" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.461918 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"735040be-a013-45ef-a590-2819585ea47c\") " 
pod="openstack/glance-default-internal-api-0" Jan 09 13:51:52 crc kubenswrapper[4919]: W0109 13:51:52.494737 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58571fe0_89fb_41ed_a3eb_b04d6224dd1d.slice/crio-0db27c46a501c5d8b5148294271ae9921db4fbd4fc74a443e041fa898e1d842a WatchSource:0}: Error finding container 0db27c46a501c5d8b5148294271ae9921db4fbd4fc74a443e041fa898e1d842a: Status 404 returned error can't find the container with id 0db27c46a501c5d8b5148294271ae9921db4fbd4fc74a443e041fa898e1d842a Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.495579 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.532593 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.573954 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.784257 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d3d016b-608b-4a81-aeae-7b1e4c75d893" path="/var/lib/kubelet/pods/0d3d016b-608b-4a81-aeae-7b1e4c75d893/volumes" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.785075 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca85ff2c-1d91-4e4b-9030-4bfda0c05206" path="/var/lib/kubelet/pods/ca85ff2c-1d91-4e4b-9030-4bfda0c05206/volumes" Jan 09 13:51:52 crc kubenswrapper[4919]: I0109 13:51:52.786686 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fce892da-35ae-4435-a61a-1ee629ddb17e" path="/var/lib/kubelet/pods/fce892da-35ae-4435-a61a-1ee629ddb17e/volumes" Jan 09 13:51:53 crc kubenswrapper[4919]: I0109 13:51:53.063172 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"58571fe0-89fb-41ed-a3eb-b04d6224dd1d","Type":"ContainerStarted","Data":"0db27c46a501c5d8b5148294271ae9921db4fbd4fc74a443e041fa898e1d842a"} Jan 09 13:51:53 crc kubenswrapper[4919]: I0109 13:51:53.129073 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 13:51:53 crc kubenswrapper[4919]: I0109 13:51:53.152237 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 09 13:51:54 crc kubenswrapper[4919]: I0109 13:51:54.166116 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"58571fe0-89fb-41ed-a3eb-b04d6224dd1d","Type":"ContainerStarted","Data":"a0b8c451d821be59bab3d5906659ec01fe837c4eb58bae9416fea0d0990c50ba"} Jan 09 13:51:54 crc kubenswrapper[4919]: I0109 13:51:54.167723 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"58571fe0-89fb-41ed-a3eb-b04d6224dd1d","Type":"ContainerStarted","Data":"ee3a834b22a702f633e536e4a6605e77678deea9548681e3a882892bca8b2fdd"} Jan 09 13:51:54 crc kubenswrapper[4919]: I0109 13:51:54.190696 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"735040be-a013-45ef-a590-2819585ea47c","Type":"ContainerStarted","Data":"6a077ad327b527ce0badd672525513d757042078667d87206b5604ab3a5ce3f3"} Jan 09 13:51:54 crc kubenswrapper[4919]: I0109 13:51:54.190744 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-internal-api-0" event={"ID":"735040be-a013-45ef-a590-2819585ea47c","Type":"ContainerStarted","Data":"864507e5704da3839fb37db01f52f8b987794bff0ada425639094d9a3ea3669f"} Jan 09 13:51:54 crc kubenswrapper[4919]: I0109 13:51:54.195316 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06e40248-357e-4534-a244-dad9a65b0fe7","Type":"ContainerStarted","Data":"3d2b135503c55001141535804a32a150cb63c734c717d39f62d67086091d5044"} Jan 09 13:51:54 crc kubenswrapper[4919]: I0109 13:51:54.195350 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06e40248-357e-4534-a244-dad9a65b0fe7","Type":"ContainerStarted","Data":"6bd5cb4691d9f05c225babdd01fa226742f2d7406ade86a0c426eae4cc97e3bc"} Jan 09 13:51:54 crc kubenswrapper[4919]: I0109 13:51:54.213436 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.213417145 podStartE2EDuration="3.213417145s" podCreationTimestamp="2026-01-09 13:51:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:51:54.212353608 +0000 UTC m=+1293.760193058" watchObservedRunningTime="2026-01-09 13:51:54.213417145 +0000 UTC m=+1293.761256585" Jan 09 13:51:54 crc kubenswrapper[4919]: I0109 13:51:54.301532 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rb9m6"] Jan 09 13:51:54 crc kubenswrapper[4919]: I0109 13:51:54.303299 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rb9m6" Jan 09 13:51:54 crc kubenswrapper[4919]: I0109 13:51:54.313917 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 09 13:51:54 crc kubenswrapper[4919]: I0109 13:51:54.314106 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-tp6xs" Jan 09 13:51:54 crc kubenswrapper[4919]: I0109 13:51:54.314293 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 09 13:51:54 crc kubenswrapper[4919]: I0109 13:51:54.317151 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rb9m6"] Jan 09 13:51:54 crc kubenswrapper[4919]: I0109 13:51:54.351921 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7dc0da4-2c09-43c9-bc81-ff75106ce3c7-scripts\") pod \"nova-cell0-conductor-db-sync-rb9m6\" (UID: \"b7dc0da4-2c09-43c9-bc81-ff75106ce3c7\") " pod="openstack/nova-cell0-conductor-db-sync-rb9m6" Jan 09 13:51:54 crc kubenswrapper[4919]: I0109 13:51:54.351970 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttgmt\" (UniqueName: \"kubernetes.io/projected/b7dc0da4-2c09-43c9-bc81-ff75106ce3c7-kube-api-access-ttgmt\") pod \"nova-cell0-conductor-db-sync-rb9m6\" (UID: \"b7dc0da4-2c09-43c9-bc81-ff75106ce3c7\") " pod="openstack/nova-cell0-conductor-db-sync-rb9m6" Jan 09 13:51:54 crc kubenswrapper[4919]: I0109 13:51:54.351998 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7dc0da4-2c09-43c9-bc81-ff75106ce3c7-combined-ca-bundle\") pod 
\"nova-cell0-conductor-db-sync-rb9m6\" (UID: \"b7dc0da4-2c09-43c9-bc81-ff75106ce3c7\") " pod="openstack/nova-cell0-conductor-db-sync-rb9m6" Jan 09 13:51:54 crc kubenswrapper[4919]: I0109 13:51:54.352055 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7dc0da4-2c09-43c9-bc81-ff75106ce3c7-config-data\") pod \"nova-cell0-conductor-db-sync-rb9m6\" (UID: \"b7dc0da4-2c09-43c9-bc81-ff75106ce3c7\") " pod="openstack/nova-cell0-conductor-db-sync-rb9m6" Jan 09 13:51:54 crc kubenswrapper[4919]: I0109 13:51:54.453548 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7dc0da4-2c09-43c9-bc81-ff75106ce3c7-scripts\") pod \"nova-cell0-conductor-db-sync-rb9m6\" (UID: \"b7dc0da4-2c09-43c9-bc81-ff75106ce3c7\") " pod="openstack/nova-cell0-conductor-db-sync-rb9m6" Jan 09 13:51:54 crc kubenswrapper[4919]: I0109 13:51:54.453877 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttgmt\" (UniqueName: \"kubernetes.io/projected/b7dc0da4-2c09-43c9-bc81-ff75106ce3c7-kube-api-access-ttgmt\") pod \"nova-cell0-conductor-db-sync-rb9m6\" (UID: \"b7dc0da4-2c09-43c9-bc81-ff75106ce3c7\") " pod="openstack/nova-cell0-conductor-db-sync-rb9m6" Jan 09 13:51:54 crc kubenswrapper[4919]: I0109 13:51:54.454028 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7dc0da4-2c09-43c9-bc81-ff75106ce3c7-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-rb9m6\" (UID: \"b7dc0da4-2c09-43c9-bc81-ff75106ce3c7\") " pod="openstack/nova-cell0-conductor-db-sync-rb9m6" Jan 09 13:51:54 crc kubenswrapper[4919]: I0109 13:51:54.454099 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7dc0da4-2c09-43c9-bc81-ff75106ce3c7-config-data\") pod \"nova-cell0-conductor-db-sync-rb9m6\" (UID: \"b7dc0da4-2c09-43c9-bc81-ff75106ce3c7\") " pod="openstack/nova-cell0-conductor-db-sync-rb9m6" Jan 09 13:51:54 crc kubenswrapper[4919]: I0109 13:51:54.461931 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7dc0da4-2c09-43c9-bc81-ff75106ce3c7-scripts\") pod \"nova-cell0-conductor-db-sync-rb9m6\" (UID: \"b7dc0da4-2c09-43c9-bc81-ff75106ce3c7\") " pod="openstack/nova-cell0-conductor-db-sync-rb9m6" Jan 09 13:51:54 crc kubenswrapper[4919]: I0109 13:51:54.462040 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7dc0da4-2c09-43c9-bc81-ff75106ce3c7-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-rb9m6\" (UID: \"b7dc0da4-2c09-43c9-bc81-ff75106ce3c7\") " pod="openstack/nova-cell0-conductor-db-sync-rb9m6" Jan 09 13:51:54 crc kubenswrapper[4919]: I0109 13:51:54.462155 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7dc0da4-2c09-43c9-bc81-ff75106ce3c7-config-data\") pod \"nova-cell0-conductor-db-sync-rb9m6\" (UID: \"b7dc0da4-2c09-43c9-bc81-ff75106ce3c7\") " pod="openstack/nova-cell0-conductor-db-sync-rb9m6" Jan 09 13:51:54 crc kubenswrapper[4919]: I0109 13:51:54.472748 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttgmt\" (UniqueName: 
\"kubernetes.io/projected/b7dc0da4-2c09-43c9-bc81-ff75106ce3c7-kube-api-access-ttgmt\") pod \"nova-cell0-conductor-db-sync-rb9m6\" (UID: \"b7dc0da4-2c09-43c9-bc81-ff75106ce3c7\") " pod="openstack/nova-cell0-conductor-db-sync-rb9m6" Jan 09 13:51:54 crc kubenswrapper[4919]: I0109 13:51:54.693510 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rb9m6" Jan 09 13:51:55 crc kubenswrapper[4919]: I0109 13:51:55.160017 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rb9m6"] Jan 09 13:51:55 crc kubenswrapper[4919]: W0109 13:51:55.170331 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7dc0da4_2c09_43c9_bc81_ff75106ce3c7.slice/crio-a02c6ac4cb3264a15aa16269e50ab4d3b018122ccbc1994cfc30e2bb84290ab0 WatchSource:0}: Error finding container a02c6ac4cb3264a15aa16269e50ab4d3b018122ccbc1994cfc30e2bb84290ab0: Status 404 returned error can't find the container with id a02c6ac4cb3264a15aa16269e50ab4d3b018122ccbc1994cfc30e2bb84290ab0 Jan 09 13:51:55 crc kubenswrapper[4919]: I0109 13:51:55.205671 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06e40248-357e-4534-a244-dad9a65b0fe7","Type":"ContainerStarted","Data":"9eb3deec628cee38cd38b36447e1e9e2b245e6bfeed3d3ef34cb5cb5029db3b8"} Jan 09 13:51:55 crc kubenswrapper[4919]: I0109 13:51:55.206859 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rb9m6" event={"ID":"b7dc0da4-2c09-43c9-bc81-ff75106ce3c7","Type":"ContainerStarted","Data":"a02c6ac4cb3264a15aa16269e50ab4d3b018122ccbc1994cfc30e2bb84290ab0"} Jan 09 13:51:55 crc kubenswrapper[4919]: I0109 13:51:55.210333 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"735040be-a013-45ef-a590-2819585ea47c","Type":"ContainerStarted","Data":"1df225f14f15f511ca570d94b658561690118c0c9c428a2397c005c5e700e60e"} Jan 09 13:51:55 crc kubenswrapper[4919]: I0109 13:51:55.230116 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.230096303 podStartE2EDuration="3.230096303s" podCreationTimestamp="2026-01-09 13:51:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:51:55.22794559 +0000 UTC m=+1294.775785040" watchObservedRunningTime="2026-01-09 13:51:55.230096303 +0000 UTC m=+1294.777935753" Jan 09 13:51:56 crc kubenswrapper[4919]: I0109 13:51:56.228432 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06e40248-357e-4534-a244-dad9a65b0fe7","Type":"ContainerStarted","Data":"d59a238dba47eb795bd37ba033ce2a4a1da05bc54df79ea2a6d2b174013da6bb"} Jan 09 13:52:01 crc kubenswrapper[4919]: I0109 13:52:01.881798 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 09 13:52:01 crc kubenswrapper[4919]: I0109 13:52:01.882368 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 09 13:52:01 crc kubenswrapper[4919]: I0109 13:52:01.922842 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 09 13:52:01 crc kubenswrapper[4919]: I0109 13:52:01.939275 
4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 09 13:52:02 crc kubenswrapper[4919]: I0109 13:52:02.294959 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 09 13:52:02 crc kubenswrapper[4919]: I0109 13:52:02.295020 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 09 13:52:02 crc kubenswrapper[4919]: I0109 13:52:02.297123 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 09 13:52:02 crc kubenswrapper[4919]: I0109 13:52:02.532803 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 09 13:52:02 crc kubenswrapper[4919]: I0109 13:52:02.532849 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 09 13:52:02 crc kubenswrapper[4919]: I0109 13:52:02.591157 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 09 13:52:02 crc kubenswrapper[4919]: I0109 13:52:02.604731 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 09 13:52:03 crc kubenswrapper[4919]: I0109 13:52:03.301682 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 09 13:52:03 crc kubenswrapper[4919]: I0109 13:52:03.302969 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 09 13:52:04 crc kubenswrapper[4919]: I0109 13:52:04.311669 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="06e40248-357e-4534-a244-dad9a65b0fe7" containerName="ceilometer-central-agent" containerID="cri-o://3d2b135503c55001141535804a32a150cb63c734c717d39f62d67086091d5044" gracePeriod=30 Jan 09 13:52:04 crc kubenswrapper[4919]: I0109 13:52:04.311497 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06e40248-357e-4534-a244-dad9a65b0fe7","Type":"ContainerStarted","Data":"adec667d599841c7654de621609b299404395576f7f8cdb2c5613e3a17bef0a1"} Jan 09 13:52:04 crc kubenswrapper[4919]: I0109 13:52:04.312470 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 09 13:52:04 crc kubenswrapper[4919]: I0109 13:52:04.311750 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="06e40248-357e-4534-a244-dad9a65b0fe7" containerName="proxy-httpd" containerID="cri-o://adec667d599841c7654de621609b299404395576f7f8cdb2c5613e3a17bef0a1" gracePeriod=30 Jan 09 13:52:04 crc kubenswrapper[4919]: I0109 13:52:04.311737 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="06e40248-357e-4534-a244-dad9a65b0fe7" containerName="sg-core" containerID="cri-o://d59a238dba47eb795bd37ba033ce2a4a1da05bc54df79ea2a6d2b174013da6bb" gracePeriod=30 Jan 09 13:52:04 crc kubenswrapper[4919]: I0109 13:52:04.311799 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="06e40248-357e-4534-a244-dad9a65b0fe7" containerName="ceilometer-notification-agent" 
containerID="cri-o://9eb3deec628cee38cd38b36447e1e9e2b245e6bfeed3d3ef34cb5cb5029db3b8" gracePeriod=30 Jan 09 13:52:04 crc kubenswrapper[4919]: I0109 13:52:04.313106 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rb9m6" event={"ID":"b7dc0da4-2c09-43c9-bc81-ff75106ce3c7","Type":"ContainerStarted","Data":"6711e5e1fab242feb1d2916c90800292edf2c3e009f05f085197bce499324ec0"} Jan 09 13:52:04 crc kubenswrapper[4919]: I0109 13:52:04.313133 4919 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 09 13:52:04 crc kubenswrapper[4919]: I0109 13:52:04.313146 4919 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 09 13:52:04 crc kubenswrapper[4919]: I0109 13:52:04.343812 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.326952725 podStartE2EDuration="12.343793189s" podCreationTimestamp="2026-01-09 13:51:52 +0000 UTC" firstStartedPulling="2026-01-09 13:51:53.153344747 +0000 UTC m=+1292.701184197" lastFinishedPulling="2026-01-09 13:52:03.170185211 +0000 UTC m=+1302.718024661" observedRunningTime="2026-01-09 13:52:04.3370202 +0000 UTC m=+1303.884859650" watchObservedRunningTime="2026-01-09 13:52:04.343793189 +0000 UTC m=+1303.891632629" Jan 09 13:52:04 crc kubenswrapper[4919]: I0109 13:52:04.365757 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-rb9m6" podStartSLOduration=1.875349938 podStartE2EDuration="10.365735015s" podCreationTimestamp="2026-01-09 13:51:54 +0000 UTC" firstStartedPulling="2026-01-09 13:51:55.172269353 +0000 UTC m=+1294.720108803" lastFinishedPulling="2026-01-09 13:52:03.66265443 +0000 UTC m=+1303.210493880" observedRunningTime="2026-01-09 13:52:04.35467901 +0000 UTC m=+1303.902518470" watchObservedRunningTime="2026-01-09 13:52:04.365735015 +0000 UTC m=+1303.913574465" Jan 09 13:52:04 crc kubenswrapper[4919]: I0109 13:52:04.520756 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 09 13:52:04 crc kubenswrapper[4919]: I0109 13:52:04.524582 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 09 13:52:05 crc kubenswrapper[4919]: I0109 13:52:05.330176 4919 generic.go:334] "Generic (PLEG): container finished" podID="06e40248-357e-4534-a244-dad9a65b0fe7" containerID="adec667d599841c7654de621609b299404395576f7f8cdb2c5613e3a17bef0a1" exitCode=0 Jan 09 13:52:05 crc kubenswrapper[4919]: I0109 13:52:05.330505 4919 generic.go:334] "Generic (PLEG): container finished" podID="06e40248-357e-4534-a244-dad9a65b0fe7" containerID="d59a238dba47eb795bd37ba033ce2a4a1da05bc54df79ea2a6d2b174013da6bb" exitCode=2 Jan 09 13:52:05 crc kubenswrapper[4919]: I0109 13:52:05.330515 4919 generic.go:334] "Generic (PLEG): container finished" podID="06e40248-357e-4534-a244-dad9a65b0fe7" containerID="3d2b135503c55001141535804a32a150cb63c734c717d39f62d67086091d5044" exitCode=0 Jan 09 13:52:05 crc kubenswrapper[4919]: I0109 13:52:05.330250 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06e40248-357e-4534-a244-dad9a65b0fe7","Type":"ContainerDied","Data":"adec667d599841c7654de621609b299404395576f7f8cdb2c5613e3a17bef0a1"} Jan 09 13:52:05 crc kubenswrapper[4919]: I0109 13:52:05.330616 4919 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 09 13:52:05 crc 
kubenswrapper[4919]: I0109 13:52:05.330626 4919 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 09 13:52:05 crc kubenswrapper[4919]: I0109 13:52:05.330671 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06e40248-357e-4534-a244-dad9a65b0fe7","Type":"ContainerDied","Data":"d59a238dba47eb795bd37ba033ce2a4a1da05bc54df79ea2a6d2b174013da6bb"} Jan 09 13:52:05 crc kubenswrapper[4919]: I0109 13:52:05.330715 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06e40248-357e-4534-a244-dad9a65b0fe7","Type":"ContainerDied","Data":"3d2b135503c55001141535804a32a150cb63c734c717d39f62d67086091d5044"} Jan 09 13:52:05 crc kubenswrapper[4919]: I0109 13:52:05.637482 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 09 13:52:05 crc kubenswrapper[4919]: I0109 13:52:05.685743 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 09 13:52:05 crc kubenswrapper[4919]: I0109 13:52:05.753512 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 09 13:52:05 crc kubenswrapper[4919]: I0109 13:52:05.831676 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06e40248-357e-4534-a244-dad9a65b0fe7-scripts\") pod \"06e40248-357e-4534-a244-dad9a65b0fe7\" (UID: \"06e40248-357e-4534-a244-dad9a65b0fe7\") " Jan 09 13:52:05 crc kubenswrapper[4919]: I0109 13:52:05.831796 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/06e40248-357e-4534-a244-dad9a65b0fe7-sg-core-conf-yaml\") pod \"06e40248-357e-4534-a244-dad9a65b0fe7\" (UID: \"06e40248-357e-4534-a244-dad9a65b0fe7\") " Jan 09 13:52:05 crc kubenswrapper[4919]: I0109 13:52:05.831842 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06e40248-357e-4534-a244-dad9a65b0fe7-run-httpd\") pod \"06e40248-357e-4534-a244-dad9a65b0fe7\" (UID: \"06e40248-357e-4534-a244-dad9a65b0fe7\") " Jan 09 13:52:05 crc kubenswrapper[4919]: I0109 13:52:05.831909 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06e40248-357e-4534-a244-dad9a65b0fe7-config-data\") pod \"06e40248-357e-4534-a244-dad9a65b0fe7\" (UID: \"06e40248-357e-4534-a244-dad9a65b0fe7\") " Jan 09 13:52:05 crc kubenswrapper[4919]: I0109 13:52:05.831938 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06e40248-357e-4534-a244-dad9a65b0fe7-log-httpd\") pod \"06e40248-357e-4534-a244-dad9a65b0fe7\" (UID: \"06e40248-357e-4534-a244-dad9a65b0fe7\") " Jan 09 13:52:05 crc kubenswrapper[4919]: I0109 13:52:05.831999 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06e40248-357e-4534-a244-dad9a65b0fe7-combined-ca-bundle\") pod \"06e40248-357e-4534-a244-dad9a65b0fe7\" (UID: \"06e40248-357e-4534-a244-dad9a65b0fe7\") " Jan 09 13:52:05 crc kubenswrapper[4919]: I0109 13:52:05.832120 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v62hk\" (UniqueName: 
\"kubernetes.io/projected/06e40248-357e-4534-a244-dad9a65b0fe7-kube-api-access-v62hk\") pod \"06e40248-357e-4534-a244-dad9a65b0fe7\" (UID: \"06e40248-357e-4534-a244-dad9a65b0fe7\") " Jan 09 13:52:05 crc kubenswrapper[4919]: I0109 13:52:05.832390 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06e40248-357e-4534-a244-dad9a65b0fe7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "06e40248-357e-4534-a244-dad9a65b0fe7" (UID: "06e40248-357e-4534-a244-dad9a65b0fe7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:52:05 crc kubenswrapper[4919]: I0109 13:52:05.832679 4919 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06e40248-357e-4534-a244-dad9a65b0fe7-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 09 13:52:05 crc kubenswrapper[4919]: I0109 13:52:05.832815 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06e40248-357e-4534-a244-dad9a65b0fe7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "06e40248-357e-4534-a244-dad9a65b0fe7" (UID: "06e40248-357e-4534-a244-dad9a65b0fe7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:52:05 crc kubenswrapper[4919]: I0109 13:52:05.839478 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06e40248-357e-4534-a244-dad9a65b0fe7-scripts" (OuterVolumeSpecName: "scripts") pod "06e40248-357e-4534-a244-dad9a65b0fe7" (UID: "06e40248-357e-4534-a244-dad9a65b0fe7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:52:05 crc kubenswrapper[4919]: I0109 13:52:05.840455 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06e40248-357e-4534-a244-dad9a65b0fe7-kube-api-access-v62hk" (OuterVolumeSpecName: "kube-api-access-v62hk") pod "06e40248-357e-4534-a244-dad9a65b0fe7" (UID: "06e40248-357e-4534-a244-dad9a65b0fe7"). InnerVolumeSpecName "kube-api-access-v62hk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:52:05 crc kubenswrapper[4919]: I0109 13:52:05.879973 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06e40248-357e-4534-a244-dad9a65b0fe7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "06e40248-357e-4534-a244-dad9a65b0fe7" (UID: "06e40248-357e-4534-a244-dad9a65b0fe7"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:52:05 crc kubenswrapper[4919]: I0109 13:52:05.934910 4919 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06e40248-357e-4534-a244-dad9a65b0fe7-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 09 13:52:05 crc kubenswrapper[4919]: I0109 13:52:05.935232 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v62hk\" (UniqueName: \"kubernetes.io/projected/06e40248-357e-4534-a244-dad9a65b0fe7-kube-api-access-v62hk\") on node \"crc\" DevicePath \"\"" Jan 09 13:52:05 crc kubenswrapper[4919]: I0109 13:52:05.935247 4919 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06e40248-357e-4534-a244-dad9a65b0fe7-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 13:52:05 crc kubenswrapper[4919]: I0109 13:52:05.935338 4919 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/06e40248-357e-4534-a244-dad9a65b0fe7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 09 13:52:05 crc kubenswrapper[4919]: I0109 13:52:05.947440 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06e40248-357e-4534-a244-dad9a65b0fe7-config-data" (OuterVolumeSpecName: "config-data") pod "06e40248-357e-4534-a244-dad9a65b0fe7" (UID: "06e40248-357e-4534-a244-dad9a65b0fe7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:52:05 crc kubenswrapper[4919]: I0109 13:52:05.962978 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06e40248-357e-4534-a244-dad9a65b0fe7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "06e40248-357e-4534-a244-dad9a65b0fe7" (UID: "06e40248-357e-4534-a244-dad9a65b0fe7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:52:06 crc kubenswrapper[4919]: I0109 13:52:06.038393 4919 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06e40248-357e-4534-a244-dad9a65b0fe7-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 13:52:06 crc kubenswrapper[4919]: I0109 13:52:06.038427 4919 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06e40248-357e-4534-a244-dad9a65b0fe7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 13:52:06 crc kubenswrapper[4919]: I0109 13:52:06.346511 4919 generic.go:334] "Generic (PLEG): container finished" podID="06e40248-357e-4534-a244-dad9a65b0fe7" containerID="9eb3deec628cee38cd38b36447e1e9e2b245e6bfeed3d3ef34cb5cb5029db3b8" exitCode=0 Jan 09 13:52:06 crc kubenswrapper[4919]: I0109 13:52:06.347364 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 09 13:52:06 crc kubenswrapper[4919]: I0109 13:52:06.348914 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06e40248-357e-4534-a244-dad9a65b0fe7","Type":"ContainerDied","Data":"9eb3deec628cee38cd38b36447e1e9e2b245e6bfeed3d3ef34cb5cb5029db3b8"} Jan 09 13:52:06 crc kubenswrapper[4919]: I0109 13:52:06.348977 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06e40248-357e-4534-a244-dad9a65b0fe7","Type":"ContainerDied","Data":"6bd5cb4691d9f05c225babdd01fa226742f2d7406ade86a0c426eae4cc97e3bc"} Jan 09 13:52:06 crc kubenswrapper[4919]: I0109 13:52:06.349003 4919 scope.go:117] "RemoveContainer" containerID="adec667d599841c7654de621609b299404395576f7f8cdb2c5613e3a17bef0a1" Jan 09 13:52:06 crc kubenswrapper[4919]: I0109 13:52:06.374388 4919 scope.go:117] "RemoveContainer" containerID="d59a238dba47eb795bd37ba033ce2a4a1da05bc54df79ea2a6d2b174013da6bb" Jan 09 13:52:06 crc kubenswrapper[4919]: I0109 13:52:06.394786 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 09 13:52:06 crc kubenswrapper[4919]: I0109 13:52:06.407104 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 09 13:52:06 crc kubenswrapper[4919]: I0109 13:52:06.425625 4919 scope.go:117] "RemoveContainer" containerID="9eb3deec628cee38cd38b36447e1e9e2b245e6bfeed3d3ef34cb5cb5029db3b8" Jan 09 13:52:06 crc kubenswrapper[4919]: I0109 13:52:06.433516 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 09 13:52:06 crc kubenswrapper[4919]: E0109 13:52:06.434015 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06e40248-357e-4534-a244-dad9a65b0fe7" containerName="sg-core" Jan 09 13:52:06 crc kubenswrapper[4919]: I0109 13:52:06.434037 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="06e40248-357e-4534-a244-dad9a65b0fe7" containerName="sg-core" Jan 09 13:52:06 crc kubenswrapper[4919]: E0109 13:52:06.434056 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06e40248-357e-4534-a244-dad9a65b0fe7" containerName="ceilometer-central-agent" Jan 09 13:52:06 crc kubenswrapper[4919]: I0109 13:52:06.434067 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="06e40248-357e-4534-a244-dad9a65b0fe7" containerName="ceilometer-central-agent" Jan 09 13:52:06 crc kubenswrapper[4919]: E0109 13:52:06.434085 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06e40248-357e-4534-a244-dad9a65b0fe7" containerName="ceilometer-notification-agent" Jan 09 13:52:06 crc kubenswrapper[4919]: I0109 13:52:06.434094 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="06e40248-357e-4534-a244-dad9a65b0fe7" containerName="ceilometer-notification-agent" Jan 09 13:52:06 crc kubenswrapper[4919]: E0109 13:52:06.434129 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06e40248-357e-4534-a244-dad9a65b0fe7" containerName="proxy-httpd" Jan 09 13:52:06 crc kubenswrapper[4919]: I0109 13:52:06.434139 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="06e40248-357e-4534-a244-dad9a65b0fe7" containerName="proxy-httpd" Jan 09 13:52:06 crc kubenswrapper[4919]: I0109 13:52:06.434377 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="06e40248-357e-4534-a244-dad9a65b0fe7" containerName="ceilometer-central-agent" Jan 09 13:52:06 crc kubenswrapper[4919]: I0109 13:52:06.434396 4919 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="06e40248-357e-4534-a244-dad9a65b0fe7" containerName="ceilometer-notification-agent" Jan 09 13:52:06 crc kubenswrapper[4919]: I0109 13:52:06.434415 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="06e40248-357e-4534-a244-dad9a65b0fe7" containerName="proxy-httpd" Jan 09 13:52:06 crc kubenswrapper[4919]: I0109 13:52:06.434430 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="06e40248-357e-4534-a244-dad9a65b0fe7" containerName="sg-core" Jan 09 13:52:06 crc kubenswrapper[4919]: I0109 13:52:06.436514 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 09 13:52:06 crc kubenswrapper[4919]: I0109 13:52:06.439695 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 09 13:52:06 crc kubenswrapper[4919]: I0109 13:52:06.439989 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 09 13:52:06 crc kubenswrapper[4919]: I0109 13:52:06.483514 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 09 13:52:06 crc kubenswrapper[4919]: I0109 13:52:06.491875 4919 scope.go:117] "RemoveContainer" containerID="3d2b135503c55001141535804a32a150cb63c734c717d39f62d67086091d5044" Jan 09 13:52:06 crc kubenswrapper[4919]: I0109 13:52:06.518959 4919 scope.go:117] "RemoveContainer" containerID="adec667d599841c7654de621609b299404395576f7f8cdb2c5613e3a17bef0a1" Jan 09 13:52:06 crc kubenswrapper[4919]: E0109 13:52:06.533470 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adec667d599841c7654de621609b299404395576f7f8cdb2c5613e3a17bef0a1\": container with ID starting with adec667d599841c7654de621609b299404395576f7f8cdb2c5613e3a17bef0a1 not found: ID does not exist" containerID="adec667d599841c7654de621609b299404395576f7f8cdb2c5613e3a17bef0a1" Jan 09 13:52:06 crc kubenswrapper[4919]: I0109 13:52:06.533527 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adec667d599841c7654de621609b299404395576f7f8cdb2c5613e3a17bef0a1"} err="failed to get container status \"adec667d599841c7654de621609b299404395576f7f8cdb2c5613e3a17bef0a1\": rpc error: code = NotFound desc = could not find container \"adec667d599841c7654de621609b299404395576f7f8cdb2c5613e3a17bef0a1\": container with ID starting with adec667d599841c7654de621609b299404395576f7f8cdb2c5613e3a17bef0a1 not found: ID does not exist" Jan 09 13:52:06 crc kubenswrapper[4919]: I0109 13:52:06.533554 4919 scope.go:117] "RemoveContainer" containerID="d59a238dba47eb795bd37ba033ce2a4a1da05bc54df79ea2a6d2b174013da6bb" Jan 09 13:52:06 crc kubenswrapper[4919]: E0109 13:52:06.534343 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d59a238dba47eb795bd37ba033ce2a4a1da05bc54df79ea2a6d2b174013da6bb\": container with ID starting with d59a238dba47eb795bd37ba033ce2a4a1da05bc54df79ea2a6d2b174013da6bb not found: ID does not exist" containerID="d59a238dba47eb795bd37ba033ce2a4a1da05bc54df79ea2a6d2b174013da6bb" Jan 09 13:52:06 crc kubenswrapper[4919]: I0109 13:52:06.534391 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d59a238dba47eb795bd37ba033ce2a4a1da05bc54df79ea2a6d2b174013da6bb"} err="failed to get container status \"d59a238dba47eb795bd37ba033ce2a4a1da05bc54df79ea2a6d2b174013da6bb\": rpc 
error: code = NotFound desc = could not find container \"d59a238dba47eb795bd37ba033ce2a4a1da05bc54df79ea2a6d2b174013da6bb\": container with ID starting with d59a238dba47eb795bd37ba033ce2a4a1da05bc54df79ea2a6d2b174013da6bb not found: ID does not exist" Jan 09 13:52:06 crc kubenswrapper[4919]: I0109 13:52:06.534419 4919 scope.go:117] "RemoveContainer" containerID="9eb3deec628cee38cd38b36447e1e9e2b245e6bfeed3d3ef34cb5cb5029db3b8" Jan 09 13:52:06 crc kubenswrapper[4919]: E0109 13:52:06.537993 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9eb3deec628cee38cd38b36447e1e9e2b245e6bfeed3d3ef34cb5cb5029db3b8\": container with ID starting with 9eb3deec628cee38cd38b36447e1e9e2b245e6bfeed3d3ef34cb5cb5029db3b8 not found: ID does not exist" containerID="9eb3deec628cee38cd38b36447e1e9e2b245e6bfeed3d3ef34cb5cb5029db3b8" Jan 09 13:52:06 crc kubenswrapper[4919]: I0109 13:52:06.538059 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9eb3deec628cee38cd38b36447e1e9e2b245e6bfeed3d3ef34cb5cb5029db3b8"} err="failed to get container status \"9eb3deec628cee38cd38b36447e1e9e2b245e6bfeed3d3ef34cb5cb5029db3b8\": rpc error: code = NotFound desc = could not find container \"9eb3deec628cee38cd38b36447e1e9e2b245e6bfeed3d3ef34cb5cb5029db3b8\": container with ID starting with 9eb3deec628cee38cd38b36447e1e9e2b245e6bfeed3d3ef34cb5cb5029db3b8 not found: ID does not exist" Jan 09 13:52:06 crc kubenswrapper[4919]: I0109 13:52:06.538106 4919 scope.go:117] "RemoveContainer" containerID="3d2b135503c55001141535804a32a150cb63c734c717d39f62d67086091d5044" Jan 09 13:52:06 crc kubenswrapper[4919]: E0109 13:52:06.539862 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d2b135503c55001141535804a32a150cb63c734c717d39f62d67086091d5044\": container with ID starting with 3d2b135503c55001141535804a32a150cb63c734c717d39f62d67086091d5044 not found: ID does not exist" containerID="3d2b135503c55001141535804a32a150cb63c734c717d39f62d67086091d5044" Jan 09 13:52:06 crc kubenswrapper[4919]: I0109 13:52:06.539885 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d2b135503c55001141535804a32a150cb63c734c717d39f62d67086091d5044"} err="failed to get container status \"3d2b135503c55001141535804a32a150cb63c734c717d39f62d67086091d5044\": rpc error: code = NotFound desc = could not find container \"3d2b135503c55001141535804a32a150cb63c734c717d39f62d67086091d5044\": container with ID starting with 3d2b135503c55001141535804a32a150cb63c734c717d39f62d67086091d5044 not found: ID does not exist" Jan 09 13:52:06 crc kubenswrapper[4919]: I0109 13:52:06.651318 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01076e6d-3d6d-41d3-ba92-c367f1540745-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"01076e6d-3d6d-41d3-ba92-c367f1540745\") " pod="openstack/ceilometer-0" Jan 09 13:52:06 crc kubenswrapper[4919]: I0109 13:52:06.651385 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqsqg\" (UniqueName: \"kubernetes.io/projected/01076e6d-3d6d-41d3-ba92-c367f1540745-kube-api-access-sqsqg\") pod \"ceilometer-0\" (UID: \"01076e6d-3d6d-41d3-ba92-c367f1540745\") " pod="openstack/ceilometer-0" Jan 09 13:52:06 crc kubenswrapper[4919]: I0109 
13:52:06.651470 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01076e6d-3d6d-41d3-ba92-c367f1540745-run-httpd\") pod \"ceilometer-0\" (UID: \"01076e6d-3d6d-41d3-ba92-c367f1540745\") " pod="openstack/ceilometer-0" Jan 09 13:52:06 crc kubenswrapper[4919]: I0109 13:52:06.651506 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01076e6d-3d6d-41d3-ba92-c367f1540745-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"01076e6d-3d6d-41d3-ba92-c367f1540745\") " pod="openstack/ceilometer-0" Jan 09 13:52:06 crc kubenswrapper[4919]: I0109 13:52:06.651642 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01076e6d-3d6d-41d3-ba92-c367f1540745-scripts\") pod \"ceilometer-0\" (UID: \"01076e6d-3d6d-41d3-ba92-c367f1540745\") " pod="openstack/ceilometer-0" Jan 09 13:52:06 crc kubenswrapper[4919]: I0109 13:52:06.651689 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01076e6d-3d6d-41d3-ba92-c367f1540745-log-httpd\") pod \"ceilometer-0\" (UID: \"01076e6d-3d6d-41d3-ba92-c367f1540745\") " pod="openstack/ceilometer-0" Jan 09 13:52:06 crc kubenswrapper[4919]: I0109 13:52:06.651743 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01076e6d-3d6d-41d3-ba92-c367f1540745-config-data\") pod \"ceilometer-0\" (UID: \"01076e6d-3d6d-41d3-ba92-c367f1540745\") " pod="openstack/ceilometer-0" Jan 09 13:52:06 crc kubenswrapper[4919]: I0109 13:52:06.753323 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqsqg\" (UniqueName: \"kubernetes.io/projected/01076e6d-3d6d-41d3-ba92-c367f1540745-kube-api-access-sqsqg\") pod \"ceilometer-0\" (UID: \"01076e6d-3d6d-41d3-ba92-c367f1540745\") " pod="openstack/ceilometer-0" Jan 09 13:52:06 crc kubenswrapper[4919]: I0109 13:52:06.753397 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01076e6d-3d6d-41d3-ba92-c367f1540745-run-httpd\") pod \"ceilometer-0\" (UID: \"01076e6d-3d6d-41d3-ba92-c367f1540745\") " pod="openstack/ceilometer-0" Jan 09 13:52:06 crc kubenswrapper[4919]: I0109 13:52:06.753427 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01076e6d-3d6d-41d3-ba92-c367f1540745-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"01076e6d-3d6d-41d3-ba92-c367f1540745\") " pod="openstack/ceilometer-0" Jan 09 13:52:06 crc kubenswrapper[4919]: I0109 13:52:06.753494 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01076e6d-3d6d-41d3-ba92-c367f1540745-scripts\") pod \"ceilometer-0\" (UID: \"01076e6d-3d6d-41d3-ba92-c367f1540745\") " pod="openstack/ceilometer-0" Jan 09 13:52:06 crc kubenswrapper[4919]: I0109 13:52:06.753524 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01076e6d-3d6d-41d3-ba92-c367f1540745-log-httpd\") pod \"ceilometer-0\" (UID: \"01076e6d-3d6d-41d3-ba92-c367f1540745\") " 
pod="openstack/ceilometer-0" Jan 09 13:52:06 crc kubenswrapper[4919]: I0109 13:52:06.753545 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01076e6d-3d6d-41d3-ba92-c367f1540745-config-data\") pod \"ceilometer-0\" (UID: \"01076e6d-3d6d-41d3-ba92-c367f1540745\") " pod="openstack/ceilometer-0" Jan 09 13:52:06 crc kubenswrapper[4919]: I0109 13:52:06.753605 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01076e6d-3d6d-41d3-ba92-c367f1540745-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"01076e6d-3d6d-41d3-ba92-c367f1540745\") " pod="openstack/ceilometer-0" Jan 09 13:52:06 crc kubenswrapper[4919]: I0109 13:52:06.754818 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01076e6d-3d6d-41d3-ba92-c367f1540745-run-httpd\") pod \"ceilometer-0\" (UID: \"01076e6d-3d6d-41d3-ba92-c367f1540745\") " pod="openstack/ceilometer-0" Jan 09 13:52:06 crc kubenswrapper[4919]: I0109 13:52:06.754947 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01076e6d-3d6d-41d3-ba92-c367f1540745-log-httpd\") pod \"ceilometer-0\" (UID: \"01076e6d-3d6d-41d3-ba92-c367f1540745\") " pod="openstack/ceilometer-0" Jan 09 13:52:06 crc kubenswrapper[4919]: I0109 13:52:06.760121 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01076e6d-3d6d-41d3-ba92-c367f1540745-config-data\") pod \"ceilometer-0\" (UID: \"01076e6d-3d6d-41d3-ba92-c367f1540745\") " pod="openstack/ceilometer-0" Jan 09 13:52:06 crc kubenswrapper[4919]: I0109 13:52:06.760841 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01076e6d-3d6d-41d3-ba92-c367f1540745-scripts\") pod \"ceilometer-0\" (UID: \"01076e6d-3d6d-41d3-ba92-c367f1540745\") " pod="openstack/ceilometer-0" Jan 09 13:52:06 crc kubenswrapper[4919]: I0109 13:52:06.763064 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01076e6d-3d6d-41d3-ba92-c367f1540745-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"01076e6d-3d6d-41d3-ba92-c367f1540745\") " pod="openstack/ceilometer-0" Jan 09 13:52:06 crc kubenswrapper[4919]: I0109 13:52:06.765003 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06e40248-357e-4534-a244-dad9a65b0fe7" path="/var/lib/kubelet/pods/06e40248-357e-4534-a244-dad9a65b0fe7/volumes" Jan 09 13:52:06 crc kubenswrapper[4919]: I0109 13:52:06.772099 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01076e6d-3d6d-41d3-ba92-c367f1540745-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"01076e6d-3d6d-41d3-ba92-c367f1540745\") " pod="openstack/ceilometer-0" Jan 09 13:52:06 crc kubenswrapper[4919]: I0109 13:52:06.773228 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqsqg\" (UniqueName: \"kubernetes.io/projected/01076e6d-3d6d-41d3-ba92-c367f1540745-kube-api-access-sqsqg\") pod \"ceilometer-0\" (UID: \"01076e6d-3d6d-41d3-ba92-c367f1540745\") " pod="openstack/ceilometer-0" Jan 09 13:52:06 crc kubenswrapper[4919]: I0109 13:52:06.777752 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 09 13:52:08 crc kubenswrapper[4919]: W0109 13:52:08.155668 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01076e6d_3d6d_41d3_ba92_c367f1540745.slice/crio-7f927a22cdaf2f9a28dfd891608a2b08fc365500adc4839444a222adca65708e WatchSource:0}: Error finding container 7f927a22cdaf2f9a28dfd891608a2b08fc365500adc4839444a222adca65708e: Status 404 returned error can't find the container with id 7f927a22cdaf2f9a28dfd891608a2b08fc365500adc4839444a222adca65708e Jan 09 13:52:08 crc kubenswrapper[4919]: I0109 13:52:08.173162 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 09 13:52:08 crc kubenswrapper[4919]: I0109 13:52:08.394681 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01076e6d-3d6d-41d3-ba92-c367f1540745","Type":"ContainerStarted","Data":"7f927a22cdaf2f9a28dfd891608a2b08fc365500adc4839444a222adca65708e"} Jan 09 13:52:10 crc kubenswrapper[4919]: I0109 13:52:10.424066 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01076e6d-3d6d-41d3-ba92-c367f1540745","Type":"ContainerStarted","Data":"f05f828e6d1ab457f145328d459b41cf9aa8929ecbd7fd40d610d93c8a36b46d"} Jan 09 13:52:12 crc kubenswrapper[4919]: I0109 13:52:12.440886 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01076e6d-3d6d-41d3-ba92-c367f1540745","Type":"ContainerStarted","Data":"d0f36898849e0cc2e3e94d5c6f6de630f63d18a64212fc02cbb43284f2dcc32e"} Jan 09 13:52:14 crc kubenswrapper[4919]: I0109 13:52:14.457726 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01076e6d-3d6d-41d3-ba92-c367f1540745","Type":"ContainerStarted","Data":"da34d5139a8bee1b580523529df89c516241a81c9e9c0f652503e2a1ecf14b50"} Jan 09 13:52:16 crc kubenswrapper[4919]: I0109 13:52:16.488975 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01076e6d-3d6d-41d3-ba92-c367f1540745","Type":"ContainerStarted","Data":"bdcfa1c6ed0cbc2437efd5cca22b1b7b242c3a977b76891eb24ea46cc6437848"} Jan 09 13:52:16 crc kubenswrapper[4919]: I0109 13:52:16.490456 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 09 13:52:16 crc kubenswrapper[4919]: I0109 13:52:16.541580 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.2153353510000002 podStartE2EDuration="10.541554595s" podCreationTimestamp="2026-01-09 13:52:06 +0000 UTC" firstStartedPulling="2026-01-09 13:52:08.158444921 +0000 UTC m=+1307.706284371" lastFinishedPulling="2026-01-09 13:52:15.484664165 +0000 UTC m=+1315.032503615" observedRunningTime="2026-01-09 13:52:16.512514832 +0000 UTC m=+1316.060354292" watchObservedRunningTime="2026-01-09 13:52:16.541554595 +0000 UTC m=+1316.089394045" Jan 09 13:52:24 crc kubenswrapper[4919]: I0109 13:52:24.561551 4919 generic.go:334] "Generic (PLEG): container finished" podID="b7dc0da4-2c09-43c9-bc81-ff75106ce3c7" containerID="6711e5e1fab242feb1d2916c90800292edf2c3e009f05f085197bce499324ec0" exitCode=0 Jan 09 13:52:24 crc kubenswrapper[4919]: I0109 13:52:24.561644 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rb9m6" 
event={"ID":"b7dc0da4-2c09-43c9-bc81-ff75106ce3c7","Type":"ContainerDied","Data":"6711e5e1fab242feb1d2916c90800292edf2c3e009f05f085197bce499324ec0"} Jan 09 13:52:25 crc kubenswrapper[4919]: I0109 13:52:25.944797 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rb9m6" Jan 09 13:52:26 crc kubenswrapper[4919]: I0109 13:52:26.045383 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7dc0da4-2c09-43c9-bc81-ff75106ce3c7-combined-ca-bundle\") pod \"b7dc0da4-2c09-43c9-bc81-ff75106ce3c7\" (UID: \"b7dc0da4-2c09-43c9-bc81-ff75106ce3c7\") " Jan 09 13:52:26 crc kubenswrapper[4919]: I0109 13:52:26.045730 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7dc0da4-2c09-43c9-bc81-ff75106ce3c7-scripts\") pod \"b7dc0da4-2c09-43c9-bc81-ff75106ce3c7\" (UID: \"b7dc0da4-2c09-43c9-bc81-ff75106ce3c7\") " Jan 09 13:52:26 crc kubenswrapper[4919]: I0109 13:52:26.045790 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7dc0da4-2c09-43c9-bc81-ff75106ce3c7-config-data\") pod \"b7dc0da4-2c09-43c9-bc81-ff75106ce3c7\" (UID: \"b7dc0da4-2c09-43c9-bc81-ff75106ce3c7\") " Jan 09 13:52:26 crc kubenswrapper[4919]: I0109 13:52:26.045843 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttgmt\" (UniqueName: \"kubernetes.io/projected/b7dc0da4-2c09-43c9-bc81-ff75106ce3c7-kube-api-access-ttgmt\") pod \"b7dc0da4-2c09-43c9-bc81-ff75106ce3c7\" (UID: \"b7dc0da4-2c09-43c9-bc81-ff75106ce3c7\") " Jan 09 13:52:26 crc kubenswrapper[4919]: I0109 13:52:26.057076 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7dc0da4-2c09-43c9-bc81-ff75106ce3c7-scripts" (OuterVolumeSpecName: "scripts") pod "b7dc0da4-2c09-43c9-bc81-ff75106ce3c7" (UID: "b7dc0da4-2c09-43c9-bc81-ff75106ce3c7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:52:26 crc kubenswrapper[4919]: I0109 13:52:26.057097 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7dc0da4-2c09-43c9-bc81-ff75106ce3c7-kube-api-access-ttgmt" (OuterVolumeSpecName: "kube-api-access-ttgmt") pod "b7dc0da4-2c09-43c9-bc81-ff75106ce3c7" (UID: "b7dc0da4-2c09-43c9-bc81-ff75106ce3c7"). InnerVolumeSpecName "kube-api-access-ttgmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:52:26 crc kubenswrapper[4919]: I0109 13:52:26.072876 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7dc0da4-2c09-43c9-bc81-ff75106ce3c7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b7dc0da4-2c09-43c9-bc81-ff75106ce3c7" (UID: "b7dc0da4-2c09-43c9-bc81-ff75106ce3c7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:52:26 crc kubenswrapper[4919]: I0109 13:52:26.080114 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7dc0da4-2c09-43c9-bc81-ff75106ce3c7-config-data" (OuterVolumeSpecName: "config-data") pod "b7dc0da4-2c09-43c9-bc81-ff75106ce3c7" (UID: "b7dc0da4-2c09-43c9-bc81-ff75106ce3c7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:52:26 crc kubenswrapper[4919]: I0109 13:52:26.148015 4919 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7dc0da4-2c09-43c9-bc81-ff75106ce3c7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 13:52:26 crc kubenswrapper[4919]: I0109 13:52:26.148045 4919 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7dc0da4-2c09-43c9-bc81-ff75106ce3c7-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 13:52:26 crc kubenswrapper[4919]: I0109 13:52:26.148055 4919 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7dc0da4-2c09-43c9-bc81-ff75106ce3c7-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 13:52:26 crc kubenswrapper[4919]: I0109 13:52:26.148063 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttgmt\" (UniqueName: \"kubernetes.io/projected/b7dc0da4-2c09-43c9-bc81-ff75106ce3c7-kube-api-access-ttgmt\") on node \"crc\" DevicePath \"\"" Jan 09 13:52:26 crc kubenswrapper[4919]: I0109 13:52:26.582054 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rb9m6" event={"ID":"b7dc0da4-2c09-43c9-bc81-ff75106ce3c7","Type":"ContainerDied","Data":"a02c6ac4cb3264a15aa16269e50ab4d3b018122ccbc1994cfc30e2bb84290ab0"} Jan 09 13:52:26 crc kubenswrapper[4919]: I0109 13:52:26.582380 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a02c6ac4cb3264a15aa16269e50ab4d3b018122ccbc1994cfc30e2bb84290ab0" Jan 09 13:52:26 crc kubenswrapper[4919]: I0109 13:52:26.582105 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rb9m6" Jan 09 13:52:26 crc kubenswrapper[4919]: I0109 13:52:26.675039 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 09 13:52:26 crc kubenswrapper[4919]: E0109 13:52:26.675692 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7dc0da4-2c09-43c9-bc81-ff75106ce3c7" containerName="nova-cell0-conductor-db-sync" Jan 09 13:52:26 crc kubenswrapper[4919]: I0109 13:52:26.675713 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7dc0da4-2c09-43c9-bc81-ff75106ce3c7" containerName="nova-cell0-conductor-db-sync" Jan 09 13:52:26 crc kubenswrapper[4919]: I0109 13:52:26.675954 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7dc0da4-2c09-43c9-bc81-ff75106ce3c7" containerName="nova-cell0-conductor-db-sync" Jan 09 13:52:26 crc kubenswrapper[4919]: I0109 13:52:26.676798 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 09 13:52:26 crc kubenswrapper[4919]: I0109 13:52:26.678704 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 09 13:52:26 crc kubenswrapper[4919]: I0109 13:52:26.678836 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-tp6xs" Jan 09 13:52:26 crc kubenswrapper[4919]: I0109 13:52:26.691258 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 09 13:52:26 crc kubenswrapper[4919]: I0109 13:52:26.861862 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b339d912-b884-4fd0-8b93-c21c2b6ce58c-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"b339d912-b884-4fd0-8b93-c21c2b6ce58c\") " pod="openstack/nova-cell0-conductor-0" Jan 09 13:52:26 crc kubenswrapper[4919]: I0109 13:52:26.862104 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b339d912-b884-4fd0-8b93-c21c2b6ce58c-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"b339d912-b884-4fd0-8b93-c21c2b6ce58c\") " pod="openstack/nova-cell0-conductor-0" Jan 09 13:52:26 crc kubenswrapper[4919]: I0109 13:52:26.862553 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbws9\" (UniqueName: \"kubernetes.io/projected/b339d912-b884-4fd0-8b93-c21c2b6ce58c-kube-api-access-dbws9\") pod \"nova-cell0-conductor-0\" (UID: \"b339d912-b884-4fd0-8b93-c21c2b6ce58c\") " pod="openstack/nova-cell0-conductor-0" Jan 09 13:52:26 crc kubenswrapper[4919]: I0109 13:52:26.964789 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbws9\" (UniqueName: \"kubernetes.io/projected/b339d912-b884-4fd0-8b93-c21c2b6ce58c-kube-api-access-dbws9\") pod \"nova-cell0-conductor-0\" (UID: \"b339d912-b884-4fd0-8b93-c21c2b6ce58c\") " pod="openstack/nova-cell0-conductor-0" Jan 09 13:52:26 crc kubenswrapper[4919]: I0109 13:52:26.964892 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b339d912-b884-4fd0-8b93-c21c2b6ce58c-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"b339d912-b884-4fd0-8b93-c21c2b6ce58c\") " pod="openstack/nova-cell0-conductor-0" Jan 09 13:52:26 crc kubenswrapper[4919]: I0109 13:52:26.964972 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b339d912-b884-4fd0-8b93-c21c2b6ce58c-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"b339d912-b884-4fd0-8b93-c21c2b6ce58c\") " pod="openstack/nova-cell0-conductor-0" Jan 09 13:52:26 crc kubenswrapper[4919]: I0109 13:52:26.969778 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b339d912-b884-4fd0-8b93-c21c2b6ce58c-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"b339d912-b884-4fd0-8b93-c21c2b6ce58c\") " pod="openstack/nova-cell0-conductor-0" Jan 09 13:52:26 crc kubenswrapper[4919]: I0109 13:52:26.977488 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b339d912-b884-4fd0-8b93-c21c2b6ce58c-config-data\") pod \"nova-cell0-conductor-0\" 
(UID: \"b339d912-b884-4fd0-8b93-c21c2b6ce58c\") " pod="openstack/nova-cell0-conductor-0" Jan 09 13:52:26 crc kubenswrapper[4919]: I0109 13:52:26.989623 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbws9\" (UniqueName: \"kubernetes.io/projected/b339d912-b884-4fd0-8b93-c21c2b6ce58c-kube-api-access-dbws9\") pod \"nova-cell0-conductor-0\" (UID: \"b339d912-b884-4fd0-8b93-c21c2b6ce58c\") " pod="openstack/nova-cell0-conductor-0" Jan 09 13:52:27 crc kubenswrapper[4919]: I0109 13:52:27.032391 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 09 13:52:27 crc kubenswrapper[4919]: I0109 13:52:27.550614 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 09 13:52:27 crc kubenswrapper[4919]: I0109 13:52:27.592084 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"b339d912-b884-4fd0-8b93-c21c2b6ce58c","Type":"ContainerStarted","Data":"e2c521ff839555343031bc3198793714eca3d4d2e23e6430b4e39dd6946f3d29"} Jan 09 13:52:28 crc kubenswrapper[4919]: I0109 13:52:28.603301 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"b339d912-b884-4fd0-8b93-c21c2b6ce58c","Type":"ContainerStarted","Data":"9ac25fb34fb16d72629196a7ae9b5080c92a06c087a436778bd5fe8a55f942d5"} Jan 09 13:52:28 crc kubenswrapper[4919]: I0109 13:52:28.603532 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 09 13:52:28 crc kubenswrapper[4919]: I0109 13:52:28.624588 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.624565183 podStartE2EDuration="2.624565183s" podCreationTimestamp="2026-01-09 13:52:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:52:28.617098957 +0000 UTC m=+1328.164938447" watchObservedRunningTime="2026-01-09 13:52:28.624565183 +0000 UTC m=+1328.172404633" Jan 09 13:52:32 crc kubenswrapper[4919]: I0109 13:52:32.065301 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 09 13:52:32 crc kubenswrapper[4919]: I0109 13:52:32.605866 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-jtq7r"] Jan 09 13:52:32 crc kubenswrapper[4919]: I0109 13:52:32.607261 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-jtq7r" Jan 09 13:52:32 crc kubenswrapper[4919]: I0109 13:52:32.610163 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 09 13:52:32 crc kubenswrapper[4919]: I0109 13:52:32.610228 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 09 13:52:32 crc kubenswrapper[4919]: I0109 13:52:32.616667 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-jtq7r"] Jan 09 13:52:32 crc kubenswrapper[4919]: I0109 13:52:32.774246 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcc5r\" (UniqueName: \"kubernetes.io/projected/8c75bc15-846d-4551-9f91-8d16579b5e82-kube-api-access-xcc5r\") pod \"nova-cell0-cell-mapping-jtq7r\" (UID: \"8c75bc15-846d-4551-9f91-8d16579b5e82\") " pod="openstack/nova-cell0-cell-mapping-jtq7r" Jan 09 13:52:32 crc kubenswrapper[4919]: I0109 13:52:32.774636 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c75bc15-846d-4551-9f91-8d16579b5e82-scripts\") pod \"nova-cell0-cell-mapping-jtq7r\" (UID: \"8c75bc15-846d-4551-9f91-8d16579b5e82\") " pod="openstack/nova-cell0-cell-mapping-jtq7r" Jan 09 13:52:32 crc kubenswrapper[4919]: I0109 13:52:32.774711 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c75bc15-846d-4551-9f91-8d16579b5e82-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-jtq7r\" (UID: \"8c75bc15-846d-4551-9f91-8d16579b5e82\") " pod="openstack/nova-cell0-cell-mapping-jtq7r" Jan 09 13:52:32 crc kubenswrapper[4919]: I0109 13:52:32.774868 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c75bc15-846d-4551-9f91-8d16579b5e82-config-data\") pod \"nova-cell0-cell-mapping-jtq7r\" (UID: \"8c75bc15-846d-4551-9f91-8d16579b5e82\") " pod="openstack/nova-cell0-cell-mapping-jtq7r" Jan 09 13:52:32 crc kubenswrapper[4919]: I0109 13:52:32.797264 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 09 13:52:32 crc kubenswrapper[4919]: I0109 13:52:32.803337 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 09 13:52:32 crc kubenswrapper[4919]: I0109 13:52:32.815559 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 09 13:52:32 crc kubenswrapper[4919]: I0109 13:52:32.820315 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 09 13:52:32 crc kubenswrapper[4919]: I0109 13:52:32.870282 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 09 13:52:32 crc kubenswrapper[4919]: I0109 13:52:32.871988 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 09 13:52:32 crc kubenswrapper[4919]: I0109 13:52:32.876540 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d743b55e-cd0c-4fae-9252-0b7fdba935cb-logs\") pod \"nova-api-0\" (UID: \"d743b55e-cd0c-4fae-9252-0b7fdba935cb\") " pod="openstack/nova-api-0" Jan 09 13:52:32 crc kubenswrapper[4919]: I0109 13:52:32.876593 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c75bc15-846d-4551-9f91-8d16579b5e82-config-data\") pod \"nova-cell0-cell-mapping-jtq7r\" (UID: \"8c75bc15-846d-4551-9f91-8d16579b5e82\") " pod="openstack/nova-cell0-cell-mapping-jtq7r" Jan 09 13:52:32 crc kubenswrapper[4919]: I0109 13:52:32.876625 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d743b55e-cd0c-4fae-9252-0b7fdba935cb-config-data\") pod \"nova-api-0\" (UID: \"d743b55e-cd0c-4fae-9252-0b7fdba935cb\") " pod="openstack/nova-api-0" Jan 09 13:52:32 crc kubenswrapper[4919]: I0109 13:52:32.876688 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcc5r\" (UniqueName: \"kubernetes.io/projected/8c75bc15-846d-4551-9f91-8d16579b5e82-kube-api-access-xcc5r\") pod \"nova-cell0-cell-mapping-jtq7r\" (UID: \"8c75bc15-846d-4551-9f91-8d16579b5e82\") " pod="openstack/nova-cell0-cell-mapping-jtq7r" Jan 09 13:52:32 crc kubenswrapper[4919]: I0109 13:52:32.876739 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qdrm\" (UniqueName: \"kubernetes.io/projected/d743b55e-cd0c-4fae-9252-0b7fdba935cb-kube-api-access-2qdrm\") pod \"nova-api-0\" (UID: \"d743b55e-cd0c-4fae-9252-0b7fdba935cb\") " pod="openstack/nova-api-0" Jan 09 13:52:32 crc kubenswrapper[4919]: I0109 13:52:32.876795 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c75bc15-846d-4551-9f91-8d16579b5e82-scripts\") pod \"nova-cell0-cell-mapping-jtq7r\" (UID: \"8c75bc15-846d-4551-9f91-8d16579b5e82\") " pod="openstack/nova-cell0-cell-mapping-jtq7r" Jan 09 13:52:32 crc kubenswrapper[4919]: I0109 13:52:32.876949 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c75bc15-846d-4551-9f91-8d16579b5e82-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-jtq7r\" (UID: \"8c75bc15-846d-4551-9f91-8d16579b5e82\") " pod="openstack/nova-cell0-cell-mapping-jtq7r" Jan 09 13:52:32 crc kubenswrapper[4919]: I0109 13:52:32.877060 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d743b55e-cd0c-4fae-9252-0b7fdba935cb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d743b55e-cd0c-4fae-9252-0b7fdba935cb\") " pod="openstack/nova-api-0" Jan 09 13:52:32 crc kubenswrapper[4919]: I0109 13:52:32.879969 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 13:52:32 crc kubenswrapper[4919]: I0109 13:52:32.883500 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 09 13:52:32 crc kubenswrapper[4919]: I0109 13:52:32.885843 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c75bc15-846d-4551-9f91-8d16579b5e82-config-data\") pod \"nova-cell0-cell-mapping-jtq7r\" (UID: \"8c75bc15-846d-4551-9f91-8d16579b5e82\") " pod="openstack/nova-cell0-cell-mapping-jtq7r" Jan 09 13:52:32 crc kubenswrapper[4919]: I0109 13:52:32.887800 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c75bc15-846d-4551-9f91-8d16579b5e82-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-jtq7r\" (UID: \"8c75bc15-846d-4551-9f91-8d16579b5e82\") " pod="openstack/nova-cell0-cell-mapping-jtq7r" Jan 09 13:52:32 crc kubenswrapper[4919]: I0109 13:52:32.903174 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 09 13:52:32 crc kubenswrapper[4919]: I0109 13:52:32.917816 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c75bc15-846d-4551-9f91-8d16579b5e82-scripts\") pod \"nova-cell0-cell-mapping-jtq7r\" (UID: \"8c75bc15-846d-4551-9f91-8d16579b5e82\") " pod="openstack/nova-cell0-cell-mapping-jtq7r" Jan 09 13:52:32 crc kubenswrapper[4919]: I0109 13:52:32.945138 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 13:52:32 crc kubenswrapper[4919]: I0109 13:52:32.958970 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 09 13:52:32 crc kubenswrapper[4919]: I0109 13:52:32.971356 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcc5r\" (UniqueName: \"kubernetes.io/projected/8c75bc15-846d-4551-9f91-8d16579b5e82-kube-api-access-xcc5r\") pod \"nova-cell0-cell-mapping-jtq7r\" (UID: \"8c75bc15-846d-4551-9f91-8d16579b5e82\") " pod="openstack/nova-cell0-cell-mapping-jtq7r" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.000050 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrgc4\" (UniqueName: \"kubernetes.io/projected/8cacdea9-e934-4647-b39b-073c88c9b5a8-kube-api-access-xrgc4\") pod \"nova-scheduler-0\" (UID: \"8cacdea9-e934-4647-b39b-073c88c9b5a8\") " pod="openstack/nova-scheduler-0" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.003463 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cacdea9-e934-4647-b39b-073c88c9b5a8-config-data\") pod \"nova-scheduler-0\" (UID: \"8cacdea9-e934-4647-b39b-073c88c9b5a8\") " pod="openstack/nova-scheduler-0" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.010692 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d743b55e-cd0c-4fae-9252-0b7fdba935cb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d743b55e-cd0c-4fae-9252-0b7fdba935cb\") " pod="openstack/nova-api-0" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.010898 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75df1d9d-7b8e-4f45-8bab-7840748eff4a-logs\") pod \"nova-metadata-0\" (UID: \"75df1d9d-7b8e-4f45-8bab-7840748eff4a\") " 
pod="openstack/nova-metadata-0" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.011103 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jztpb\" (UniqueName: \"kubernetes.io/projected/75df1d9d-7b8e-4f45-8bab-7840748eff4a-kube-api-access-jztpb\") pod \"nova-metadata-0\" (UID: \"75df1d9d-7b8e-4f45-8bab-7840748eff4a\") " pod="openstack/nova-metadata-0" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.011285 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.011372 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d743b55e-cd0c-4fae-9252-0b7fdba935cb-logs\") pod \"nova-api-0\" (UID: \"d743b55e-cd0c-4fae-9252-0b7fdba935cb\") " pod="openstack/nova-api-0" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.011458 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d743b55e-cd0c-4fae-9252-0b7fdba935cb-config-data\") pod \"nova-api-0\" (UID: \"d743b55e-cd0c-4fae-9252-0b7fdba935cb\") " pod="openstack/nova-api-0" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.011551 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75df1d9d-7b8e-4f45-8bab-7840748eff4a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"75df1d9d-7b8e-4f45-8bab-7840748eff4a\") " pod="openstack/nova-metadata-0" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.011605 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cacdea9-e934-4647-b39b-073c88c9b5a8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8cacdea9-e934-4647-b39b-073c88c9b5a8\") " pod="openstack/nova-scheduler-0" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.011660 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75df1d9d-7b8e-4f45-8bab-7840748eff4a-config-data\") pod \"nova-metadata-0\" (UID: \"75df1d9d-7b8e-4f45-8bab-7840748eff4a\") " pod="openstack/nova-metadata-0" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.011700 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qdrm\" (UniqueName: \"kubernetes.io/projected/d743b55e-cd0c-4fae-9252-0b7fdba935cb-kube-api-access-2qdrm\") pod \"nova-api-0\" (UID: \"d743b55e-cd0c-4fae-9252-0b7fdba935cb\") " pod="openstack/nova-api-0" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.011922 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d743b55e-cd0c-4fae-9252-0b7fdba935cb-logs\") pod \"nova-api-0\" (UID: \"d743b55e-cd0c-4fae-9252-0b7fdba935cb\") " pod="openstack/nova-api-0" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.029150 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d743b55e-cd0c-4fae-9252-0b7fdba935cb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d743b55e-cd0c-4fae-9252-0b7fdba935cb\") " pod="openstack/nova-api-0" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.029522 4919 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d743b55e-cd0c-4fae-9252-0b7fdba935cb-config-data\") pod \"nova-api-0\" (UID: \"d743b55e-cd0c-4fae-9252-0b7fdba935cb\") " pod="openstack/nova-api-0" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.057963 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qdrm\" (UniqueName: \"kubernetes.io/projected/d743b55e-cd0c-4fae-9252-0b7fdba935cb-kube-api-access-2qdrm\") pod \"nova-api-0\" (UID: \"d743b55e-cd0c-4fae-9252-0b7fdba935cb\") " pod="openstack/nova-api-0" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.115348 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrgc4\" (UniqueName: \"kubernetes.io/projected/8cacdea9-e934-4647-b39b-073c88c9b5a8-kube-api-access-xrgc4\") pod \"nova-scheduler-0\" (UID: \"8cacdea9-e934-4647-b39b-073c88c9b5a8\") " pod="openstack/nova-scheduler-0" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.115402 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cacdea9-e934-4647-b39b-073c88c9b5a8-config-data\") pod \"nova-scheduler-0\" (UID: \"8cacdea9-e934-4647-b39b-073c88c9b5a8\") " pod="openstack/nova-scheduler-0" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.115452 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75df1d9d-7b8e-4f45-8bab-7840748eff4a-logs\") pod \"nova-metadata-0\" (UID: \"75df1d9d-7b8e-4f45-8bab-7840748eff4a\") " pod="openstack/nova-metadata-0" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.115486 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jztpb\" (UniqueName: \"kubernetes.io/projected/75df1d9d-7b8e-4f45-8bab-7840748eff4a-kube-api-access-jztpb\") pod \"nova-metadata-0\" (UID: \"75df1d9d-7b8e-4f45-8bab-7840748eff4a\") " pod="openstack/nova-metadata-0" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.115547 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75df1d9d-7b8e-4f45-8bab-7840748eff4a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"75df1d9d-7b8e-4f45-8bab-7840748eff4a\") " pod="openstack/nova-metadata-0" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.115568 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cacdea9-e934-4647-b39b-073c88c9b5a8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8cacdea9-e934-4647-b39b-073c88c9b5a8\") " pod="openstack/nova-scheduler-0" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.115589 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75df1d9d-7b8e-4f45-8bab-7840748eff4a-config-data\") pod \"nova-metadata-0\" (UID: \"75df1d9d-7b8e-4f45-8bab-7840748eff4a\") " pod="openstack/nova-metadata-0" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.117964 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75df1d9d-7b8e-4f45-8bab-7840748eff4a-logs\") pod \"nova-metadata-0\" (UID: \"75df1d9d-7b8e-4f45-8bab-7840748eff4a\") " pod="openstack/nova-metadata-0" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.125837 4919 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.154989 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75df1d9d-7b8e-4f45-8bab-7840748eff4a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"75df1d9d-7b8e-4f45-8bab-7840748eff4a\") " pod="openstack/nova-metadata-0" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.162072 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75df1d9d-7b8e-4f45-8bab-7840748eff4a-config-data\") pod \"nova-metadata-0\" (UID: \"75df1d9d-7b8e-4f45-8bab-7840748eff4a\") " pod="openstack/nova-metadata-0" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.165892 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cacdea9-e934-4647-b39b-073c88c9b5a8-config-data\") pod \"nova-scheduler-0\" (UID: \"8cacdea9-e934-4647-b39b-073c88c9b5a8\") " pod="openstack/nova-scheduler-0" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.175619 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrgc4\" (UniqueName: \"kubernetes.io/projected/8cacdea9-e934-4647-b39b-073c88c9b5a8-kube-api-access-xrgc4\") pod \"nova-scheduler-0\" (UID: \"8cacdea9-e934-4647-b39b-073c88c9b5a8\") " pod="openstack/nova-scheduler-0" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.176866 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cacdea9-e934-4647-b39b-073c88c9b5a8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8cacdea9-e934-4647-b39b-073c88c9b5a8\") " pod="openstack/nova-scheduler-0" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.179907 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-8qq6l"] Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.181799 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-647df7b8c5-8qq6l" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.189556 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jztpb\" (UniqueName: \"kubernetes.io/projected/75df1d9d-7b8e-4f45-8bab-7840748eff4a-kube-api-access-jztpb\") pod \"nova-metadata-0\" (UID: \"75df1d9d-7b8e-4f45-8bab-7840748eff4a\") " pod="openstack/nova-metadata-0" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.242728 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-jtq7r" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.255285 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.263243 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.275642 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.279863 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.326188 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/966e8c47-7429-4435-87ce-44cc8af93cea-ovsdbserver-nb\") pod \"dnsmasq-dns-647df7b8c5-8qq6l\" (UID: \"966e8c47-7429-4435-87ce-44cc8af93cea\") " pod="openstack/dnsmasq-dns-647df7b8c5-8qq6l" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.326286 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/966e8c47-7429-4435-87ce-44cc8af93cea-dns-svc\") pod \"dnsmasq-dns-647df7b8c5-8qq6l\" (UID: \"966e8c47-7429-4435-87ce-44cc8af93cea\") " pod="openstack/dnsmasq-dns-647df7b8c5-8qq6l" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.333340 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/966e8c47-7429-4435-87ce-44cc8af93cea-config\") pod \"dnsmasq-dns-647df7b8c5-8qq6l\" (UID: \"966e8c47-7429-4435-87ce-44cc8af93cea\") " pod="openstack/dnsmasq-dns-647df7b8c5-8qq6l" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.334786 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/966e8c47-7429-4435-87ce-44cc8af93cea-ovsdbserver-sb\") pod \"dnsmasq-dns-647df7b8c5-8qq6l\" (UID: \"966e8c47-7429-4435-87ce-44cc8af93cea\") " pod="openstack/dnsmasq-dns-647df7b8c5-8qq6l" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.334827 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2tbb\" (UniqueName: \"kubernetes.io/projected/966e8c47-7429-4435-87ce-44cc8af93cea-kube-api-access-r2tbb\") pod \"dnsmasq-dns-647df7b8c5-8qq6l\" (UID: \"966e8c47-7429-4435-87ce-44cc8af93cea\") " pod="openstack/dnsmasq-dns-647df7b8c5-8qq6l" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.334906 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/966e8c47-7429-4435-87ce-44cc8af93cea-dns-swift-storage-0\") pod \"dnsmasq-dns-647df7b8c5-8qq6l\" (UID: \"966e8c47-7429-4435-87ce-44cc8af93cea\") " pod="openstack/dnsmasq-dns-647df7b8c5-8qq6l" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.347485 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-8qq6l"] Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.359853 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.428913 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.441393 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1594740a-2816-4a2d-81f0-d19d66a6a910-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1594740a-2816-4a2d-81f0-d19d66a6a910\") " pod="openstack/nova-cell1-novncproxy-0" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.441476 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/966e8c47-7429-4435-87ce-44cc8af93cea-ovsdbserver-sb\") pod \"dnsmasq-dns-647df7b8c5-8qq6l\" (UID: \"966e8c47-7429-4435-87ce-44cc8af93cea\") " pod="openstack/dnsmasq-dns-647df7b8c5-8qq6l" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.441497 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2tbb\" (UniqueName: \"kubernetes.io/projected/966e8c47-7429-4435-87ce-44cc8af93cea-kube-api-access-r2tbb\") pod \"dnsmasq-dns-647df7b8c5-8qq6l\" (UID: \"966e8c47-7429-4435-87ce-44cc8af93cea\") " pod="openstack/dnsmasq-dns-647df7b8c5-8qq6l" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.441531 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/966e8c47-7429-4435-87ce-44cc8af93cea-dns-swift-storage-0\") pod \"dnsmasq-dns-647df7b8c5-8qq6l\" (UID: \"966e8c47-7429-4435-87ce-44cc8af93cea\") " pod="openstack/dnsmasq-dns-647df7b8c5-8qq6l" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.441564 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1594740a-2816-4a2d-81f0-d19d66a6a910-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1594740a-2816-4a2d-81f0-d19d66a6a910\") " pod="openstack/nova-cell1-novncproxy-0" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.441585 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkxqh\" (UniqueName: \"kubernetes.io/projected/1594740a-2816-4a2d-81f0-d19d66a6a910-kube-api-access-xkxqh\") pod \"nova-cell1-novncproxy-0\" (UID: \"1594740a-2816-4a2d-81f0-d19d66a6a910\") " pod="openstack/nova-cell1-novncproxy-0" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.441611 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/966e8c47-7429-4435-87ce-44cc8af93cea-ovsdbserver-nb\") pod \"dnsmasq-dns-647df7b8c5-8qq6l\" (UID: \"966e8c47-7429-4435-87ce-44cc8af93cea\") " pod="openstack/dnsmasq-dns-647df7b8c5-8qq6l" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.441631 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/966e8c47-7429-4435-87ce-44cc8af93cea-dns-svc\") pod \"dnsmasq-dns-647df7b8c5-8qq6l\" (UID: \"966e8c47-7429-4435-87ce-44cc8af93cea\") " pod="openstack/dnsmasq-dns-647df7b8c5-8qq6l" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.441659 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/966e8c47-7429-4435-87ce-44cc8af93cea-config\") pod \"dnsmasq-dns-647df7b8c5-8qq6l\" (UID: \"966e8c47-7429-4435-87ce-44cc8af93cea\") " 
pod="openstack/dnsmasq-dns-647df7b8c5-8qq6l" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.442921 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/966e8c47-7429-4435-87ce-44cc8af93cea-config\") pod \"dnsmasq-dns-647df7b8c5-8qq6l\" (UID: \"966e8c47-7429-4435-87ce-44cc8af93cea\") " pod="openstack/dnsmasq-dns-647df7b8c5-8qq6l" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.443013 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/966e8c47-7429-4435-87ce-44cc8af93cea-dns-swift-storage-0\") pod \"dnsmasq-dns-647df7b8c5-8qq6l\" (UID: \"966e8c47-7429-4435-87ce-44cc8af93cea\") " pod="openstack/dnsmasq-dns-647df7b8c5-8qq6l" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.444694 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/966e8c47-7429-4435-87ce-44cc8af93cea-ovsdbserver-nb\") pod \"dnsmasq-dns-647df7b8c5-8qq6l\" (UID: \"966e8c47-7429-4435-87ce-44cc8af93cea\") " pod="openstack/dnsmasq-dns-647df7b8c5-8qq6l" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.445009 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/966e8c47-7429-4435-87ce-44cc8af93cea-ovsdbserver-sb\") pod \"dnsmasq-dns-647df7b8c5-8qq6l\" (UID: \"966e8c47-7429-4435-87ce-44cc8af93cea\") " pod="openstack/dnsmasq-dns-647df7b8c5-8qq6l" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.445654 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/966e8c47-7429-4435-87ce-44cc8af93cea-dns-svc\") pod \"dnsmasq-dns-647df7b8c5-8qq6l\" (UID: \"966e8c47-7429-4435-87ce-44cc8af93cea\") " pod="openstack/dnsmasq-dns-647df7b8c5-8qq6l" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.481835 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2tbb\" (UniqueName: \"kubernetes.io/projected/966e8c47-7429-4435-87ce-44cc8af93cea-kube-api-access-r2tbb\") pod \"dnsmasq-dns-647df7b8c5-8qq6l\" (UID: \"966e8c47-7429-4435-87ce-44cc8af93cea\") " pod="openstack/dnsmasq-dns-647df7b8c5-8qq6l" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.548585 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1594740a-2816-4a2d-81f0-d19d66a6a910-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1594740a-2816-4a2d-81f0-d19d66a6a910\") " pod="openstack/nova-cell1-novncproxy-0" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.548872 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkxqh\" (UniqueName: \"kubernetes.io/projected/1594740a-2816-4a2d-81f0-d19d66a6a910-kube-api-access-xkxqh\") pod \"nova-cell1-novncproxy-0\" (UID: \"1594740a-2816-4a2d-81f0-d19d66a6a910\") " pod="openstack/nova-cell1-novncproxy-0" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.548944 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1594740a-2816-4a2d-81f0-d19d66a6a910-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1594740a-2816-4a2d-81f0-d19d66a6a910\") " pod="openstack/nova-cell1-novncproxy-0" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.556980 4919 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1594740a-2816-4a2d-81f0-d19d66a6a910-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1594740a-2816-4a2d-81f0-d19d66a6a910\") " pod="openstack/nova-cell1-novncproxy-0" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.562149 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1594740a-2816-4a2d-81f0-d19d66a6a910-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1594740a-2816-4a2d-81f0-d19d66a6a910\") " pod="openstack/nova-cell1-novncproxy-0" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.584150 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkxqh\" (UniqueName: \"kubernetes.io/projected/1594740a-2816-4a2d-81f0-d19d66a6a910-kube-api-access-xkxqh\") pod \"nova-cell1-novncproxy-0\" (UID: \"1594740a-2816-4a2d-81f0-d19d66a6a910\") " pod="openstack/nova-cell1-novncproxy-0" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.649011 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.654473 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-647df7b8c5-8qq6l" Jan 09 13:52:33 crc kubenswrapper[4919]: I0109 13:52:33.973135 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 09 13:52:34 crc kubenswrapper[4919]: I0109 13:52:34.032223 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-jtq7r"] Jan 09 13:52:34 crc kubenswrapper[4919]: I0109 13:52:34.092098 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 13:52:34 crc kubenswrapper[4919]: I0109 13:52:34.119411 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 13:52:34 crc kubenswrapper[4919]: W0109 13:52:34.123711 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8cacdea9_e934_4647_b39b_073c88c9b5a8.slice/crio-a5f9cbc7a13f12bbe58b6cb5681b2b32647bd4d8a4e65d6aa088221ebf223fed WatchSource:0}: Error finding container a5f9cbc7a13f12bbe58b6cb5681b2b32647bd4d8a4e65d6aa088221ebf223fed: Status 404 returned error can't find the container with id a5f9cbc7a13f12bbe58b6cb5681b2b32647bd4d8a4e65d6aa088221ebf223fed Jan 09 13:52:34 crc kubenswrapper[4919]: I0109 13:52:34.166255 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-dflxc"] Jan 09 13:52:34 crc kubenswrapper[4919]: I0109 13:52:34.167680 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-dflxc" Jan 09 13:52:34 crc kubenswrapper[4919]: I0109 13:52:34.169947 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 09 13:52:34 crc kubenswrapper[4919]: I0109 13:52:34.171196 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 09 13:52:34 crc kubenswrapper[4919]: I0109 13:52:34.182544 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-dflxc"] Jan 09 13:52:34 crc kubenswrapper[4919]: I0109 13:52:34.274182 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 09 13:52:34 crc kubenswrapper[4919]: I0109 13:52:34.283618 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f28cs\" (UniqueName: \"kubernetes.io/projected/01ba833a-4cf7-4caf-8d94-efc794319d9a-kube-api-access-f28cs\") pod \"nova-cell1-conductor-db-sync-dflxc\" (UID: \"01ba833a-4cf7-4caf-8d94-efc794319d9a\") " pod="openstack/nova-cell1-conductor-db-sync-dflxc" Jan 09 13:52:34 crc kubenswrapper[4919]: I0109 13:52:34.283707 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01ba833a-4cf7-4caf-8d94-efc794319d9a-scripts\") pod \"nova-cell1-conductor-db-sync-dflxc\" (UID: \"01ba833a-4cf7-4caf-8d94-efc794319d9a\") " pod="openstack/nova-cell1-conductor-db-sync-dflxc" Jan 09 13:52:34 crc kubenswrapper[4919]: I0109 13:52:34.283750 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01ba833a-4cf7-4caf-8d94-efc794319d9a-config-data\") pod \"nova-cell1-conductor-db-sync-dflxc\" (UID: \"01ba833a-4cf7-4caf-8d94-efc794319d9a\") " pod="openstack/nova-cell1-conductor-db-sync-dflxc" Jan 09 13:52:34 crc kubenswrapper[4919]: I0109 13:52:34.283841 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01ba833a-4cf7-4caf-8d94-efc794319d9a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-dflxc\" (UID: \"01ba833a-4cf7-4caf-8d94-efc794319d9a\") " pod="openstack/nova-cell1-conductor-db-sync-dflxc" Jan 09 13:52:34 crc kubenswrapper[4919]: W0109 13:52:34.285052 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1594740a_2816_4a2d_81f0_d19d66a6a910.slice/crio-b89c890956a46540d8f4d6429f5a4a70d41c022038101629f99f6588039127b4 WatchSource:0}: Error finding container b89c890956a46540d8f4d6429f5a4a70d41c022038101629f99f6588039127b4: Status 404 returned error can't find the container with id b89c890956a46540d8f4d6429f5a4a70d41c022038101629f99f6588039127b4 Jan 09 13:52:34 crc kubenswrapper[4919]: I0109 13:52:34.293880 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-8qq6l"] Jan 09 13:52:34 crc kubenswrapper[4919]: W0109 13:52:34.295591 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod966e8c47_7429_4435_87ce_44cc8af93cea.slice/crio-c25897274bcba55679fe1d8bb28cf4b848606ec47fd640ccbeb8ed7eefb48239 WatchSource:0}: Error finding container c25897274bcba55679fe1d8bb28cf4b848606ec47fd640ccbeb8ed7eefb48239: 
Status 404 returned error can't find the container with id c25897274bcba55679fe1d8bb28cf4b848606ec47fd640ccbeb8ed7eefb48239 Jan 09 13:52:34 crc kubenswrapper[4919]: I0109 13:52:34.384603 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f28cs\" (UniqueName: \"kubernetes.io/projected/01ba833a-4cf7-4caf-8d94-efc794319d9a-kube-api-access-f28cs\") pod \"nova-cell1-conductor-db-sync-dflxc\" (UID: \"01ba833a-4cf7-4caf-8d94-efc794319d9a\") " pod="openstack/nova-cell1-conductor-db-sync-dflxc" Jan 09 13:52:34 crc kubenswrapper[4919]: I0109 13:52:34.384998 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01ba833a-4cf7-4caf-8d94-efc794319d9a-scripts\") pod \"nova-cell1-conductor-db-sync-dflxc\" (UID: \"01ba833a-4cf7-4caf-8d94-efc794319d9a\") " pod="openstack/nova-cell1-conductor-db-sync-dflxc" Jan 09 13:52:34 crc kubenswrapper[4919]: I0109 13:52:34.385068 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01ba833a-4cf7-4caf-8d94-efc794319d9a-config-data\") pod \"nova-cell1-conductor-db-sync-dflxc\" (UID: \"01ba833a-4cf7-4caf-8d94-efc794319d9a\") " pod="openstack/nova-cell1-conductor-db-sync-dflxc" Jan 09 13:52:34 crc kubenswrapper[4919]: I0109 13:52:34.385586 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01ba833a-4cf7-4caf-8d94-efc794319d9a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-dflxc\" (UID: \"01ba833a-4cf7-4caf-8d94-efc794319d9a\") " pod="openstack/nova-cell1-conductor-db-sync-dflxc" Jan 09 13:52:34 crc kubenswrapper[4919]: I0109 13:52:34.389197 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01ba833a-4cf7-4caf-8d94-efc794319d9a-scripts\") pod \"nova-cell1-conductor-db-sync-dflxc\" (UID: \"01ba833a-4cf7-4caf-8d94-efc794319d9a\") " pod="openstack/nova-cell1-conductor-db-sync-dflxc" Jan 09 13:52:34 crc kubenswrapper[4919]: I0109 13:52:34.390694 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01ba833a-4cf7-4caf-8d94-efc794319d9a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-dflxc\" (UID: \"01ba833a-4cf7-4caf-8d94-efc794319d9a\") " pod="openstack/nova-cell1-conductor-db-sync-dflxc" Jan 09 13:52:34 crc kubenswrapper[4919]: I0109 13:52:34.394799 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01ba833a-4cf7-4caf-8d94-efc794319d9a-config-data\") pod \"nova-cell1-conductor-db-sync-dflxc\" (UID: \"01ba833a-4cf7-4caf-8d94-efc794319d9a\") " pod="openstack/nova-cell1-conductor-db-sync-dflxc" Jan 09 13:52:34 crc kubenswrapper[4919]: I0109 13:52:34.409881 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f28cs\" (UniqueName: \"kubernetes.io/projected/01ba833a-4cf7-4caf-8d94-efc794319d9a-kube-api-access-f28cs\") pod \"nova-cell1-conductor-db-sync-dflxc\" (UID: \"01ba833a-4cf7-4caf-8d94-efc794319d9a\") " pod="openstack/nova-cell1-conductor-db-sync-dflxc" Jan 09 13:52:34 crc kubenswrapper[4919]: I0109 13:52:34.500929 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-dflxc" Jan 09 13:52:34 crc kubenswrapper[4919]: I0109 13:52:34.741049 4919 generic.go:334] "Generic (PLEG): container finished" podID="966e8c47-7429-4435-87ce-44cc8af93cea" containerID="c5bbe5e8cc3b33e01382a61deac2b6e1e7eb9b6b458d0e098ba33f94d58dca51" exitCode=0 Jan 09 13:52:34 crc kubenswrapper[4919]: I0109 13:52:34.741114 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647df7b8c5-8qq6l" event={"ID":"966e8c47-7429-4435-87ce-44cc8af93cea","Type":"ContainerDied","Data":"c5bbe5e8cc3b33e01382a61deac2b6e1e7eb9b6b458d0e098ba33f94d58dca51"} Jan 09 13:52:34 crc kubenswrapper[4919]: I0109 13:52:34.741144 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647df7b8c5-8qq6l" event={"ID":"966e8c47-7429-4435-87ce-44cc8af93cea","Type":"ContainerStarted","Data":"c25897274bcba55679fe1d8bb28cf4b848606ec47fd640ccbeb8ed7eefb48239"} Jan 09 13:52:34 crc kubenswrapper[4919]: I0109 13:52:34.806379 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8cacdea9-e934-4647-b39b-073c88c9b5a8","Type":"ContainerStarted","Data":"a5f9cbc7a13f12bbe58b6cb5681b2b32647bd4d8a4e65d6aa088221ebf223fed"} Jan 09 13:52:34 crc kubenswrapper[4919]: I0109 13:52:34.806752 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d743b55e-cd0c-4fae-9252-0b7fdba935cb","Type":"ContainerStarted","Data":"a2d1671b39299cbc5a1812203bc42177c5acb13d4dea04bc40fc920d1e419441"} Jan 09 13:52:34 crc kubenswrapper[4919]: I0109 13:52:34.809863 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1594740a-2816-4a2d-81f0-d19d66a6a910","Type":"ContainerStarted","Data":"b89c890956a46540d8f4d6429f5a4a70d41c022038101629f99f6588039127b4"} Jan 09 13:52:34 crc kubenswrapper[4919]: I0109 13:52:34.814378 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"75df1d9d-7b8e-4f45-8bab-7840748eff4a","Type":"ContainerStarted","Data":"f6be2b8b427c48688f43c929875e549c4c0b137a76bc81ea89ee304ae87f68ef"} Jan 09 13:52:34 crc kubenswrapper[4919]: I0109 13:52:34.815664 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-jtq7r" event={"ID":"8c75bc15-846d-4551-9f91-8d16579b5e82","Type":"ContainerStarted","Data":"b9e683a0a8599be712538688894e252bc28eff4016b8ab99c536b7cb06635b68"} Jan 09 13:52:34 crc kubenswrapper[4919]: I0109 13:52:34.815689 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-jtq7r" event={"ID":"8c75bc15-846d-4551-9f91-8d16579b5e82","Type":"ContainerStarted","Data":"dbdfd4105fcc15b5c04560f839f12b6805d251215a181bb91510303727a89716"} Jan 09 13:52:35 crc kubenswrapper[4919]: I0109 13:52:35.270388 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-jtq7r" podStartSLOduration=3.270363977 podStartE2EDuration="3.270363977s" podCreationTimestamp="2026-01-09 13:52:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:52:34.848496887 +0000 UTC m=+1334.396336337" watchObservedRunningTime="2026-01-09 13:52:35.270363977 +0000 UTC m=+1334.818203427" Jan 09 13:52:35 crc kubenswrapper[4919]: I0109 13:52:35.278798 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-dflxc"] Jan 09 
13:52:35 crc kubenswrapper[4919]: W0109 13:52:35.296957 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01ba833a_4cf7_4caf_8d94_efc794319d9a.slice/crio-b869cae772d6aa2f06ed12829426992045a5d08ce0e4dad3ec7b9845ffd3e9d9 WatchSource:0}: Error finding container b869cae772d6aa2f06ed12829426992045a5d08ce0e4dad3ec7b9845ffd3e9d9: Status 404 returned error can't find the container with id b869cae772d6aa2f06ed12829426992045a5d08ce0e4dad3ec7b9845ffd3e9d9 Jan 09 13:52:35 crc kubenswrapper[4919]: I0109 13:52:35.833797 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-dflxc" event={"ID":"01ba833a-4cf7-4caf-8d94-efc794319d9a","Type":"ContainerStarted","Data":"8dbf80694e07f9443f4d3aaf46a5ebefda2d8a6831f24a6f4281ac7e7957ce35"} Jan 09 13:52:35 crc kubenswrapper[4919]: I0109 13:52:35.834022 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-dflxc" event={"ID":"01ba833a-4cf7-4caf-8d94-efc794319d9a","Type":"ContainerStarted","Data":"b869cae772d6aa2f06ed12829426992045a5d08ce0e4dad3ec7b9845ffd3e9d9"} Jan 09 13:52:35 crc kubenswrapper[4919]: I0109 13:52:35.838321 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647df7b8c5-8qq6l" event={"ID":"966e8c47-7429-4435-87ce-44cc8af93cea","Type":"ContainerStarted","Data":"b6cb1d44367919425eea2102b3654b31d3c246b70a01a2fee47786cb03607d8c"} Jan 09 13:52:35 crc kubenswrapper[4919]: I0109 13:52:35.838477 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-647df7b8c5-8qq6l" Jan 09 13:52:35 crc kubenswrapper[4919]: I0109 13:52:35.883951 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-dflxc" podStartSLOduration=1.883924012 podStartE2EDuration="1.883924012s" podCreationTimestamp="2026-01-09 13:52:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:52:35.870144839 +0000 UTC m=+1335.417984279" watchObservedRunningTime="2026-01-09 13:52:35.883924012 +0000 UTC m=+1335.431763462" Jan 09 13:52:36 crc kubenswrapper[4919]: I0109 13:52:36.793880 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 09 13:52:36 crc kubenswrapper[4919]: I0109 13:52:36.822813 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-647df7b8c5-8qq6l" podStartSLOduration=3.822790021 podStartE2EDuration="3.822790021s" podCreationTimestamp="2026-01-09 13:52:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:52:35.918492133 +0000 UTC m=+1335.466331583" watchObservedRunningTime="2026-01-09 13:52:36.822790021 +0000 UTC m=+1336.370629471" Jan 09 13:52:37 crc kubenswrapper[4919]: I0109 13:52:37.156552 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 13:52:37 crc kubenswrapper[4919]: I0109 13:52:37.237358 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 09 13:52:41 crc kubenswrapper[4919]: I0109 13:52:41.143882 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 09 13:52:41 crc kubenswrapper[4919]: I0109 13:52:41.144510 4919 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/kube-state-metrics-0" podUID="6bf2dcbc-c28e-4fd3-81d7-f766225e964d" containerName="kube-state-metrics" containerID="cri-o://4d950ab99da10547fc7ebc3ce465f153a0328a4db84918dec53a7f4c50456878" gracePeriod=30 Jan 09 13:52:41 crc kubenswrapper[4919]: I0109 13:52:41.893959 4919 generic.go:334] "Generic (PLEG): container finished" podID="6bf2dcbc-c28e-4fd3-81d7-f766225e964d" containerID="4d950ab99da10547fc7ebc3ce465f153a0328a4db84918dec53a7f4c50456878" exitCode=2 Jan 09 13:52:41 crc kubenswrapper[4919]: I0109 13:52:41.894321 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6bf2dcbc-c28e-4fd3-81d7-f766225e964d","Type":"ContainerDied","Data":"4d950ab99da10547fc7ebc3ce465f153a0328a4db84918dec53a7f4c50456878"} Jan 09 13:52:41 crc kubenswrapper[4919]: I0109 13:52:41.896824 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1594740a-2816-4a2d-81f0-d19d66a6a910","Type":"ContainerStarted","Data":"f87da19a09b370fb44e4e18e0d34cc3c46a1824fb799b14f70ecf4cf93d692bd"} Jan 09 13:52:41 crc kubenswrapper[4919]: I0109 13:52:41.896981 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="1594740a-2816-4a2d-81f0-d19d66a6a910" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://f87da19a09b370fb44e4e18e0d34cc3c46a1824fb799b14f70ecf4cf93d692bd" gracePeriod=30 Jan 09 13:52:41 crc kubenswrapper[4919]: I0109 13:52:41.899831 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"75df1d9d-7b8e-4f45-8bab-7840748eff4a","Type":"ContainerStarted","Data":"f0fc22cf5a44c03f0d484b7968c47340d4fa6251f32377390b4c8bb4a74c0fee"} Jan 09 13:52:41 crc kubenswrapper[4919]: I0109 13:52:41.899881 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"75df1d9d-7b8e-4f45-8bab-7840748eff4a","Type":"ContainerStarted","Data":"92321d6d5c26b9ebcf1fbb3e5581bfee5fbcbd9adc8d1c4fff8f29d03b97c019"} Jan 09 13:52:41 crc kubenswrapper[4919]: I0109 13:52:41.900098 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="75df1d9d-7b8e-4f45-8bab-7840748eff4a" containerName="nova-metadata-metadata" containerID="cri-o://f0fc22cf5a44c03f0d484b7968c47340d4fa6251f32377390b4c8bb4a74c0fee" gracePeriod=30 Jan 09 13:52:41 crc kubenswrapper[4919]: I0109 13:52:41.900245 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="75df1d9d-7b8e-4f45-8bab-7840748eff4a" containerName="nova-metadata-log" containerID="cri-o://92321d6d5c26b9ebcf1fbb3e5581bfee5fbcbd9adc8d1c4fff8f29d03b97c019" gracePeriod=30 Jan 09 13:52:41 crc kubenswrapper[4919]: I0109 13:52:41.926292 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8cacdea9-e934-4647-b39b-073c88c9b5a8","Type":"ContainerStarted","Data":"b9d5741cf3c4c736dbdef25a6fe6e7cf081feea1bc76a7e0bb970d82284bdbc8"} Jan 09 13:52:41 crc kubenswrapper[4919]: I0109 13:52:41.937677 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d743b55e-cd0c-4fae-9252-0b7fdba935cb","Type":"ContainerStarted","Data":"83535fbcb7a353ea1e88e3fa91e6d756065b846945a82605c30f9f9932b85a6f"} Jan 09 13:52:41 crc kubenswrapper[4919]: I0109 13:52:41.937731 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"d743b55e-cd0c-4fae-9252-0b7fdba935cb","Type":"ContainerStarted","Data":"7b34bbb55691307412bc04bc16931f1224f715d822c0f0e3034ebac9ac8238f1"} Jan 09 13:52:41 crc kubenswrapper[4919]: I0109 13:52:41.974128 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.7306303229999997 podStartE2EDuration="8.974098893s" podCreationTimestamp="2026-01-09 13:52:33 +0000 UTC" firstStartedPulling="2026-01-09 13:52:34.288233989 +0000 UTC m=+1333.836073449" lastFinishedPulling="2026-01-09 13:52:40.531702569 +0000 UTC m=+1340.079542019" observedRunningTime="2026-01-09 13:52:41.956988467 +0000 UTC m=+1341.504827927" watchObservedRunningTime="2026-01-09 13:52:41.974098893 +0000 UTC m=+1341.521938343" Jan 09 13:52:41 crc kubenswrapper[4919]: I0109 13:52:41.998766 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.622509489 podStartE2EDuration="9.998738667s" podCreationTimestamp="2026-01-09 13:52:32 +0000 UTC" firstStartedPulling="2026-01-09 13:52:34.153659976 +0000 UTC m=+1333.701499426" lastFinishedPulling="2026-01-09 13:52:40.529889144 +0000 UTC m=+1340.077728604" observedRunningTime="2026-01-09 13:52:41.988447071 +0000 UTC m=+1341.536286521" watchObservedRunningTime="2026-01-09 13:52:41.998738667 +0000 UTC m=+1341.546578117" Jan 09 13:52:42 crc kubenswrapper[4919]: I0109 13:52:42.045723 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.668125666 podStartE2EDuration="10.045697997s" podCreationTimestamp="2026-01-09 13:52:32 +0000 UTC" firstStartedPulling="2026-01-09 13:52:34.153987845 +0000 UTC m=+1333.701827285" lastFinishedPulling="2026-01-09 13:52:40.531560166 +0000 UTC m=+1340.079399616" observedRunningTime="2026-01-09 13:52:42.042093677 +0000 UTC m=+1341.589933137" watchObservedRunningTime="2026-01-09 13:52:42.045697997 +0000 UTC m=+1341.593537457" Jan 09 13:52:42 crc kubenswrapper[4919]: I0109 13:52:42.107899 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.555728316 podStartE2EDuration="10.107876326s" podCreationTimestamp="2026-01-09 13:52:32 +0000 UTC" firstStartedPulling="2026-01-09 13:52:33.982423401 +0000 UTC m=+1333.530262861" lastFinishedPulling="2026-01-09 13:52:40.534571421 +0000 UTC m=+1340.082410871" observedRunningTime="2026-01-09 13:52:42.089662783 +0000 UTC m=+1341.637502233" watchObservedRunningTime="2026-01-09 13:52:42.107876326 +0000 UTC m=+1341.655715776" Jan 09 13:52:42 crc kubenswrapper[4919]: I0109 13:52:42.288952 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 09 13:52:42 crc kubenswrapper[4919]: I0109 13:52:42.398897 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzqgw\" (UniqueName: \"kubernetes.io/projected/6bf2dcbc-c28e-4fd3-81d7-f766225e964d-kube-api-access-jzqgw\") pod \"6bf2dcbc-c28e-4fd3-81d7-f766225e964d\" (UID: \"6bf2dcbc-c28e-4fd3-81d7-f766225e964d\") " Jan 09 13:52:42 crc kubenswrapper[4919]: I0109 13:52:42.419585 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bf2dcbc-c28e-4fd3-81d7-f766225e964d-kube-api-access-jzqgw" (OuterVolumeSpecName: "kube-api-access-jzqgw") pod "6bf2dcbc-c28e-4fd3-81d7-f766225e964d" (UID: "6bf2dcbc-c28e-4fd3-81d7-f766225e964d"). 
InnerVolumeSpecName "kube-api-access-jzqgw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:52:42 crc kubenswrapper[4919]: I0109 13:52:42.502085 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzqgw\" (UniqueName: \"kubernetes.io/projected/6bf2dcbc-c28e-4fd3-81d7-f766225e964d-kube-api-access-jzqgw\") on node \"crc\" DevicePath \"\"" Jan 09 13:52:42 crc kubenswrapper[4919]: I0109 13:52:42.892963 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 09 13:52:42 crc kubenswrapper[4919]: I0109 13:52:42.948943 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6bf2dcbc-c28e-4fd3-81d7-f766225e964d","Type":"ContainerDied","Data":"d21db00e71c00bffbd35d0e7c99cfcd481054d94e6c8ca4525a8543d191563c0"} Jan 09 13:52:42 crc kubenswrapper[4919]: I0109 13:52:42.949003 4919 scope.go:117] "RemoveContainer" containerID="4d950ab99da10547fc7ebc3ce465f153a0328a4db84918dec53a7f4c50456878" Jan 09 13:52:42 crc kubenswrapper[4919]: I0109 13:52:42.949133 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 09 13:52:42 crc kubenswrapper[4919]: I0109 13:52:42.953987 4919 generic.go:334] "Generic (PLEG): container finished" podID="75df1d9d-7b8e-4f45-8bab-7840748eff4a" containerID="f0fc22cf5a44c03f0d484b7968c47340d4fa6251f32377390b4c8bb4a74c0fee" exitCode=0 Jan 09 13:52:42 crc kubenswrapper[4919]: I0109 13:52:42.954016 4919 generic.go:334] "Generic (PLEG): container finished" podID="75df1d9d-7b8e-4f45-8bab-7840748eff4a" containerID="92321d6d5c26b9ebcf1fbb3e5581bfee5fbcbd9adc8d1c4fff8f29d03b97c019" exitCode=143 Jan 09 13:52:42 crc kubenswrapper[4919]: I0109 13:52:42.954885 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 09 13:52:42 crc kubenswrapper[4919]: I0109 13:52:42.955053 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"75df1d9d-7b8e-4f45-8bab-7840748eff4a","Type":"ContainerDied","Data":"f0fc22cf5a44c03f0d484b7968c47340d4fa6251f32377390b4c8bb4a74c0fee"} Jan 09 13:52:42 crc kubenswrapper[4919]: I0109 13:52:42.955089 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"75df1d9d-7b8e-4f45-8bab-7840748eff4a","Type":"ContainerDied","Data":"92321d6d5c26b9ebcf1fbb3e5581bfee5fbcbd9adc8d1c4fff8f29d03b97c019"} Jan 09 13:52:42 crc kubenswrapper[4919]: I0109 13:52:42.955104 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"75df1d9d-7b8e-4f45-8bab-7840748eff4a","Type":"ContainerDied","Data":"f6be2b8b427c48688f43c929875e549c4c0b137a76bc81ea89ee304ae87f68ef"} Jan 09 13:52:42 crc kubenswrapper[4919]: I0109 13:52:42.997728 4919 scope.go:117] "RemoveContainer" containerID="f0fc22cf5a44c03f0d484b7968c47340d4fa6251f32377390b4c8bb4a74c0fee" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.001579 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.010868 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75df1d9d-7b8e-4f45-8bab-7840748eff4a-combined-ca-bundle\") pod \"75df1d9d-7b8e-4f45-8bab-7840748eff4a\" (UID: \"75df1d9d-7b8e-4f45-8bab-7840748eff4a\") " Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.011014 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75df1d9d-7b8e-4f45-8bab-7840748eff4a-logs\") pod \"75df1d9d-7b8e-4f45-8bab-7840748eff4a\" (UID: \"75df1d9d-7b8e-4f45-8bab-7840748eff4a\") " Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.011112 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jztpb\" (UniqueName: \"kubernetes.io/projected/75df1d9d-7b8e-4f45-8bab-7840748eff4a-kube-api-access-jztpb\") pod \"75df1d9d-7b8e-4f45-8bab-7840748eff4a\" (UID: \"75df1d9d-7b8e-4f45-8bab-7840748eff4a\") " Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.011190 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75df1d9d-7b8e-4f45-8bab-7840748eff4a-config-data\") pod \"75df1d9d-7b8e-4f45-8bab-7840748eff4a\" (UID: \"75df1d9d-7b8e-4f45-8bab-7840748eff4a\") " Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.012999 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.013496 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75df1d9d-7b8e-4f45-8bab-7840748eff4a-logs" (OuterVolumeSpecName: "logs") pod "75df1d9d-7b8e-4f45-8bab-7840748eff4a" (UID: "75df1d9d-7b8e-4f45-8bab-7840748eff4a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.016827 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75df1d9d-7b8e-4f45-8bab-7840748eff4a-kube-api-access-jztpb" (OuterVolumeSpecName: "kube-api-access-jztpb") pod "75df1d9d-7b8e-4f45-8bab-7840748eff4a" (UID: "75df1d9d-7b8e-4f45-8bab-7840748eff4a"). InnerVolumeSpecName "kube-api-access-jztpb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.026835 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 09 13:52:43 crc kubenswrapper[4919]: E0109 13:52:43.027782 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bf2dcbc-c28e-4fd3-81d7-f766225e964d" containerName="kube-state-metrics" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.027890 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bf2dcbc-c28e-4fd3-81d7-f766225e964d" containerName="kube-state-metrics" Jan 09 13:52:43 crc kubenswrapper[4919]: E0109 13:52:43.027928 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75df1d9d-7b8e-4f45-8bab-7840748eff4a" containerName="nova-metadata-metadata" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.027941 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="75df1d9d-7b8e-4f45-8bab-7840748eff4a" containerName="nova-metadata-metadata" Jan 09 13:52:43 crc kubenswrapper[4919]: E0109 13:52:43.027954 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75df1d9d-7b8e-4f45-8bab-7840748eff4a" containerName="nova-metadata-log" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.027962 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="75df1d9d-7b8e-4f45-8bab-7840748eff4a" containerName="nova-metadata-log" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.028185 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bf2dcbc-c28e-4fd3-81d7-f766225e964d" containerName="kube-state-metrics" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.028225 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="75df1d9d-7b8e-4f45-8bab-7840748eff4a" containerName="nova-metadata-metadata" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.028241 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="75df1d9d-7b8e-4f45-8bab-7840748eff4a" containerName="nova-metadata-log" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.029106 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.029933 4919 scope.go:117] "RemoveContainer" containerID="92321d6d5c26b9ebcf1fbb3e5581bfee5fbcbd9adc8d1c4fff8f29d03b97c019" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.030327 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.032721 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.032950 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.055604 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75df1d9d-7b8e-4f45-8bab-7840748eff4a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75df1d9d-7b8e-4f45-8bab-7840748eff4a" (UID: "75df1d9d-7b8e-4f45-8bab-7840748eff4a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.060627 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75df1d9d-7b8e-4f45-8bab-7840748eff4a-config-data" (OuterVolumeSpecName: "config-data") pod "75df1d9d-7b8e-4f45-8bab-7840748eff4a" (UID: "75df1d9d-7b8e-4f45-8bab-7840748eff4a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.113688 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e1aa728-2078-4e6c-b738-0bc97b1f14ff-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"3e1aa728-2078-4e6c-b738-0bc97b1f14ff\") " pod="openstack/kube-state-metrics-0" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.113801 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/3e1aa728-2078-4e6c-b738-0bc97b1f14ff-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"3e1aa728-2078-4e6c-b738-0bc97b1f14ff\") " pod="openstack/kube-state-metrics-0" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.113878 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctff5\" (UniqueName: \"kubernetes.io/projected/3e1aa728-2078-4e6c-b738-0bc97b1f14ff-kube-api-access-ctff5\") pod \"kube-state-metrics-0\" (UID: \"3e1aa728-2078-4e6c-b738-0bc97b1f14ff\") " pod="openstack/kube-state-metrics-0" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.113906 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e1aa728-2078-4e6c-b738-0bc97b1f14ff-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"3e1aa728-2078-4e6c-b738-0bc97b1f14ff\") " pod="openstack/kube-state-metrics-0" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.114035 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jztpb\" (UniqueName: \"kubernetes.io/projected/75df1d9d-7b8e-4f45-8bab-7840748eff4a-kube-api-access-jztpb\") on node \"crc\" DevicePath \"\"" Jan 09 13:52:43 
crc kubenswrapper[4919]: I0109 13:52:43.114062 4919 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75df1d9d-7b8e-4f45-8bab-7840748eff4a-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.114074 4919 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75df1d9d-7b8e-4f45-8bab-7840748eff4a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.114086 4919 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75df1d9d-7b8e-4f45-8bab-7840748eff4a-logs\") on node \"crc\" DevicePath \"\"" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.127078 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.127610 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.153223 4919 scope.go:117] "RemoveContainer" containerID="f0fc22cf5a44c03f0d484b7968c47340d4fa6251f32377390b4c8bb4a74c0fee" Jan 09 13:52:43 crc kubenswrapper[4919]: E0109 13:52:43.153744 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0fc22cf5a44c03f0d484b7968c47340d4fa6251f32377390b4c8bb4a74c0fee\": container with ID starting with f0fc22cf5a44c03f0d484b7968c47340d4fa6251f32377390b4c8bb4a74c0fee not found: ID does not exist" containerID="f0fc22cf5a44c03f0d484b7968c47340d4fa6251f32377390b4c8bb4a74c0fee" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.153949 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0fc22cf5a44c03f0d484b7968c47340d4fa6251f32377390b4c8bb4a74c0fee"} err="failed to get container status \"f0fc22cf5a44c03f0d484b7968c47340d4fa6251f32377390b4c8bb4a74c0fee\": rpc error: code = NotFound desc = could not find container \"f0fc22cf5a44c03f0d484b7968c47340d4fa6251f32377390b4c8bb4a74c0fee\": container with ID starting with f0fc22cf5a44c03f0d484b7968c47340d4fa6251f32377390b4c8bb4a74c0fee not found: ID does not exist" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.153989 4919 scope.go:117] "RemoveContainer" containerID="92321d6d5c26b9ebcf1fbb3e5581bfee5fbcbd9adc8d1c4fff8f29d03b97c019" Jan 09 13:52:43 crc kubenswrapper[4919]: E0109 13:52:43.157176 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92321d6d5c26b9ebcf1fbb3e5581bfee5fbcbd9adc8d1c4fff8f29d03b97c019\": container with ID starting with 92321d6d5c26b9ebcf1fbb3e5581bfee5fbcbd9adc8d1c4fff8f29d03b97c019 not found: ID does not exist" containerID="92321d6d5c26b9ebcf1fbb3e5581bfee5fbcbd9adc8d1c4fff8f29d03b97c019" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.157234 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92321d6d5c26b9ebcf1fbb3e5581bfee5fbcbd9adc8d1c4fff8f29d03b97c019"} err="failed to get container status \"92321d6d5c26b9ebcf1fbb3e5581bfee5fbcbd9adc8d1c4fff8f29d03b97c019\": rpc error: code = NotFound desc = could not find container \"92321d6d5c26b9ebcf1fbb3e5581bfee5fbcbd9adc8d1c4fff8f29d03b97c019\": container with ID starting with 92321d6d5c26b9ebcf1fbb3e5581bfee5fbcbd9adc8d1c4fff8f29d03b97c019 not found: ID 
does not exist" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.157263 4919 scope.go:117] "RemoveContainer" containerID="f0fc22cf5a44c03f0d484b7968c47340d4fa6251f32377390b4c8bb4a74c0fee" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.158227 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0fc22cf5a44c03f0d484b7968c47340d4fa6251f32377390b4c8bb4a74c0fee"} err="failed to get container status \"f0fc22cf5a44c03f0d484b7968c47340d4fa6251f32377390b4c8bb4a74c0fee\": rpc error: code = NotFound desc = could not find container \"f0fc22cf5a44c03f0d484b7968c47340d4fa6251f32377390b4c8bb4a74c0fee\": container with ID starting with f0fc22cf5a44c03f0d484b7968c47340d4fa6251f32377390b4c8bb4a74c0fee not found: ID does not exist" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.158257 4919 scope.go:117] "RemoveContainer" containerID="92321d6d5c26b9ebcf1fbb3e5581bfee5fbcbd9adc8d1c4fff8f29d03b97c019" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.158548 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92321d6d5c26b9ebcf1fbb3e5581bfee5fbcbd9adc8d1c4fff8f29d03b97c019"} err="failed to get container status \"92321d6d5c26b9ebcf1fbb3e5581bfee5fbcbd9adc8d1c4fff8f29d03b97c019\": rpc error: code = NotFound desc = could not find container \"92321d6d5c26b9ebcf1fbb3e5581bfee5fbcbd9adc8d1c4fff8f29d03b97c019\": container with ID starting with 92321d6d5c26b9ebcf1fbb3e5581bfee5fbcbd9adc8d1c4fff8f29d03b97c019 not found: ID does not exist" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.215567 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/3e1aa728-2078-4e6c-b738-0bc97b1f14ff-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"3e1aa728-2078-4e6c-b738-0bc97b1f14ff\") " pod="openstack/kube-state-metrics-0" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.215603 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctff5\" (UniqueName: \"kubernetes.io/projected/3e1aa728-2078-4e6c-b738-0bc97b1f14ff-kube-api-access-ctff5\") pod \"kube-state-metrics-0\" (UID: \"3e1aa728-2078-4e6c-b738-0bc97b1f14ff\") " pod="openstack/kube-state-metrics-0" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.215630 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e1aa728-2078-4e6c-b738-0bc97b1f14ff-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"3e1aa728-2078-4e6c-b738-0bc97b1f14ff\") " pod="openstack/kube-state-metrics-0" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.215789 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e1aa728-2078-4e6c-b738-0bc97b1f14ff-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"3e1aa728-2078-4e6c-b738-0bc97b1f14ff\") " pod="openstack/kube-state-metrics-0" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.220810 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/3e1aa728-2078-4e6c-b738-0bc97b1f14ff-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"3e1aa728-2078-4e6c-b738-0bc97b1f14ff\") " pod="openstack/kube-state-metrics-0" Jan 09 13:52:43 crc 
kubenswrapper[4919]: I0109 13:52:43.220950 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e1aa728-2078-4e6c-b738-0bc97b1f14ff-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"3e1aa728-2078-4e6c-b738-0bc97b1f14ff\") " pod="openstack/kube-state-metrics-0" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.220995 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e1aa728-2078-4e6c-b738-0bc97b1f14ff-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"3e1aa728-2078-4e6c-b738-0bc97b1f14ff\") " pod="openstack/kube-state-metrics-0" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.236954 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctff5\" (UniqueName: \"kubernetes.io/projected/3e1aa728-2078-4e6c-b738-0bc97b1f14ff-kube-api-access-ctff5\") pod \"kube-state-metrics-0\" (UID: \"3e1aa728-2078-4e6c-b738-0bc97b1f14ff\") " pod="openstack/kube-state-metrics-0" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.294281 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.307471 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.332368 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.334065 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.341570 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.341757 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.345989 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.423621 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/576e43fc-19df-4204-b3b1-1b829644cbf0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"576e43fc-19df-4204-b3b1-1b829644cbf0\") " pod="openstack/nova-metadata-0" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.423686 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpv9m\" (UniqueName: \"kubernetes.io/projected/576e43fc-19df-4204-b3b1-1b829644cbf0-kube-api-access-zpv9m\") pod \"nova-metadata-0\" (UID: \"576e43fc-19df-4204-b3b1-1b829644cbf0\") " pod="openstack/nova-metadata-0" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.423726 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/576e43fc-19df-4204-b3b1-1b829644cbf0-config-data\") pod \"nova-metadata-0\" (UID: \"576e43fc-19df-4204-b3b1-1b829644cbf0\") " pod="openstack/nova-metadata-0" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.423747 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/576e43fc-19df-4204-b3b1-1b829644cbf0-logs\") pod \"nova-metadata-0\" (UID: \"576e43fc-19df-4204-b3b1-1b829644cbf0\") " pod="openstack/nova-metadata-0" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.423774 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/576e43fc-19df-4204-b3b1-1b829644cbf0-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"576e43fc-19df-4204-b3b1-1b829644cbf0\") " pod="openstack/nova-metadata-0" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.431168 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.431393 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.452998 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.462164 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.526149 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/576e43fc-19df-4204-b3b1-1b829644cbf0-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"576e43fc-19df-4204-b3b1-1b829644cbf0\") " pod="openstack/nova-metadata-0" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.526399 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/576e43fc-19df-4204-b3b1-1b829644cbf0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"576e43fc-19df-4204-b3b1-1b829644cbf0\") " pod="openstack/nova-metadata-0" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.526428 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpv9m\" (UniqueName: \"kubernetes.io/projected/576e43fc-19df-4204-b3b1-1b829644cbf0-kube-api-access-zpv9m\") pod \"nova-metadata-0\" (UID: \"576e43fc-19df-4204-b3b1-1b829644cbf0\") " pod="openstack/nova-metadata-0" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.526482 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/576e43fc-19df-4204-b3b1-1b829644cbf0-config-data\") pod \"nova-metadata-0\" (UID: \"576e43fc-19df-4204-b3b1-1b829644cbf0\") " pod="openstack/nova-metadata-0" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.526508 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/576e43fc-19df-4204-b3b1-1b829644cbf0-logs\") pod \"nova-metadata-0\" (UID: \"576e43fc-19df-4204-b3b1-1b829644cbf0\") " pod="openstack/nova-metadata-0" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.528577 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/576e43fc-19df-4204-b3b1-1b829644cbf0-logs\") pod \"nova-metadata-0\" (UID: \"576e43fc-19df-4204-b3b1-1b829644cbf0\") " pod="openstack/nova-metadata-0" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.530795 4919 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/576e43fc-19df-4204-b3b1-1b829644cbf0-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"576e43fc-19df-4204-b3b1-1b829644cbf0\") " pod="openstack/nova-metadata-0" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.531367 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/576e43fc-19df-4204-b3b1-1b829644cbf0-config-data\") pod \"nova-metadata-0\" (UID: \"576e43fc-19df-4204-b3b1-1b829644cbf0\") " pod="openstack/nova-metadata-0" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.534939 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/576e43fc-19df-4204-b3b1-1b829644cbf0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"576e43fc-19df-4204-b3b1-1b829644cbf0\") " pod="openstack/nova-metadata-0" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.548408 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpv9m\" (UniqueName: \"kubernetes.io/projected/576e43fc-19df-4204-b3b1-1b829644cbf0-kube-api-access-zpv9m\") pod \"nova-metadata-0\" (UID: \"576e43fc-19df-4204-b3b1-1b829644cbf0\") " pod="openstack/nova-metadata-0" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.651232 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.651713 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.656448 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-647df7b8c5-8qq6l" Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.746633 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-jtvzp"] Jan 09 13:52:43 crc kubenswrapper[4919]: I0109 13:52:43.746895 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75dbb546bf-jtvzp" podUID="a226a0fa-ed83-40e1-933e-af4c16c363b2" containerName="dnsmasq-dns" containerID="cri-o://644aa489382e29879af00ec8bbe5d0a2e65f378ec2dc65596c6c480e91714ac7" gracePeriod=10 Jan 09 13:52:44 crc kubenswrapper[4919]: I0109 13:52:44.139426 4919 generic.go:334] "Generic (PLEG): container finished" podID="a226a0fa-ed83-40e1-933e-af4c16c363b2" containerID="644aa489382e29879af00ec8bbe5d0a2e65f378ec2dc65596c6c480e91714ac7" exitCode=0 Jan 09 13:52:44 crc kubenswrapper[4919]: I0109 13:52:44.139855 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dbb546bf-jtvzp" event={"ID":"a226a0fa-ed83-40e1-933e-af4c16c363b2","Type":"ContainerDied","Data":"644aa489382e29879af00ec8bbe5d0a2e65f378ec2dc65596c6c480e91714ac7"} Jan 09 13:52:44 crc kubenswrapper[4919]: I0109 13:52:44.192665 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 09 13:52:44 crc kubenswrapper[4919]: I0109 13:52:44.220260 4919 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d743b55e-cd0c-4fae-9252-0b7fdba935cb" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.188:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 09 13:52:44 crc kubenswrapper[4919]: I0109 
13:52:44.220780 4919 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d743b55e-cd0c-4fae-9252-0b7fdba935cb" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.188:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 09 13:52:44 crc kubenswrapper[4919]: I0109 13:52:44.258947 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 09 13:52:44 crc kubenswrapper[4919]: W0109 13:52:44.261843 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e1aa728_2078_4e6c_b738_0bc97b1f14ff.slice/crio-dfd78bcbd0a3958496af9dbe6b161cdaa2d8a28f0023d24380d86f4f2c81496d WatchSource:0}: Error finding container dfd78bcbd0a3958496af9dbe6b161cdaa2d8a28f0023d24380d86f4f2c81496d: Status 404 returned error can't find the container with id dfd78bcbd0a3958496af9dbe6b161cdaa2d8a28f0023d24380d86f4f2c81496d Jan 09 13:52:44 crc kubenswrapper[4919]: I0109 13:52:44.342574 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 13:52:44 crc kubenswrapper[4919]: I0109 13:52:44.542611 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75dbb546bf-jtvzp" Jan 09 13:52:44 crc kubenswrapper[4919]: I0109 13:52:44.571709 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdkkn\" (UniqueName: \"kubernetes.io/projected/a226a0fa-ed83-40e1-933e-af4c16c363b2-kube-api-access-wdkkn\") pod \"a226a0fa-ed83-40e1-933e-af4c16c363b2\" (UID: \"a226a0fa-ed83-40e1-933e-af4c16c363b2\") " Jan 09 13:52:44 crc kubenswrapper[4919]: I0109 13:52:44.571794 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a226a0fa-ed83-40e1-933e-af4c16c363b2-dns-swift-storage-0\") pod \"a226a0fa-ed83-40e1-933e-af4c16c363b2\" (UID: \"a226a0fa-ed83-40e1-933e-af4c16c363b2\") " Jan 09 13:52:44 crc kubenswrapper[4919]: I0109 13:52:44.571894 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a226a0fa-ed83-40e1-933e-af4c16c363b2-dns-svc\") pod \"a226a0fa-ed83-40e1-933e-af4c16c363b2\" (UID: \"a226a0fa-ed83-40e1-933e-af4c16c363b2\") " Jan 09 13:52:44 crc kubenswrapper[4919]: I0109 13:52:44.572052 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a226a0fa-ed83-40e1-933e-af4c16c363b2-ovsdbserver-sb\") pod \"a226a0fa-ed83-40e1-933e-af4c16c363b2\" (UID: \"a226a0fa-ed83-40e1-933e-af4c16c363b2\") " Jan 09 13:52:44 crc kubenswrapper[4919]: I0109 13:52:44.572101 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a226a0fa-ed83-40e1-933e-af4c16c363b2-config\") pod \"a226a0fa-ed83-40e1-933e-af4c16c363b2\" (UID: \"a226a0fa-ed83-40e1-933e-af4c16c363b2\") " Jan 09 13:52:44 crc kubenswrapper[4919]: I0109 13:52:44.572137 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a226a0fa-ed83-40e1-933e-af4c16c363b2-ovsdbserver-nb\") pod \"a226a0fa-ed83-40e1-933e-af4c16c363b2\" (UID: \"a226a0fa-ed83-40e1-933e-af4c16c363b2\") " Jan 09 13:52:44 crc kubenswrapper[4919]: I0109 13:52:44.590542 4919 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a226a0fa-ed83-40e1-933e-af4c16c363b2-kube-api-access-wdkkn" (OuterVolumeSpecName: "kube-api-access-wdkkn") pod "a226a0fa-ed83-40e1-933e-af4c16c363b2" (UID: "a226a0fa-ed83-40e1-933e-af4c16c363b2"). InnerVolumeSpecName "kube-api-access-wdkkn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:52:44 crc kubenswrapper[4919]: I0109 13:52:44.677244 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdkkn\" (UniqueName: \"kubernetes.io/projected/a226a0fa-ed83-40e1-933e-af4c16c363b2-kube-api-access-wdkkn\") on node \"crc\" DevicePath \"\"" Jan 09 13:52:44 crc kubenswrapper[4919]: I0109 13:52:44.770144 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a226a0fa-ed83-40e1-933e-af4c16c363b2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a226a0fa-ed83-40e1-933e-af4c16c363b2" (UID: "a226a0fa-ed83-40e1-933e-af4c16c363b2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:52:44 crc kubenswrapper[4919]: I0109 13:52:44.778907 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bf2dcbc-c28e-4fd3-81d7-f766225e964d" path="/var/lib/kubelet/pods/6bf2dcbc-c28e-4fd3-81d7-f766225e964d/volumes" Jan 09 13:52:44 crc kubenswrapper[4919]: I0109 13:52:44.779428 4919 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a226a0fa-ed83-40e1-933e-af4c16c363b2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 09 13:52:44 crc kubenswrapper[4919]: I0109 13:52:44.779688 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75df1d9d-7b8e-4f45-8bab-7840748eff4a" path="/var/lib/kubelet/pods/75df1d9d-7b8e-4f45-8bab-7840748eff4a/volumes" Jan 09 13:52:44 crc kubenswrapper[4919]: I0109 13:52:44.786519 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a226a0fa-ed83-40e1-933e-af4c16c363b2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a226a0fa-ed83-40e1-933e-af4c16c363b2" (UID: "a226a0fa-ed83-40e1-933e-af4c16c363b2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:52:44 crc kubenswrapper[4919]: I0109 13:52:44.787823 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a226a0fa-ed83-40e1-933e-af4c16c363b2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a226a0fa-ed83-40e1-933e-af4c16c363b2" (UID: "a226a0fa-ed83-40e1-933e-af4c16c363b2"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:52:44 crc kubenswrapper[4919]: I0109 13:52:44.807540 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a226a0fa-ed83-40e1-933e-af4c16c363b2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a226a0fa-ed83-40e1-933e-af4c16c363b2" (UID: "a226a0fa-ed83-40e1-933e-af4c16c363b2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:52:44 crc kubenswrapper[4919]: I0109 13:52:44.841501 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a226a0fa-ed83-40e1-933e-af4c16c363b2-config" (OuterVolumeSpecName: "config") pod "a226a0fa-ed83-40e1-933e-af4c16c363b2" (UID: "a226a0fa-ed83-40e1-933e-af4c16c363b2"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:52:44 crc kubenswrapper[4919]: I0109 13:52:44.882706 4919 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a226a0fa-ed83-40e1-933e-af4c16c363b2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 09 13:52:44 crc kubenswrapper[4919]: I0109 13:52:44.882746 4919 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a226a0fa-ed83-40e1-933e-af4c16c363b2-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 09 13:52:44 crc kubenswrapper[4919]: I0109 13:52:44.882760 4919 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a226a0fa-ed83-40e1-933e-af4c16c363b2-config\") on node \"crc\" DevicePath \"\"" Jan 09 13:52:44 crc kubenswrapper[4919]: I0109 13:52:44.882772 4919 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a226a0fa-ed83-40e1-933e-af4c16c363b2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 09 13:52:44 crc kubenswrapper[4919]: I0109 13:52:44.963391 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 09 13:52:44 crc kubenswrapper[4919]: I0109 13:52:44.963934 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="01076e6d-3d6d-41d3-ba92-c367f1540745" containerName="proxy-httpd" containerID="cri-o://bdcfa1c6ed0cbc2437efd5cca22b1b7b242c3a977b76891eb24ea46cc6437848" gracePeriod=30 Jan 09 13:52:44 crc kubenswrapper[4919]: I0109 13:52:44.964252 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="01076e6d-3d6d-41d3-ba92-c367f1540745" containerName="sg-core" containerID="cri-o://da34d5139a8bee1b580523529df89c516241a81c9e9c0f652503e2a1ecf14b50" gracePeriod=30 Jan 09 13:52:44 crc kubenswrapper[4919]: I0109 13:52:44.964367 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="01076e6d-3d6d-41d3-ba92-c367f1540745" containerName="ceilometer-notification-agent" containerID="cri-o://d0f36898849e0cc2e3e94d5c6f6de630f63d18a64212fc02cbb43284f2dcc32e" gracePeriod=30 Jan 09 13:52:44 crc kubenswrapper[4919]: I0109 13:52:44.964409 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="01076e6d-3d6d-41d3-ba92-c367f1540745" containerName="ceilometer-central-agent" containerID="cri-o://f05f828e6d1ab457f145328d459b41cf9aa8929ecbd7fd40d610d93c8a36b46d" gracePeriod=30 Jan 09 13:52:45 crc kubenswrapper[4919]: I0109 13:52:45.181417 4919 generic.go:334] "Generic (PLEG): container finished" podID="01076e6d-3d6d-41d3-ba92-c367f1540745" containerID="da34d5139a8bee1b580523529df89c516241a81c9e9c0f652503e2a1ecf14b50" exitCode=2 Jan 09 13:52:45 crc kubenswrapper[4919]: I0109 13:52:45.181501 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01076e6d-3d6d-41d3-ba92-c367f1540745","Type":"ContainerDied","Data":"da34d5139a8bee1b580523529df89c516241a81c9e9c0f652503e2a1ecf14b50"} Jan 09 13:52:45 crc kubenswrapper[4919]: I0109 13:52:45.194350 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"576e43fc-19df-4204-b3b1-1b829644cbf0","Type":"ContainerStarted","Data":"f78743d0bd753eea3912891e5a7236cf2e560dac09decdbd15b4061d722b6223"} Jan 09 13:52:45 
crc kubenswrapper[4919]: I0109 13:52:45.194404 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"576e43fc-19df-4204-b3b1-1b829644cbf0","Type":"ContainerStarted","Data":"b54050b8f46320ad4d4b9026fd741923ce20022c73999f128a29ee0a4a77c70a"} Jan 09 13:52:45 crc kubenswrapper[4919]: I0109 13:52:45.194417 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"576e43fc-19df-4204-b3b1-1b829644cbf0","Type":"ContainerStarted","Data":"f11650efe25d1c26e3f6b761f6b2dd413d251ec96dc46a2f73ff7535c8b33588"} Jan 09 13:52:45 crc kubenswrapper[4919]: I0109 13:52:45.199352 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3e1aa728-2078-4e6c-b738-0bc97b1f14ff","Type":"ContainerStarted","Data":"dfd78bcbd0a3958496af9dbe6b161cdaa2d8a28f0023d24380d86f4f2c81496d"} Jan 09 13:52:45 crc kubenswrapper[4919]: I0109 13:52:45.218873 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75dbb546bf-jtvzp" Jan 09 13:52:45 crc kubenswrapper[4919]: I0109 13:52:45.219939 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dbb546bf-jtvzp" event={"ID":"a226a0fa-ed83-40e1-933e-af4c16c363b2","Type":"ContainerDied","Data":"4e042c2a1a5bb0c20a6afdb4d04f58cf0a76a6defae24013861591df4ff7f82f"} Jan 09 13:52:45 crc kubenswrapper[4919]: I0109 13:52:45.220005 4919 scope.go:117] "RemoveContainer" containerID="644aa489382e29879af00ec8bbe5d0a2e65f378ec2dc65596c6c480e91714ac7" Jan 09 13:52:45 crc kubenswrapper[4919]: I0109 13:52:45.244407 4919 scope.go:117] "RemoveContainer" containerID="d1d1a27bd8f462c61b17b0ad36f6ab28c0277aa69f5a2c1be1a558f291239aa2" Jan 09 13:52:45 crc kubenswrapper[4919]: I0109 13:52:45.259200 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.259178103 podStartE2EDuration="2.259178103s" podCreationTimestamp="2026-01-09 13:52:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:52:45.22092885 +0000 UTC m=+1344.768768320" watchObservedRunningTime="2026-01-09 13:52:45.259178103 +0000 UTC m=+1344.807017553" Jan 09 13:52:45 crc kubenswrapper[4919]: I0109 13:52:45.274276 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-jtvzp"] Jan 09 13:52:45 crc kubenswrapper[4919]: I0109 13:52:45.285284 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-jtvzp"] Jan 09 13:52:46 crc kubenswrapper[4919]: I0109 13:52:46.242761 4919 generic.go:334] "Generic (PLEG): container finished" podID="01076e6d-3d6d-41d3-ba92-c367f1540745" containerID="bdcfa1c6ed0cbc2437efd5cca22b1b7b242c3a977b76891eb24ea46cc6437848" exitCode=0 Jan 09 13:52:46 crc kubenswrapper[4919]: I0109 13:52:46.243095 4919 generic.go:334] "Generic (PLEG): container finished" podID="01076e6d-3d6d-41d3-ba92-c367f1540745" containerID="f05f828e6d1ab457f145328d459b41cf9aa8929ecbd7fd40d610d93c8a36b46d" exitCode=0 Jan 09 13:52:46 crc kubenswrapper[4919]: I0109 13:52:46.242920 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01076e6d-3d6d-41d3-ba92-c367f1540745","Type":"ContainerDied","Data":"bdcfa1c6ed0cbc2437efd5cca22b1b7b242c3a977b76891eb24ea46cc6437848"} Jan 09 13:52:46 crc kubenswrapper[4919]: I0109 13:52:46.243162 4919 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/ceilometer-0" event={"ID":"01076e6d-3d6d-41d3-ba92-c367f1540745","Type":"ContainerDied","Data":"f05f828e6d1ab457f145328d459b41cf9aa8929ecbd7fd40d610d93c8a36b46d"} Jan 09 13:52:46 crc kubenswrapper[4919]: I0109 13:52:46.244755 4919 generic.go:334] "Generic (PLEG): container finished" podID="8c75bc15-846d-4551-9f91-8d16579b5e82" containerID="b9e683a0a8599be712538688894e252bc28eff4016b8ab99c536b7cb06635b68" exitCode=0 Jan 09 13:52:46 crc kubenswrapper[4919]: I0109 13:52:46.244826 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-jtq7r" event={"ID":"8c75bc15-846d-4551-9f91-8d16579b5e82","Type":"ContainerDied","Data":"b9e683a0a8599be712538688894e252bc28eff4016b8ab99c536b7cb06635b68"} Jan 09 13:52:46 crc kubenswrapper[4919]: I0109 13:52:46.246406 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3e1aa728-2078-4e6c-b738-0bc97b1f14ff","Type":"ContainerStarted","Data":"c6d7e1ca92b8f2cb64e3aa1b8b00ea9e9aa7feebb3fbf5594077ee3c2753db76"} Jan 09 13:52:46 crc kubenswrapper[4919]: I0109 13:52:46.246512 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 09 13:52:46 crc kubenswrapper[4919]: I0109 13:52:46.280162 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.53722185 podStartE2EDuration="4.280138038s" podCreationTimestamp="2026-01-09 13:52:42 +0000 UTC" firstStartedPulling="2026-01-09 13:52:44.267124048 +0000 UTC m=+1343.814963498" lastFinishedPulling="2026-01-09 13:52:45.010040236 +0000 UTC m=+1344.557879686" observedRunningTime="2026-01-09 13:52:46.274280252 +0000 UTC m=+1345.822119702" watchObservedRunningTime="2026-01-09 13:52:46.280138038 +0000 UTC m=+1345.827977498" Jan 09 13:52:46 crc kubenswrapper[4919]: I0109 13:52:46.792429 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a226a0fa-ed83-40e1-933e-af4c16c363b2" path="/var/lib/kubelet/pods/a226a0fa-ed83-40e1-933e-af4c16c363b2/volumes" Jan 09 13:52:47 crc kubenswrapper[4919]: I0109 13:52:47.649361 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-jtq7r" Jan 09 13:52:47 crc kubenswrapper[4919]: I0109 13:52:47.742718 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c75bc15-846d-4551-9f91-8d16579b5e82-config-data\") pod \"8c75bc15-846d-4551-9f91-8d16579b5e82\" (UID: \"8c75bc15-846d-4551-9f91-8d16579b5e82\") " Jan 09 13:52:47 crc kubenswrapper[4919]: I0109 13:52:47.742845 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c75bc15-846d-4551-9f91-8d16579b5e82-scripts\") pod \"8c75bc15-846d-4551-9f91-8d16579b5e82\" (UID: \"8c75bc15-846d-4551-9f91-8d16579b5e82\") " Jan 09 13:52:47 crc kubenswrapper[4919]: I0109 13:52:47.742883 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c75bc15-846d-4551-9f91-8d16579b5e82-combined-ca-bundle\") pod \"8c75bc15-846d-4551-9f91-8d16579b5e82\" (UID: \"8c75bc15-846d-4551-9f91-8d16579b5e82\") " Jan 09 13:52:47 crc kubenswrapper[4919]: I0109 13:52:47.742916 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcc5r\" (UniqueName: \"kubernetes.io/projected/8c75bc15-846d-4551-9f91-8d16579b5e82-kube-api-access-xcc5r\") pod \"8c75bc15-846d-4551-9f91-8d16579b5e82\" (UID: \"8c75bc15-846d-4551-9f91-8d16579b5e82\") " Jan 09 13:52:47 crc kubenswrapper[4919]: I0109 13:52:47.749740 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c75bc15-846d-4551-9f91-8d16579b5e82-kube-api-access-xcc5r" (OuterVolumeSpecName: "kube-api-access-xcc5r") pod "8c75bc15-846d-4551-9f91-8d16579b5e82" (UID: "8c75bc15-846d-4551-9f91-8d16579b5e82"). InnerVolumeSpecName "kube-api-access-xcc5r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:52:47 crc kubenswrapper[4919]: I0109 13:52:47.751160 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c75bc15-846d-4551-9f91-8d16579b5e82-scripts" (OuterVolumeSpecName: "scripts") pod "8c75bc15-846d-4551-9f91-8d16579b5e82" (UID: "8c75bc15-846d-4551-9f91-8d16579b5e82"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:52:47 crc kubenswrapper[4919]: I0109 13:52:47.778641 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c75bc15-846d-4551-9f91-8d16579b5e82-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8c75bc15-846d-4551-9f91-8d16579b5e82" (UID: "8c75bc15-846d-4551-9f91-8d16579b5e82"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:52:47 crc kubenswrapper[4919]: I0109 13:52:47.790438 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c75bc15-846d-4551-9f91-8d16579b5e82-config-data" (OuterVolumeSpecName: "config-data") pod "8c75bc15-846d-4551-9f91-8d16579b5e82" (UID: "8c75bc15-846d-4551-9f91-8d16579b5e82"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:52:47 crc kubenswrapper[4919]: I0109 13:52:47.845831 4919 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c75bc15-846d-4551-9f91-8d16579b5e82-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 13:52:47 crc kubenswrapper[4919]: I0109 13:52:47.845867 4919 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c75bc15-846d-4551-9f91-8d16579b5e82-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 13:52:47 crc kubenswrapper[4919]: I0109 13:52:47.845884 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcc5r\" (UniqueName: \"kubernetes.io/projected/8c75bc15-846d-4551-9f91-8d16579b5e82-kube-api-access-xcc5r\") on node \"crc\" DevicePath \"\"" Jan 09 13:52:47 crc kubenswrapper[4919]: I0109 13:52:47.845894 4919 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c75bc15-846d-4551-9f91-8d16579b5e82-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 13:52:48 crc kubenswrapper[4919]: I0109 13:52:48.266066 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-jtq7r" Jan 09 13:52:48 crc kubenswrapper[4919]: I0109 13:52:48.266430 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-jtq7r" event={"ID":"8c75bc15-846d-4551-9f91-8d16579b5e82","Type":"ContainerDied","Data":"dbdfd4105fcc15b5c04560f839f12b6805d251215a181bb91510303727a89716"} Jan 09 13:52:48 crc kubenswrapper[4919]: I0109 13:52:48.266471 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dbdfd4105fcc15b5c04560f839f12b6805d251215a181bb91510303727a89716" Jan 09 13:52:48 crc kubenswrapper[4919]: I0109 13:52:48.268136 4919 generic.go:334] "Generic (PLEG): container finished" podID="01ba833a-4cf7-4caf-8d94-efc794319d9a" containerID="8dbf80694e07f9443f4d3aaf46a5ebefda2d8a6831f24a6f4281ac7e7957ce35" exitCode=0 Jan 09 13:52:48 crc kubenswrapper[4919]: I0109 13:52:48.268168 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-dflxc" event={"ID":"01ba833a-4cf7-4caf-8d94-efc794319d9a","Type":"ContainerDied","Data":"8dbf80694e07f9443f4d3aaf46a5ebefda2d8a6831f24a6f4281ac7e7957ce35"} Jan 09 13:52:48 crc kubenswrapper[4919]: I0109 13:52:48.435583 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 09 13:52:48 crc kubenswrapper[4919]: I0109 13:52:48.436241 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d743b55e-cd0c-4fae-9252-0b7fdba935cb" containerName="nova-api-log" containerID="cri-o://7b34bbb55691307412bc04bc16931f1224f715d822c0f0e3034ebac9ac8238f1" gracePeriod=30 Jan 09 13:52:48 crc kubenswrapper[4919]: I0109 13:52:48.436833 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d743b55e-cd0c-4fae-9252-0b7fdba935cb" containerName="nova-api-api" containerID="cri-o://83535fbcb7a353ea1e88e3fa91e6d756065b846945a82605c30f9f9932b85a6f" gracePeriod=30 Jan 09 13:52:48 crc kubenswrapper[4919]: I0109 13:52:48.461257 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 13:52:48 crc kubenswrapper[4919]: I0109 13:52:48.461464 4919 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/nova-scheduler-0" podUID="8cacdea9-e934-4647-b39b-073c88c9b5a8" containerName="nova-scheduler-scheduler" containerID="cri-o://b9d5741cf3c4c736dbdef25a6fe6e7cf081feea1bc76a7e0bb970d82284bdbc8" gracePeriod=30 Jan 09 13:52:48 crc kubenswrapper[4919]: I0109 13:52:48.486296 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 13:52:48 crc kubenswrapper[4919]: I0109 13:52:48.486565 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="576e43fc-19df-4204-b3b1-1b829644cbf0" containerName="nova-metadata-log" containerID="cri-o://b54050b8f46320ad4d4b9026fd741923ce20022c73999f128a29ee0a4a77c70a" gracePeriod=30 Jan 09 13:52:48 crc kubenswrapper[4919]: I0109 13:52:48.487128 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="576e43fc-19df-4204-b3b1-1b829644cbf0" containerName="nova-metadata-metadata" containerID="cri-o://f78743d0bd753eea3912891e5a7236cf2e560dac09decdbd15b4061d722b6223" gracePeriod=30 Jan 09 13:52:48 crc kubenswrapper[4919]: I0109 13:52:48.651854 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 09 13:52:48 crc kubenswrapper[4919]: I0109 13:52:48.651907 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 09 13:52:48 crc kubenswrapper[4919]: I0109 13:52:48.891295 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 09 13:52:48 crc kubenswrapper[4919]: I0109 13:52:48.967160 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01076e6d-3d6d-41d3-ba92-c367f1540745-config-data\") pod \"01076e6d-3d6d-41d3-ba92-c367f1540745\" (UID: \"01076e6d-3d6d-41d3-ba92-c367f1540745\") " Jan 09 13:52:48 crc kubenswrapper[4919]: I0109 13:52:48.968402 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqsqg\" (UniqueName: \"kubernetes.io/projected/01076e6d-3d6d-41d3-ba92-c367f1540745-kube-api-access-sqsqg\") pod \"01076e6d-3d6d-41d3-ba92-c367f1540745\" (UID: \"01076e6d-3d6d-41d3-ba92-c367f1540745\") " Jan 09 13:52:48 crc kubenswrapper[4919]: I0109 13:52:48.968529 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01076e6d-3d6d-41d3-ba92-c367f1540745-combined-ca-bundle\") pod \"01076e6d-3d6d-41d3-ba92-c367f1540745\" (UID: \"01076e6d-3d6d-41d3-ba92-c367f1540745\") " Jan 09 13:52:48 crc kubenswrapper[4919]: I0109 13:52:48.968588 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01076e6d-3d6d-41d3-ba92-c367f1540745-log-httpd\") pod \"01076e6d-3d6d-41d3-ba92-c367f1540745\" (UID: \"01076e6d-3d6d-41d3-ba92-c367f1540745\") " Jan 09 13:52:48 crc kubenswrapper[4919]: I0109 13:52:48.968618 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01076e6d-3d6d-41d3-ba92-c367f1540745-sg-core-conf-yaml\") pod \"01076e6d-3d6d-41d3-ba92-c367f1540745\" (UID: \"01076e6d-3d6d-41d3-ba92-c367f1540745\") " Jan 09 13:52:48 crc kubenswrapper[4919]: I0109 13:52:48.968706 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/01076e6d-3d6d-41d3-ba92-c367f1540745-scripts\") pod \"01076e6d-3d6d-41d3-ba92-c367f1540745\" (UID: \"01076e6d-3d6d-41d3-ba92-c367f1540745\") " Jan 09 13:52:48 crc kubenswrapper[4919]: I0109 13:52:48.968751 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01076e6d-3d6d-41d3-ba92-c367f1540745-run-httpd\") pod \"01076e6d-3d6d-41d3-ba92-c367f1540745\" (UID: \"01076e6d-3d6d-41d3-ba92-c367f1540745\") " Jan 09 13:52:48 crc kubenswrapper[4919]: I0109 13:52:48.970442 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01076e6d-3d6d-41d3-ba92-c367f1540745-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "01076e6d-3d6d-41d3-ba92-c367f1540745" (UID: "01076e6d-3d6d-41d3-ba92-c367f1540745"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:52:48 crc kubenswrapper[4919]: I0109 13:52:48.971835 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01076e6d-3d6d-41d3-ba92-c367f1540745-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "01076e6d-3d6d-41d3-ba92-c367f1540745" (UID: "01076e6d-3d6d-41d3-ba92-c367f1540745"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:52:48 crc kubenswrapper[4919]: I0109 13:52:48.973851 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01076e6d-3d6d-41d3-ba92-c367f1540745-kube-api-access-sqsqg" (OuterVolumeSpecName: "kube-api-access-sqsqg") pod "01076e6d-3d6d-41d3-ba92-c367f1540745" (UID: "01076e6d-3d6d-41d3-ba92-c367f1540745"). InnerVolumeSpecName "kube-api-access-sqsqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:52:48 crc kubenswrapper[4919]: I0109 13:52:48.975617 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01076e6d-3d6d-41d3-ba92-c367f1540745-scripts" (OuterVolumeSpecName: "scripts") pod "01076e6d-3d6d-41d3-ba92-c367f1540745" (UID: "01076e6d-3d6d-41d3-ba92-c367f1540745"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.000729 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.006167 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01076e6d-3d6d-41d3-ba92-c367f1540745-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "01076e6d-3d6d-41d3-ba92-c367f1540745" (UID: "01076e6d-3d6d-41d3-ba92-c367f1540745"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.071072 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/576e43fc-19df-4204-b3b1-1b829644cbf0-nova-metadata-tls-certs\") pod \"576e43fc-19df-4204-b3b1-1b829644cbf0\" (UID: \"576e43fc-19df-4204-b3b1-1b829644cbf0\") " Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.071118 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/576e43fc-19df-4204-b3b1-1b829644cbf0-combined-ca-bundle\") pod \"576e43fc-19df-4204-b3b1-1b829644cbf0\" (UID: \"576e43fc-19df-4204-b3b1-1b829644cbf0\") " Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.071259 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/576e43fc-19df-4204-b3b1-1b829644cbf0-config-data\") pod \"576e43fc-19df-4204-b3b1-1b829644cbf0\" (UID: \"576e43fc-19df-4204-b3b1-1b829644cbf0\") " Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.071491 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpv9m\" (UniqueName: \"kubernetes.io/projected/576e43fc-19df-4204-b3b1-1b829644cbf0-kube-api-access-zpv9m\") pod \"576e43fc-19df-4204-b3b1-1b829644cbf0\" (UID: \"576e43fc-19df-4204-b3b1-1b829644cbf0\") " Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.071736 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/576e43fc-19df-4204-b3b1-1b829644cbf0-logs\") pod \"576e43fc-19df-4204-b3b1-1b829644cbf0\" (UID: \"576e43fc-19df-4204-b3b1-1b829644cbf0\") " Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.072297 4919 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01076e6d-3d6d-41d3-ba92-c367f1540745-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.072320 4919 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01076e6d-3d6d-41d3-ba92-c367f1540745-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.072330 4919 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01076e6d-3d6d-41d3-ba92-c367f1540745-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.072339 4919 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01076e6d-3d6d-41d3-ba92-c367f1540745-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.072348 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqsqg\" (UniqueName: \"kubernetes.io/projected/01076e6d-3d6d-41d3-ba92-c367f1540745-kube-api-access-sqsqg\") on node \"crc\" DevicePath \"\"" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.072862 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/576e43fc-19df-4204-b3b1-1b829644cbf0-logs" (OuterVolumeSpecName: "logs") pod "576e43fc-19df-4204-b3b1-1b829644cbf0" (UID: "576e43fc-19df-4204-b3b1-1b829644cbf0"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.074515 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01076e6d-3d6d-41d3-ba92-c367f1540745-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "01076e6d-3d6d-41d3-ba92-c367f1540745" (UID: "01076e6d-3d6d-41d3-ba92-c367f1540745"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.086511 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01076e6d-3d6d-41d3-ba92-c367f1540745-config-data" (OuterVolumeSpecName: "config-data") pod "01076e6d-3d6d-41d3-ba92-c367f1540745" (UID: "01076e6d-3d6d-41d3-ba92-c367f1540745"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.086970 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/576e43fc-19df-4204-b3b1-1b829644cbf0-kube-api-access-zpv9m" (OuterVolumeSpecName: "kube-api-access-zpv9m") pod "576e43fc-19df-4204-b3b1-1b829644cbf0" (UID: "576e43fc-19df-4204-b3b1-1b829644cbf0"). InnerVolumeSpecName "kube-api-access-zpv9m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.099407 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/576e43fc-19df-4204-b3b1-1b829644cbf0-config-data" (OuterVolumeSpecName: "config-data") pod "576e43fc-19df-4204-b3b1-1b829644cbf0" (UID: "576e43fc-19df-4204-b3b1-1b829644cbf0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.112273 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/576e43fc-19df-4204-b3b1-1b829644cbf0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "576e43fc-19df-4204-b3b1-1b829644cbf0" (UID: "576e43fc-19df-4204-b3b1-1b829644cbf0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.124157 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/576e43fc-19df-4204-b3b1-1b829644cbf0-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "576e43fc-19df-4204-b3b1-1b829644cbf0" (UID: "576e43fc-19df-4204-b3b1-1b829644cbf0"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.174169 4919 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/576e43fc-19df-4204-b3b1-1b829644cbf0-logs\") on node \"crc\" DevicePath \"\"" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.174202 4919 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01076e6d-3d6d-41d3-ba92-c367f1540745-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.174228 4919 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/576e43fc-19df-4204-b3b1-1b829644cbf0-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.174238 4919 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/576e43fc-19df-4204-b3b1-1b829644cbf0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.174248 4919 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/576e43fc-19df-4204-b3b1-1b829644cbf0-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.174258 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpv9m\" (UniqueName: \"kubernetes.io/projected/576e43fc-19df-4204-b3b1-1b829644cbf0-kube-api-access-zpv9m\") on node \"crc\" DevicePath \"\"" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.174266 4919 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01076e6d-3d6d-41d3-ba92-c367f1540745-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.279637 4919 generic.go:334] "Generic (PLEG): container finished" podID="01076e6d-3d6d-41d3-ba92-c367f1540745" containerID="d0f36898849e0cc2e3e94d5c6f6de630f63d18a64212fc02cbb43284f2dcc32e" exitCode=0 Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.279712 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01076e6d-3d6d-41d3-ba92-c367f1540745","Type":"ContainerDied","Data":"d0f36898849e0cc2e3e94d5c6f6de630f63d18a64212fc02cbb43284f2dcc32e"} Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.279740 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01076e6d-3d6d-41d3-ba92-c367f1540745","Type":"ContainerDied","Data":"7f927a22cdaf2f9a28dfd891608a2b08fc365500adc4839444a222adca65708e"} Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.279757 4919 scope.go:117] "RemoveContainer" containerID="bdcfa1c6ed0cbc2437efd5cca22b1b7b242c3a977b76891eb24ea46cc6437848" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.279898 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.293507 4919 generic.go:334] "Generic (PLEG): container finished" podID="d743b55e-cd0c-4fae-9252-0b7fdba935cb" containerID="7b34bbb55691307412bc04bc16931f1224f715d822c0f0e3034ebac9ac8238f1" exitCode=143 Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.293572 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d743b55e-cd0c-4fae-9252-0b7fdba935cb","Type":"ContainerDied","Data":"7b34bbb55691307412bc04bc16931f1224f715d822c0f0e3034ebac9ac8238f1"} Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.298375 4919 generic.go:334] "Generic (PLEG): container finished" podID="576e43fc-19df-4204-b3b1-1b829644cbf0" containerID="f78743d0bd753eea3912891e5a7236cf2e560dac09decdbd15b4061d722b6223" exitCode=0 Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.298404 4919 generic.go:334] "Generic (PLEG): container finished" podID="576e43fc-19df-4204-b3b1-1b829644cbf0" containerID="b54050b8f46320ad4d4b9026fd741923ce20022c73999f128a29ee0a4a77c70a" exitCode=143 Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.298575 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"576e43fc-19df-4204-b3b1-1b829644cbf0","Type":"ContainerDied","Data":"f78743d0bd753eea3912891e5a7236cf2e560dac09decdbd15b4061d722b6223"} Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.298620 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"576e43fc-19df-4204-b3b1-1b829644cbf0","Type":"ContainerDied","Data":"b54050b8f46320ad4d4b9026fd741923ce20022c73999f128a29ee0a4a77c70a"} Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.298633 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"576e43fc-19df-4204-b3b1-1b829644cbf0","Type":"ContainerDied","Data":"f11650efe25d1c26e3f6b761f6b2dd413d251ec96dc46a2f73ff7535c8b33588"} Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.298682 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.302396 4919 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-75dbb546bf-jtvzp" podUID="a226a0fa-ed83-40e1-933e-af4c16c363b2" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.166:5353: i/o timeout" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.306393 4919 scope.go:117] "RemoveContainer" containerID="da34d5139a8bee1b580523529df89c516241a81c9e9c0f652503e2a1ecf14b50" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.331240 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.335806 4919 scope.go:117] "RemoveContainer" containerID="d0f36898849e0cc2e3e94d5c6f6de630f63d18a64212fc02cbb43284f2dcc32e" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.341111 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.352117 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.375549 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.388606 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 09 13:52:49 crc kubenswrapper[4919]: E0109 13:52:49.389011 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a226a0fa-ed83-40e1-933e-af4c16c363b2" containerName="dnsmasq-dns" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.389029 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="a226a0fa-ed83-40e1-933e-af4c16c363b2" containerName="dnsmasq-dns" Jan 09 13:52:49 crc kubenswrapper[4919]: E0109 13:52:49.389054 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c75bc15-846d-4551-9f91-8d16579b5e82" containerName="nova-manage" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.389061 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c75bc15-846d-4551-9f91-8d16579b5e82" containerName="nova-manage" Jan 09 13:52:49 crc kubenswrapper[4919]: E0109 13:52:49.389069 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01076e6d-3d6d-41d3-ba92-c367f1540745" containerName="sg-core" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.389075 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="01076e6d-3d6d-41d3-ba92-c367f1540745" containerName="sg-core" Jan 09 13:52:49 crc kubenswrapper[4919]: E0109 13:52:49.389084 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01076e6d-3d6d-41d3-ba92-c367f1540745" containerName="ceilometer-notification-agent" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.389090 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="01076e6d-3d6d-41d3-ba92-c367f1540745" containerName="ceilometer-notification-agent" Jan 09 13:52:49 crc kubenswrapper[4919]: E0109 13:52:49.389101 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a226a0fa-ed83-40e1-933e-af4c16c363b2" containerName="init" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.389107 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="a226a0fa-ed83-40e1-933e-af4c16c363b2" containerName="init" Jan 09 13:52:49 crc kubenswrapper[4919]: E0109 13:52:49.389120 4919 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="01076e6d-3d6d-41d3-ba92-c367f1540745" containerName="ceilometer-central-agent" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.389128 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="01076e6d-3d6d-41d3-ba92-c367f1540745" containerName="ceilometer-central-agent" Jan 09 13:52:49 crc kubenswrapper[4919]: E0109 13:52:49.389139 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="576e43fc-19df-4204-b3b1-1b829644cbf0" containerName="nova-metadata-metadata" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.389146 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="576e43fc-19df-4204-b3b1-1b829644cbf0" containerName="nova-metadata-metadata" Jan 09 13:52:49 crc kubenswrapper[4919]: E0109 13:52:49.389176 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="576e43fc-19df-4204-b3b1-1b829644cbf0" containerName="nova-metadata-log" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.389182 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="576e43fc-19df-4204-b3b1-1b829644cbf0" containerName="nova-metadata-log" Jan 09 13:52:49 crc kubenswrapper[4919]: E0109 13:52:49.389191 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01076e6d-3d6d-41d3-ba92-c367f1540745" containerName="proxy-httpd" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.389198 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="01076e6d-3d6d-41d3-ba92-c367f1540745" containerName="proxy-httpd" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.389375 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="01076e6d-3d6d-41d3-ba92-c367f1540745" containerName="ceilometer-central-agent" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.389391 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="01076e6d-3d6d-41d3-ba92-c367f1540745" containerName="proxy-httpd" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.389397 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="576e43fc-19df-4204-b3b1-1b829644cbf0" containerName="nova-metadata-metadata" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.389404 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="576e43fc-19df-4204-b3b1-1b829644cbf0" containerName="nova-metadata-log" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.389425 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c75bc15-846d-4551-9f91-8d16579b5e82" containerName="nova-manage" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.389439 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="01076e6d-3d6d-41d3-ba92-c367f1540745" containerName="ceilometer-notification-agent" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.389447 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="01076e6d-3d6d-41d3-ba92-c367f1540745" containerName="sg-core" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.389462 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="a226a0fa-ed83-40e1-933e-af4c16c363b2" containerName="dnsmasq-dns" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.391065 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.400595 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.401060 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.401919 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.405084 4919 scope.go:117] "RemoveContainer" containerID="f05f828e6d1ab457f145328d459b41cf9aa8929ecbd7fd40d610d93c8a36b46d" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.439573 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.455331 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.464124 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.471095 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.477282 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.479185 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.486481 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7\") " pod="openstack/ceilometer-0" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.486588 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b1b9107-6ac0-4e66-bbbd-11435fac4798-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8b1b9107-6ac0-4e66-bbbd-11435fac4798\") " pod="openstack/nova-metadata-0" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.486643 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7\") " pod="openstack/ceilometer-0" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.486735 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b1b9107-6ac0-4e66-bbbd-11435fac4798-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8b1b9107-6ac0-4e66-bbbd-11435fac4798\") " pod="openstack/nova-metadata-0" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.486808 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7-combined-ca-bundle\") pod \"ceilometer-0\" 
(UID: \"2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7\") " pod="openstack/ceilometer-0" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.486883 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b1b9107-6ac0-4e66-bbbd-11435fac4798-logs\") pod \"nova-metadata-0\" (UID: \"8b1b9107-6ac0-4e66-bbbd-11435fac4798\") " pod="openstack/nova-metadata-0" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.486920 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78nnq\" (UniqueName: \"kubernetes.io/projected/2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7-kube-api-access-78nnq\") pod \"ceilometer-0\" (UID: \"2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7\") " pod="openstack/ceilometer-0" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.486970 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7-log-httpd\") pod \"ceilometer-0\" (UID: \"2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7\") " pod="openstack/ceilometer-0" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.487071 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7-run-httpd\") pod \"ceilometer-0\" (UID: \"2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7\") " pod="openstack/ceilometer-0" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.487120 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7-scripts\") pod \"ceilometer-0\" (UID: \"2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7\") " pod="openstack/ceilometer-0" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.487161 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7-config-data\") pod \"ceilometer-0\" (UID: \"2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7\") " pod="openstack/ceilometer-0" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.487195 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsdrj\" (UniqueName: \"kubernetes.io/projected/8b1b9107-6ac0-4e66-bbbd-11435fac4798-kube-api-access-xsdrj\") pod \"nova-metadata-0\" (UID: \"8b1b9107-6ac0-4e66-bbbd-11435fac4798\") " pod="openstack/nova-metadata-0" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.488375 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b1b9107-6ac0-4e66-bbbd-11435fac4798-config-data\") pod \"nova-metadata-0\" (UID: \"8b1b9107-6ac0-4e66-bbbd-11435fac4798\") " pod="openstack/nova-metadata-0" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.505347 4919 scope.go:117] "RemoveContainer" containerID="bdcfa1c6ed0cbc2437efd5cca22b1b7b242c3a977b76891eb24ea46cc6437848" Jan 09 13:52:49 crc kubenswrapper[4919]: E0109 13:52:49.511443 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdcfa1c6ed0cbc2437efd5cca22b1b7b242c3a977b76891eb24ea46cc6437848\": container with ID starting with 
Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.511518 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdcfa1c6ed0cbc2437efd5cca22b1b7b242c3a977b76891eb24ea46cc6437848"} err="failed to get container status \"bdcfa1c6ed0cbc2437efd5cca22b1b7b242c3a977b76891eb24ea46cc6437848\": rpc error: code = NotFound desc = could not find container \"bdcfa1c6ed0cbc2437efd5cca22b1b7b242c3a977b76891eb24ea46cc6437848\": container with ID starting with bdcfa1c6ed0cbc2437efd5cca22b1b7b242c3a977b76891eb24ea46cc6437848 not found: ID does not exist"
Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.511556 4919 scope.go:117] "RemoveContainer" containerID="da34d5139a8bee1b580523529df89c516241a81c9e9c0f652503e2a1ecf14b50"
Jan 09 13:52:49 crc kubenswrapper[4919]: E0109 13:52:49.512118 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da34d5139a8bee1b580523529df89c516241a81c9e9c0f652503e2a1ecf14b50\": container with ID starting with da34d5139a8bee1b580523529df89c516241a81c9e9c0f652503e2a1ecf14b50 not found: ID does not exist" containerID="da34d5139a8bee1b580523529df89c516241a81c9e9c0f652503e2a1ecf14b50"
Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.512143 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da34d5139a8bee1b580523529df89c516241a81c9e9c0f652503e2a1ecf14b50"} err="failed to get container status \"da34d5139a8bee1b580523529df89c516241a81c9e9c0f652503e2a1ecf14b50\": rpc error: code = NotFound desc = could not find container \"da34d5139a8bee1b580523529df89c516241a81c9e9c0f652503e2a1ecf14b50\": container with ID starting with da34d5139a8bee1b580523529df89c516241a81c9e9c0f652503e2a1ecf14b50 not found: ID does not exist"
Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.512160 4919 scope.go:117] "RemoveContainer" containerID="d0f36898849e0cc2e3e94d5c6f6de630f63d18a64212fc02cbb43284f2dcc32e"
Jan 09 13:52:49 crc kubenswrapper[4919]: E0109 13:52:49.512443 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0f36898849e0cc2e3e94d5c6f6de630f63d18a64212fc02cbb43284f2dcc32e\": container with ID starting with d0f36898849e0cc2e3e94d5c6f6de630f63d18a64212fc02cbb43284f2dcc32e not found: ID does not exist" containerID="d0f36898849e0cc2e3e94d5c6f6de630f63d18a64212fc02cbb43284f2dcc32e"
Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.512479 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0f36898849e0cc2e3e94d5c6f6de630f63d18a64212fc02cbb43284f2dcc32e"} err="failed to get container status \"d0f36898849e0cc2e3e94d5c6f6de630f63d18a64212fc02cbb43284f2dcc32e\": rpc error: code = NotFound desc = could not find container \"d0f36898849e0cc2e3e94d5c6f6de630f63d18a64212fc02cbb43284f2dcc32e\": container with ID starting with d0f36898849e0cc2e3e94d5c6f6de630f63d18a64212fc02cbb43284f2dcc32e not found: ID does not exist"
Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.512498 4919 scope.go:117] "RemoveContainer" containerID="f05f828e6d1ab457f145328d459b41cf9aa8929ecbd7fd40d610d93c8a36b46d"
Jan 09 13:52:49 crc kubenswrapper[4919]: E0109 13:52:49.513305 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f05f828e6d1ab457f145328d459b41cf9aa8929ecbd7fd40d610d93c8a36b46d\": container with ID starting with f05f828e6d1ab457f145328d459b41cf9aa8929ecbd7fd40d610d93c8a36b46d not found: ID does not exist" containerID="f05f828e6d1ab457f145328d459b41cf9aa8929ecbd7fd40d610d93c8a36b46d"
Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.513344 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f05f828e6d1ab457f145328d459b41cf9aa8929ecbd7fd40d610d93c8a36b46d"} err="failed to get container status \"f05f828e6d1ab457f145328d459b41cf9aa8929ecbd7fd40d610d93c8a36b46d\": rpc error: code = NotFound desc = could not find container \"f05f828e6d1ab457f145328d459b41cf9aa8929ecbd7fd40d610d93c8a36b46d\": container with ID starting with f05f828e6d1ab457f145328d459b41cf9aa8929ecbd7fd40d610d93c8a36b46d not found: ID does not exist"
Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.513365 4919 scope.go:117] "RemoveContainer" containerID="f78743d0bd753eea3912891e5a7236cf2e560dac09decdbd15b4061d722b6223"
Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.569420 4919 scope.go:117] "RemoveContainer" containerID="b54050b8f46320ad4d4b9026fd741923ce20022c73999f128a29ee0a4a77c70a"
Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.590382 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7\") " pod="openstack/ceilometer-0"
Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.590457 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b1b9107-6ac0-4e66-bbbd-11435fac4798-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8b1b9107-6ac0-4e66-bbbd-11435fac4798\") " pod="openstack/nova-metadata-0"
Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.590478 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7\") " pod="openstack/ceilometer-0"
Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.590531 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b1b9107-6ac0-4e66-bbbd-11435fac4798-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8b1b9107-6ac0-4e66-bbbd-11435fac4798\") " pod="openstack/nova-metadata-0"
Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.590548 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7\") " pod="openstack/ceilometer-0"
Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.590566 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b1b9107-6ac0-4e66-bbbd-11435fac4798-logs\") pod \"nova-metadata-0\" (UID: \"8b1b9107-6ac0-4e66-bbbd-11435fac4798\") " pod="openstack/nova-metadata-0"
Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.590587 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78nnq\" (UniqueName: \"kubernetes.io/projected/2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7-kube-api-access-78nnq\") pod \"ceilometer-0\" (UID: \"2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7\") " pod="openstack/ceilometer-0"
Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.590610 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7-log-httpd\") pod \"ceilometer-0\" (UID: \"2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7\") " pod="openstack/ceilometer-0"
Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.590666 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7-run-httpd\") pod \"ceilometer-0\" (UID: \"2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7\") " pod="openstack/ceilometer-0"
Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.590689 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7-scripts\") pod \"ceilometer-0\" (UID: \"2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7\") " pod="openstack/ceilometer-0"
Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.590709 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7-config-data\") pod \"ceilometer-0\" (UID: \"2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7\") " pod="openstack/ceilometer-0"
Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.590729 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsdrj\" (UniqueName: \"kubernetes.io/projected/8b1b9107-6ac0-4e66-bbbd-11435fac4798-kube-api-access-xsdrj\") pod \"nova-metadata-0\" (UID: \"8b1b9107-6ac0-4e66-bbbd-11435fac4798\") " pod="openstack/nova-metadata-0"
Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.590806 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b1b9107-6ac0-4e66-bbbd-11435fac4798-config-data\") pod \"nova-metadata-0\" (UID: \"8b1b9107-6ac0-4e66-bbbd-11435fac4798\") " pod="openstack/nova-metadata-0"
Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.592158 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b1b9107-6ac0-4e66-bbbd-11435fac4798-logs\") pod \"nova-metadata-0\" (UID: \"8b1b9107-6ac0-4e66-bbbd-11435fac4798\") " pod="openstack/nova-metadata-0"
Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.592397 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7-run-httpd\") pod \"ceilometer-0\" (UID: \"2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7\") " pod="openstack/ceilometer-0"
Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.600186 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7\") " pod="openstack/ceilometer-0"
Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.601358 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b1b9107-6ac0-4e66-bbbd-11435fac4798-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8b1b9107-6ac0-4e66-bbbd-11435fac4798\") " pod="openstack/nova-metadata-0"
volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b1b9107-6ac0-4e66-bbbd-11435fac4798-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8b1b9107-6ac0-4e66-bbbd-11435fac4798\") " pod="openstack/nova-metadata-0" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.601876 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7\") " pod="openstack/ceilometer-0" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.603773 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b1b9107-6ac0-4e66-bbbd-11435fac4798-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8b1b9107-6ac0-4e66-bbbd-11435fac4798\") " pod="openstack/nova-metadata-0" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.609988 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7\") " pod="openstack/ceilometer-0" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.610505 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7-config-data\") pod \"ceilometer-0\" (UID: \"2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7\") " pod="openstack/ceilometer-0" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.616312 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7-scripts\") pod \"ceilometer-0\" (UID: \"2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7\") " pod="openstack/ceilometer-0" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.616349 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78nnq\" (UniqueName: \"kubernetes.io/projected/2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7-kube-api-access-78nnq\") pod \"ceilometer-0\" (UID: \"2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7\") " pod="openstack/ceilometer-0" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.616684 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7-log-httpd\") pod \"ceilometer-0\" (UID: \"2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7\") " pod="openstack/ceilometer-0" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.618933 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b1b9107-6ac0-4e66-bbbd-11435fac4798-config-data\") pod \"nova-metadata-0\" (UID: \"8b1b9107-6ac0-4e66-bbbd-11435fac4798\") " pod="openstack/nova-metadata-0" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.640183 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsdrj\" (UniqueName: \"kubernetes.io/projected/8b1b9107-6ac0-4e66-bbbd-11435fac4798-kube-api-access-xsdrj\") pod \"nova-metadata-0\" (UID: \"8b1b9107-6ac0-4e66-bbbd-11435fac4798\") " pod="openstack/nova-metadata-0" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.701440 4919 scope.go:117] "RemoveContainer" 
containerID="f78743d0bd753eea3912891e5a7236cf2e560dac09decdbd15b4061d722b6223" Jan 09 13:52:49 crc kubenswrapper[4919]: E0109 13:52:49.702582 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f78743d0bd753eea3912891e5a7236cf2e560dac09decdbd15b4061d722b6223\": container with ID starting with f78743d0bd753eea3912891e5a7236cf2e560dac09decdbd15b4061d722b6223 not found: ID does not exist" containerID="f78743d0bd753eea3912891e5a7236cf2e560dac09decdbd15b4061d722b6223" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.702618 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f78743d0bd753eea3912891e5a7236cf2e560dac09decdbd15b4061d722b6223"} err="failed to get container status \"f78743d0bd753eea3912891e5a7236cf2e560dac09decdbd15b4061d722b6223\": rpc error: code = NotFound desc = could not find container \"f78743d0bd753eea3912891e5a7236cf2e560dac09decdbd15b4061d722b6223\": container with ID starting with f78743d0bd753eea3912891e5a7236cf2e560dac09decdbd15b4061d722b6223 not found: ID does not exist" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.702640 4919 scope.go:117] "RemoveContainer" containerID="b54050b8f46320ad4d4b9026fd741923ce20022c73999f128a29ee0a4a77c70a" Jan 09 13:52:49 crc kubenswrapper[4919]: E0109 13:52:49.702929 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b54050b8f46320ad4d4b9026fd741923ce20022c73999f128a29ee0a4a77c70a\": container with ID starting with b54050b8f46320ad4d4b9026fd741923ce20022c73999f128a29ee0a4a77c70a not found: ID does not exist" containerID="b54050b8f46320ad4d4b9026fd741923ce20022c73999f128a29ee0a4a77c70a" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.702951 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b54050b8f46320ad4d4b9026fd741923ce20022c73999f128a29ee0a4a77c70a"} err="failed to get container status \"b54050b8f46320ad4d4b9026fd741923ce20022c73999f128a29ee0a4a77c70a\": rpc error: code = NotFound desc = could not find container \"b54050b8f46320ad4d4b9026fd741923ce20022c73999f128a29ee0a4a77c70a\": container with ID starting with b54050b8f46320ad4d4b9026fd741923ce20022c73999f128a29ee0a4a77c70a not found: ID does not exist" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.702964 4919 scope.go:117] "RemoveContainer" containerID="f78743d0bd753eea3912891e5a7236cf2e560dac09decdbd15b4061d722b6223" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.703314 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f78743d0bd753eea3912891e5a7236cf2e560dac09decdbd15b4061d722b6223"} err="failed to get container status \"f78743d0bd753eea3912891e5a7236cf2e560dac09decdbd15b4061d722b6223\": rpc error: code = NotFound desc = could not find container \"f78743d0bd753eea3912891e5a7236cf2e560dac09decdbd15b4061d722b6223\": container with ID starting with f78743d0bd753eea3912891e5a7236cf2e560dac09decdbd15b4061d722b6223 not found: ID does not exist" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.703331 4919 scope.go:117] "RemoveContainer" containerID="b54050b8f46320ad4d4b9026fd741923ce20022c73999f128a29ee0a4a77c70a" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.703564 4919 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b54050b8f46320ad4d4b9026fd741923ce20022c73999f128a29ee0a4a77c70a"} err="failed to get container status \"b54050b8f46320ad4d4b9026fd741923ce20022c73999f128a29ee0a4a77c70a\": rpc error: code = NotFound desc = could not find container \"b54050b8f46320ad4d4b9026fd741923ce20022c73999f128a29ee0a4a77c70a\": container with ID starting with b54050b8f46320ad4d4b9026fd741923ce20022c73999f128a29ee0a4a77c70a not found: ID does not exist" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.711450 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.792689 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.797617 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-dflxc" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.896479 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01ba833a-4cf7-4caf-8d94-efc794319d9a-config-data\") pod \"01ba833a-4cf7-4caf-8d94-efc794319d9a\" (UID: \"01ba833a-4cf7-4caf-8d94-efc794319d9a\") " Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.898025 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f28cs\" (UniqueName: \"kubernetes.io/projected/01ba833a-4cf7-4caf-8d94-efc794319d9a-kube-api-access-f28cs\") pod \"01ba833a-4cf7-4caf-8d94-efc794319d9a\" (UID: \"01ba833a-4cf7-4caf-8d94-efc794319d9a\") " Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.898061 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01ba833a-4cf7-4caf-8d94-efc794319d9a-combined-ca-bundle\") pod \"01ba833a-4cf7-4caf-8d94-efc794319d9a\" (UID: \"01ba833a-4cf7-4caf-8d94-efc794319d9a\") " Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.898501 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01ba833a-4cf7-4caf-8d94-efc794319d9a-scripts\") pod \"01ba833a-4cf7-4caf-8d94-efc794319d9a\" (UID: \"01ba833a-4cf7-4caf-8d94-efc794319d9a\") " Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.905475 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ba833a-4cf7-4caf-8d94-efc794319d9a-scripts" (OuterVolumeSpecName: "scripts") pod "01ba833a-4cf7-4caf-8d94-efc794319d9a" (UID: "01ba833a-4cf7-4caf-8d94-efc794319d9a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.906365 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ba833a-4cf7-4caf-8d94-efc794319d9a-kube-api-access-f28cs" (OuterVolumeSpecName: "kube-api-access-f28cs") pod "01ba833a-4cf7-4caf-8d94-efc794319d9a" (UID: "01ba833a-4cf7-4caf-8d94-efc794319d9a"). InnerVolumeSpecName "kube-api-access-f28cs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.943353 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ba833a-4cf7-4caf-8d94-efc794319d9a-config-data" (OuterVolumeSpecName: "config-data") pod "01ba833a-4cf7-4caf-8d94-efc794319d9a" (UID: "01ba833a-4cf7-4caf-8d94-efc794319d9a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:52:49 crc kubenswrapper[4919]: I0109 13:52:49.984746 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ba833a-4cf7-4caf-8d94-efc794319d9a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "01ba833a-4cf7-4caf-8d94-efc794319d9a" (UID: "01ba833a-4cf7-4caf-8d94-efc794319d9a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:52:50 crc kubenswrapper[4919]: I0109 13:52:50.016456 4919 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01ba833a-4cf7-4caf-8d94-efc794319d9a-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 13:52:50 crc kubenswrapper[4919]: I0109 13:52:50.016496 4919 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01ba833a-4cf7-4caf-8d94-efc794319d9a-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 13:52:50 crc kubenswrapper[4919]: I0109 13:52:50.016510 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f28cs\" (UniqueName: \"kubernetes.io/projected/01ba833a-4cf7-4caf-8d94-efc794319d9a-kube-api-access-f28cs\") on node \"crc\" DevicePath \"\"" Jan 09 13:52:50 crc kubenswrapper[4919]: I0109 13:52:50.017563 4919 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01ba833a-4cf7-4caf-8d94-efc794319d9a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 13:52:50 crc kubenswrapper[4919]: I0109 13:52:50.311908 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 09 13:52:50 crc kubenswrapper[4919]: I0109 13:52:50.327867 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-dflxc" event={"ID":"01ba833a-4cf7-4caf-8d94-efc794319d9a","Type":"ContainerDied","Data":"b869cae772d6aa2f06ed12829426992045a5d08ce0e4dad3ec7b9845ffd3e9d9"} Jan 09 13:52:50 crc kubenswrapper[4919]: I0109 13:52:50.327913 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b869cae772d6aa2f06ed12829426992045a5d08ce0e4dad3ec7b9845ffd3e9d9" Jan 09 13:52:50 crc kubenswrapper[4919]: I0109 13:52:50.327975 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-dflxc" Jan 09 13:52:50 crc kubenswrapper[4919]: I0109 13:52:50.384092 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 09 13:52:50 crc kubenswrapper[4919]: E0109 13:52:50.384557 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01ba833a-4cf7-4caf-8d94-efc794319d9a" containerName="nova-cell1-conductor-db-sync" Jan 09 13:52:50 crc kubenswrapper[4919]: I0109 13:52:50.384573 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="01ba833a-4cf7-4caf-8d94-efc794319d9a" containerName="nova-cell1-conductor-db-sync" Jan 09 13:52:50 crc kubenswrapper[4919]: I0109 13:52:50.384855 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="01ba833a-4cf7-4caf-8d94-efc794319d9a" containerName="nova-cell1-conductor-db-sync" Jan 09 13:52:50 crc kubenswrapper[4919]: I0109 13:52:50.385683 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 09 13:52:50 crc kubenswrapper[4919]: I0109 13:52:50.388793 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 09 13:52:50 crc kubenswrapper[4919]: I0109 13:52:50.416115 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 09 13:52:50 crc kubenswrapper[4919]: I0109 13:52:50.429245 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9crp\" (UniqueName: \"kubernetes.io/projected/8c9fed7c-6744-4cce-b80c-21ef4352ca7b-kube-api-access-n9crp\") pod \"nova-cell1-conductor-0\" (UID: \"8c9fed7c-6744-4cce-b80c-21ef4352ca7b\") " pod="openstack/nova-cell1-conductor-0" Jan 09 13:52:50 crc kubenswrapper[4919]: I0109 13:52:50.429482 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c9fed7c-6744-4cce-b80c-21ef4352ca7b-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8c9fed7c-6744-4cce-b80c-21ef4352ca7b\") " pod="openstack/nova-cell1-conductor-0" Jan 09 13:52:50 crc kubenswrapper[4919]: I0109 13:52:50.429639 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c9fed7c-6744-4cce-b80c-21ef4352ca7b-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"8c9fed7c-6744-4cce-b80c-21ef4352ca7b\") " pod="openstack/nova-cell1-conductor-0" Jan 09 13:52:50 crc kubenswrapper[4919]: I0109 13:52:50.492366 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 13:52:50 crc kubenswrapper[4919]: W0109 13:52:50.503401 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b1b9107_6ac0_4e66_bbbd_11435fac4798.slice/crio-80cdd50aa5f92f697a5746517cebbae06074c5bf690f479283800475bf7d2ef2 WatchSource:0}: Error finding container 80cdd50aa5f92f697a5746517cebbae06074c5bf690f479283800475bf7d2ef2: Status 404 returned error can't find the container with id 80cdd50aa5f92f697a5746517cebbae06074c5bf690f479283800475bf7d2ef2 Jan 09 13:52:50 crc kubenswrapper[4919]: I0109 13:52:50.531470 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9crp\" (UniqueName: \"kubernetes.io/projected/8c9fed7c-6744-4cce-b80c-21ef4352ca7b-kube-api-access-n9crp\") pod 
\"nova-cell1-conductor-0\" (UID: \"8c9fed7c-6744-4cce-b80c-21ef4352ca7b\") " pod="openstack/nova-cell1-conductor-0" Jan 09 13:52:50 crc kubenswrapper[4919]: I0109 13:52:50.531612 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c9fed7c-6744-4cce-b80c-21ef4352ca7b-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8c9fed7c-6744-4cce-b80c-21ef4352ca7b\") " pod="openstack/nova-cell1-conductor-0" Jan 09 13:52:50 crc kubenswrapper[4919]: I0109 13:52:50.531688 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c9fed7c-6744-4cce-b80c-21ef4352ca7b-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"8c9fed7c-6744-4cce-b80c-21ef4352ca7b\") " pod="openstack/nova-cell1-conductor-0" Jan 09 13:52:50 crc kubenswrapper[4919]: I0109 13:52:50.536645 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c9fed7c-6744-4cce-b80c-21ef4352ca7b-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8c9fed7c-6744-4cce-b80c-21ef4352ca7b\") " pod="openstack/nova-cell1-conductor-0" Jan 09 13:52:50 crc kubenswrapper[4919]: I0109 13:52:50.536702 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c9fed7c-6744-4cce-b80c-21ef4352ca7b-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"8c9fed7c-6744-4cce-b80c-21ef4352ca7b\") " pod="openstack/nova-cell1-conductor-0" Jan 09 13:52:50 crc kubenswrapper[4919]: I0109 13:52:50.574288 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9crp\" (UniqueName: \"kubernetes.io/projected/8c9fed7c-6744-4cce-b80c-21ef4352ca7b-kube-api-access-n9crp\") pod \"nova-cell1-conductor-0\" (UID: \"8c9fed7c-6744-4cce-b80c-21ef4352ca7b\") " pod="openstack/nova-cell1-conductor-0" Jan 09 13:52:50 crc kubenswrapper[4919]: I0109 13:52:50.750815 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 09 13:52:50 crc kubenswrapper[4919]: I0109 13:52:50.779975 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01076e6d-3d6d-41d3-ba92-c367f1540745" path="/var/lib/kubelet/pods/01076e6d-3d6d-41d3-ba92-c367f1540745/volumes" Jan 09 13:52:50 crc kubenswrapper[4919]: I0109 13:52:50.780999 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="576e43fc-19df-4204-b3b1-1b829644cbf0" path="/var/lib/kubelet/pods/576e43fc-19df-4204-b3b1-1b829644cbf0/volumes" Jan 09 13:52:50 crc kubenswrapper[4919]: I0109 13:52:50.944662 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 09 13:52:51 crc kubenswrapper[4919]: I0109 13:52:51.051150 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrgc4\" (UniqueName: \"kubernetes.io/projected/8cacdea9-e934-4647-b39b-073c88c9b5a8-kube-api-access-xrgc4\") pod \"8cacdea9-e934-4647-b39b-073c88c9b5a8\" (UID: \"8cacdea9-e934-4647-b39b-073c88c9b5a8\") " Jan 09 13:52:51 crc kubenswrapper[4919]: I0109 13:52:51.051351 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cacdea9-e934-4647-b39b-073c88c9b5a8-combined-ca-bundle\") pod \"8cacdea9-e934-4647-b39b-073c88c9b5a8\" (UID: \"8cacdea9-e934-4647-b39b-073c88c9b5a8\") " Jan 09 13:52:51 crc kubenswrapper[4919]: I0109 13:52:51.051456 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cacdea9-e934-4647-b39b-073c88c9b5a8-config-data\") pod \"8cacdea9-e934-4647-b39b-073c88c9b5a8\" (UID: \"8cacdea9-e934-4647-b39b-073c88c9b5a8\") " Jan 09 13:52:51 crc kubenswrapper[4919]: I0109 13:52:51.058484 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cacdea9-e934-4647-b39b-073c88c9b5a8-kube-api-access-xrgc4" (OuterVolumeSpecName: "kube-api-access-xrgc4") pod "8cacdea9-e934-4647-b39b-073c88c9b5a8" (UID: "8cacdea9-e934-4647-b39b-073c88c9b5a8"). InnerVolumeSpecName "kube-api-access-xrgc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:52:51 crc kubenswrapper[4919]: I0109 13:52:51.158329 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrgc4\" (UniqueName: \"kubernetes.io/projected/8cacdea9-e934-4647-b39b-073c88c9b5a8-kube-api-access-xrgc4\") on node \"crc\" DevicePath \"\"" Jan 09 13:52:51 crc kubenswrapper[4919]: I0109 13:52:51.175511 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cacdea9-e934-4647-b39b-073c88c9b5a8-config-data" (OuterVolumeSpecName: "config-data") pod "8cacdea9-e934-4647-b39b-073c88c9b5a8" (UID: "8cacdea9-e934-4647-b39b-073c88c9b5a8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:52:51 crc kubenswrapper[4919]: I0109 13:52:51.184606 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cacdea9-e934-4647-b39b-073c88c9b5a8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8cacdea9-e934-4647-b39b-073c88c9b5a8" (UID: "8cacdea9-e934-4647-b39b-073c88c9b5a8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:52:51 crc kubenswrapper[4919]: I0109 13:52:51.246653 4919 patch_prober.go:28] interesting pod/machine-config-daemon-9m5lv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 13:52:51 crc kubenswrapper[4919]: I0109 13:52:51.246737 4919 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 13:52:51 crc kubenswrapper[4919]: I0109 13:52:51.260422 4919 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cacdea9-e934-4647-b39b-073c88c9b5a8-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 13:52:51 crc kubenswrapper[4919]: I0109 13:52:51.260465 4919 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cacdea9-e934-4647-b39b-073c88c9b5a8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 13:52:51 crc kubenswrapper[4919]: I0109 13:52:51.298083 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 09 13:52:51 crc kubenswrapper[4919]: W0109 13:52:51.302636 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c9fed7c_6744_4cce_b80c_21ef4352ca7b.slice/crio-cc3fb1c406ba128189eb8a722e42f504245658533dcf37ed9da587fb13cb1a36 WatchSource:0}: Error finding container cc3fb1c406ba128189eb8a722e42f504245658533dcf37ed9da587fb13cb1a36: Status 404 returned error can't find the container with id cc3fb1c406ba128189eb8a722e42f504245658533dcf37ed9da587fb13cb1a36 Jan 09 13:52:51 crc kubenswrapper[4919]: I0109 13:52:51.342124 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8b1b9107-6ac0-4e66-bbbd-11435fac4798","Type":"ContainerStarted","Data":"179a9ecc821e716887543484cbe2ad170d743ccda6550091017cd291e5872fd7"} Jan 09 13:52:51 crc kubenswrapper[4919]: I0109 13:52:51.342173 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8b1b9107-6ac0-4e66-bbbd-11435fac4798","Type":"ContainerStarted","Data":"cf3f84382ba0ab543c78d4fe6699003014a42ef2598963df0d46947805b57b7c"} Jan 09 13:52:51 crc kubenswrapper[4919]: I0109 13:52:51.342188 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8b1b9107-6ac0-4e66-bbbd-11435fac4798","Type":"ContainerStarted","Data":"80cdd50aa5f92f697a5746517cebbae06074c5bf690f479283800475bf7d2ef2"} Jan 09 13:52:51 crc kubenswrapper[4919]: I0109 13:52:51.345003 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"8c9fed7c-6744-4cce-b80c-21ef4352ca7b","Type":"ContainerStarted","Data":"cc3fb1c406ba128189eb8a722e42f504245658533dcf37ed9da587fb13cb1a36"} Jan 09 13:52:51 crc kubenswrapper[4919]: I0109 13:52:51.347941 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7","Type":"ContainerStarted","Data":"5fa4632cf0ebf3408906c8dc56968ba0c3afd3b499c709effba7701cb53e61d3"} Jan 09 
13:52:51 crc kubenswrapper[4919]: I0109 13:52:51.347980 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7","Type":"ContainerStarted","Data":"81f630dc88a75cf2762b5e5f768ca97a304b8dd021b6e6d0751fd4db3581f82f"} Jan 09 13:52:51 crc kubenswrapper[4919]: I0109 13:52:51.351241 4919 generic.go:334] "Generic (PLEG): container finished" podID="8cacdea9-e934-4647-b39b-073c88c9b5a8" containerID="b9d5741cf3c4c736dbdef25a6fe6e7cf081feea1bc76a7e0bb970d82284bdbc8" exitCode=0 Jan 09 13:52:51 crc kubenswrapper[4919]: I0109 13:52:51.351295 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 09 13:52:51 crc kubenswrapper[4919]: I0109 13:52:51.351297 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8cacdea9-e934-4647-b39b-073c88c9b5a8","Type":"ContainerDied","Data":"b9d5741cf3c4c736dbdef25a6fe6e7cf081feea1bc76a7e0bb970d82284bdbc8"} Jan 09 13:52:51 crc kubenswrapper[4919]: I0109 13:52:51.351418 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8cacdea9-e934-4647-b39b-073c88c9b5a8","Type":"ContainerDied","Data":"a5f9cbc7a13f12bbe58b6cb5681b2b32647bd4d8a4e65d6aa088221ebf223fed"} Jan 09 13:52:51 crc kubenswrapper[4919]: I0109 13:52:51.351439 4919 scope.go:117] "RemoveContainer" containerID="b9d5741cf3c4c736dbdef25a6fe6e7cf081feea1bc76a7e0bb970d82284bdbc8" Jan 09 13:52:51 crc kubenswrapper[4919]: I0109 13:52:51.378055 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.378036549 podStartE2EDuration="2.378036549s" podCreationTimestamp="2026-01-09 13:52:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:52:51.371884216 +0000 UTC m=+1350.919723666" watchObservedRunningTime="2026-01-09 13:52:51.378036549 +0000 UTC m=+1350.925875999" Jan 09 13:52:51 crc kubenswrapper[4919]: I0109 13:52:51.382279 4919 scope.go:117] "RemoveContainer" containerID="b9d5741cf3c4c736dbdef25a6fe6e7cf081feea1bc76a7e0bb970d82284bdbc8" Jan 09 13:52:51 crc kubenswrapper[4919]: E0109 13:52:51.383122 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9d5741cf3c4c736dbdef25a6fe6e7cf081feea1bc76a7e0bb970d82284bdbc8\": container with ID starting with b9d5741cf3c4c736dbdef25a6fe6e7cf081feea1bc76a7e0bb970d82284bdbc8 not found: ID does not exist" containerID="b9d5741cf3c4c736dbdef25a6fe6e7cf081feea1bc76a7e0bb970d82284bdbc8" Jan 09 13:52:51 crc kubenswrapper[4919]: I0109 13:52:51.383179 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9d5741cf3c4c736dbdef25a6fe6e7cf081feea1bc76a7e0bb970d82284bdbc8"} err="failed to get container status \"b9d5741cf3c4c736dbdef25a6fe6e7cf081feea1bc76a7e0bb970d82284bdbc8\": rpc error: code = NotFound desc = could not find container \"b9d5741cf3c4c736dbdef25a6fe6e7cf081feea1bc76a7e0bb970d82284bdbc8\": container with ID starting with b9d5741cf3c4c736dbdef25a6fe6e7cf081feea1bc76a7e0bb970d82284bdbc8 not found: ID does not exist" Jan 09 13:52:51 crc kubenswrapper[4919]: I0109 13:52:51.406069 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 13:52:51 crc kubenswrapper[4919]: I0109 13:52:51.420982 4919 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/nova-scheduler-0"] Jan 09 13:52:51 crc kubenswrapper[4919]: I0109 13:52:51.436345 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 13:52:51 crc kubenswrapper[4919]: E0109 13:52:51.436952 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cacdea9-e934-4647-b39b-073c88c9b5a8" containerName="nova-scheduler-scheduler" Jan 09 13:52:51 crc kubenswrapper[4919]: I0109 13:52:51.436975 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cacdea9-e934-4647-b39b-073c88c9b5a8" containerName="nova-scheduler-scheduler" Jan 09 13:52:51 crc kubenswrapper[4919]: I0109 13:52:51.437608 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cacdea9-e934-4647-b39b-073c88c9b5a8" containerName="nova-scheduler-scheduler" Jan 09 13:52:51 crc kubenswrapper[4919]: I0109 13:52:51.438814 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 09 13:52:51 crc kubenswrapper[4919]: I0109 13:52:51.442495 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 09 13:52:51 crc kubenswrapper[4919]: I0109 13:52:51.450448 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 13:52:51 crc kubenswrapper[4919]: I0109 13:52:51.566803 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cfde4b8-ca80-46d0-9e92-ca9102760082-config-data\") pod \"nova-scheduler-0\" (UID: \"2cfde4b8-ca80-46d0-9e92-ca9102760082\") " pod="openstack/nova-scheduler-0" Jan 09 13:52:51 crc kubenswrapper[4919]: I0109 13:52:51.566889 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cfde4b8-ca80-46d0-9e92-ca9102760082-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2cfde4b8-ca80-46d0-9e92-ca9102760082\") " pod="openstack/nova-scheduler-0" Jan 09 13:52:51 crc kubenswrapper[4919]: I0109 13:52:51.566982 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdcf4\" (UniqueName: \"kubernetes.io/projected/2cfde4b8-ca80-46d0-9e92-ca9102760082-kube-api-access-vdcf4\") pod \"nova-scheduler-0\" (UID: \"2cfde4b8-ca80-46d0-9e92-ca9102760082\") " pod="openstack/nova-scheduler-0" Jan 09 13:52:51 crc kubenswrapper[4919]: I0109 13:52:51.671981 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cfde4b8-ca80-46d0-9e92-ca9102760082-config-data\") pod \"nova-scheduler-0\" (UID: \"2cfde4b8-ca80-46d0-9e92-ca9102760082\") " pod="openstack/nova-scheduler-0" Jan 09 13:52:51 crc kubenswrapper[4919]: I0109 13:52:51.672051 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cfde4b8-ca80-46d0-9e92-ca9102760082-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2cfde4b8-ca80-46d0-9e92-ca9102760082\") " pod="openstack/nova-scheduler-0" Jan 09 13:52:51 crc kubenswrapper[4919]: I0109 13:52:51.672104 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdcf4\" (UniqueName: \"kubernetes.io/projected/2cfde4b8-ca80-46d0-9e92-ca9102760082-kube-api-access-vdcf4\") pod \"nova-scheduler-0\" (UID: \"2cfde4b8-ca80-46d0-9e92-ca9102760082\") " 
pod="openstack/nova-scheduler-0" Jan 09 13:52:51 crc kubenswrapper[4919]: I0109 13:52:51.676335 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cfde4b8-ca80-46d0-9e92-ca9102760082-config-data\") pod \"nova-scheduler-0\" (UID: \"2cfde4b8-ca80-46d0-9e92-ca9102760082\") " pod="openstack/nova-scheduler-0" Jan 09 13:52:51 crc kubenswrapper[4919]: I0109 13:52:51.680780 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cfde4b8-ca80-46d0-9e92-ca9102760082-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2cfde4b8-ca80-46d0-9e92-ca9102760082\") " pod="openstack/nova-scheduler-0" Jan 09 13:52:51 crc kubenswrapper[4919]: I0109 13:52:51.690015 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdcf4\" (UniqueName: \"kubernetes.io/projected/2cfde4b8-ca80-46d0-9e92-ca9102760082-kube-api-access-vdcf4\") pod \"nova-scheduler-0\" (UID: \"2cfde4b8-ca80-46d0-9e92-ca9102760082\") " pod="openstack/nova-scheduler-0" Jan 09 13:52:51 crc kubenswrapper[4919]: I0109 13:52:51.792344 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 09 13:52:52 crc kubenswrapper[4919]: E0109 13:52:52.067450 4919 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd743b55e_cd0c_4fae_9252_0b7fdba935cb.slice/crio-conmon-83535fbcb7a353ea1e88e3fa91e6d756065b846945a82605c30f9f9932b85a6f.scope\": RecentStats: unable to find data in memory cache]" Jan 09 13:52:52 crc kubenswrapper[4919]: I0109 13:52:52.292708 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 13:52:52 crc kubenswrapper[4919]: I0109 13:52:52.377756 4919 generic.go:334] "Generic (PLEG): container finished" podID="d743b55e-cd0c-4fae-9252-0b7fdba935cb" containerID="83535fbcb7a353ea1e88e3fa91e6d756065b846945a82605c30f9f9932b85a6f" exitCode=0 Jan 09 13:52:52 crc kubenswrapper[4919]: I0109 13:52:52.377948 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d743b55e-cd0c-4fae-9252-0b7fdba935cb","Type":"ContainerDied","Data":"83535fbcb7a353ea1e88e3fa91e6d756065b846945a82605c30f9f9932b85a6f"} Jan 09 13:52:52 crc kubenswrapper[4919]: I0109 13:52:52.380458 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"8c9fed7c-6744-4cce-b80c-21ef4352ca7b","Type":"ContainerStarted","Data":"41ceea947e23e3cefa2e0d5b13e0aceeaca78fab6ef2a1a03eab4e0eeb342520"} Jan 09 13:52:52 crc kubenswrapper[4919]: I0109 13:52:52.380550 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 09 13:52:52 crc kubenswrapper[4919]: I0109 13:52:52.382095 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7","Type":"ContainerStarted","Data":"93a45b44cea53beb5ebaa71c7608629e879e2649d7f41f240774f57b3f0f6d7a"} Jan 09 13:52:52 crc kubenswrapper[4919]: I0109 13:52:52.386457 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2cfde4b8-ca80-46d0-9e92-ca9102760082","Type":"ContainerStarted","Data":"27f2d40e7dab20d6f6b71106a02551e337060f5ff2b528a856817089c9f56632"} Jan 09 13:52:52 crc kubenswrapper[4919]: I0109 13:52:52.411805 
4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.411785102 podStartE2EDuration="2.411785102s" podCreationTimestamp="2026-01-09 13:52:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:52:52.403428614 +0000 UTC m=+1351.951268064" watchObservedRunningTime="2026-01-09 13:52:52.411785102 +0000 UTC m=+1351.959624552" Jan 09 13:52:52 crc kubenswrapper[4919]: I0109 13:52:52.520455 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 09 13:52:52 crc kubenswrapper[4919]: I0109 13:52:52.610945 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d743b55e-cd0c-4fae-9252-0b7fdba935cb-combined-ca-bundle\") pod \"d743b55e-cd0c-4fae-9252-0b7fdba935cb\" (UID: \"d743b55e-cd0c-4fae-9252-0b7fdba935cb\") " Jan 09 13:52:52 crc kubenswrapper[4919]: I0109 13:52:52.611035 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d743b55e-cd0c-4fae-9252-0b7fdba935cb-config-data\") pod \"d743b55e-cd0c-4fae-9252-0b7fdba935cb\" (UID: \"d743b55e-cd0c-4fae-9252-0b7fdba935cb\") " Jan 09 13:52:52 crc kubenswrapper[4919]: I0109 13:52:52.611328 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d743b55e-cd0c-4fae-9252-0b7fdba935cb-logs\") pod \"d743b55e-cd0c-4fae-9252-0b7fdba935cb\" (UID: \"d743b55e-cd0c-4fae-9252-0b7fdba935cb\") " Jan 09 13:52:52 crc kubenswrapper[4919]: I0109 13:52:52.611486 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qdrm\" (UniqueName: \"kubernetes.io/projected/d743b55e-cd0c-4fae-9252-0b7fdba935cb-kube-api-access-2qdrm\") pod \"d743b55e-cd0c-4fae-9252-0b7fdba935cb\" (UID: \"d743b55e-cd0c-4fae-9252-0b7fdba935cb\") " Jan 09 13:52:52 crc kubenswrapper[4919]: I0109 13:52:52.611927 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d743b55e-cd0c-4fae-9252-0b7fdba935cb-logs" (OuterVolumeSpecName: "logs") pod "d743b55e-cd0c-4fae-9252-0b7fdba935cb" (UID: "d743b55e-cd0c-4fae-9252-0b7fdba935cb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:52:52 crc kubenswrapper[4919]: I0109 13:52:52.635588 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d743b55e-cd0c-4fae-9252-0b7fdba935cb-kube-api-access-2qdrm" (OuterVolumeSpecName: "kube-api-access-2qdrm") pod "d743b55e-cd0c-4fae-9252-0b7fdba935cb" (UID: "d743b55e-cd0c-4fae-9252-0b7fdba935cb"). InnerVolumeSpecName "kube-api-access-2qdrm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:52:52 crc kubenswrapper[4919]: I0109 13:52:52.649413 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d743b55e-cd0c-4fae-9252-0b7fdba935cb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d743b55e-cd0c-4fae-9252-0b7fdba935cb" (UID: "d743b55e-cd0c-4fae-9252-0b7fdba935cb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:52:52 crc kubenswrapper[4919]: I0109 13:52:52.656303 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d743b55e-cd0c-4fae-9252-0b7fdba935cb-config-data" (OuterVolumeSpecName: "config-data") pod "d743b55e-cd0c-4fae-9252-0b7fdba935cb" (UID: "d743b55e-cd0c-4fae-9252-0b7fdba935cb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:52:52 crc kubenswrapper[4919]: I0109 13:52:52.713751 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qdrm\" (UniqueName: \"kubernetes.io/projected/d743b55e-cd0c-4fae-9252-0b7fdba935cb-kube-api-access-2qdrm\") on node \"crc\" DevicePath \"\"" Jan 09 13:52:52 crc kubenswrapper[4919]: I0109 13:52:52.713793 4919 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d743b55e-cd0c-4fae-9252-0b7fdba935cb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 13:52:52 crc kubenswrapper[4919]: I0109 13:52:52.713807 4919 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d743b55e-cd0c-4fae-9252-0b7fdba935cb-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 13:52:52 crc kubenswrapper[4919]: I0109 13:52:52.713820 4919 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d743b55e-cd0c-4fae-9252-0b7fdba935cb-logs\") on node \"crc\" DevicePath \"\"" Jan 09 13:52:52 crc kubenswrapper[4919]: I0109 13:52:52.763105 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cacdea9-e934-4647-b39b-073c88c9b5a8" path="/var/lib/kubelet/pods/8cacdea9-e934-4647-b39b-073c88c9b5a8/volumes" Jan 09 13:52:53 crc kubenswrapper[4919]: I0109 13:52:53.400493 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7","Type":"ContainerStarted","Data":"eb3ee65ed65c478baad46326f8ec89e0b6396970ca691ed81b39bfab857f0687"} Jan 09 13:52:53 crc kubenswrapper[4919]: I0109 13:52:53.403018 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2cfde4b8-ca80-46d0-9e92-ca9102760082","Type":"ContainerStarted","Data":"1a18feb30c2766e8e0968242218506d8d38bda78e0350697e39dcc5eb675a325"} Jan 09 13:52:53 crc kubenswrapper[4919]: I0109 13:52:53.405356 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 09 13:52:53 crc kubenswrapper[4919]: I0109 13:52:53.406017 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d743b55e-cd0c-4fae-9252-0b7fdba935cb","Type":"ContainerDied","Data":"a2d1671b39299cbc5a1812203bc42177c5acb13d4dea04bc40fc920d1e419441"} Jan 09 13:52:53 crc kubenswrapper[4919]: I0109 13:52:53.406064 4919 scope.go:117] "RemoveContainer" containerID="83535fbcb7a353ea1e88e3fa91e6d756065b846945a82605c30f9f9932b85a6f" Jan 09 13:52:53 crc kubenswrapper[4919]: I0109 13:52:53.423718 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.423700632 podStartE2EDuration="2.423700632s" podCreationTimestamp="2026-01-09 13:52:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:52:53.422989954 +0000 UTC m=+1352.970829424" watchObservedRunningTime="2026-01-09 13:52:53.423700632 +0000 UTC m=+1352.971540072" Jan 09 13:52:53 crc kubenswrapper[4919]: I0109 13:52:53.427685 4919 scope.go:117] "RemoveContainer" containerID="7b34bbb55691307412bc04bc16931f1224f715d822c0f0e3034ebac9ac8238f1" Jan 09 13:52:53 crc kubenswrapper[4919]: I0109 13:52:53.454474 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 09 13:52:53 crc kubenswrapper[4919]: I0109 13:52:53.463166 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 09 13:52:53 crc kubenswrapper[4919]: I0109 13:52:53.474392 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 09 13:52:53 crc kubenswrapper[4919]: I0109 13:52:53.494760 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 09 13:52:53 crc kubenswrapper[4919]: E0109 13:52:53.495166 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d743b55e-cd0c-4fae-9252-0b7fdba935cb" containerName="nova-api-log" Jan 09 13:52:53 crc kubenswrapper[4919]: I0109 13:52:53.495183 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="d743b55e-cd0c-4fae-9252-0b7fdba935cb" containerName="nova-api-log" Jan 09 13:52:53 crc kubenswrapper[4919]: E0109 13:52:53.495223 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d743b55e-cd0c-4fae-9252-0b7fdba935cb" containerName="nova-api-api" Jan 09 13:52:53 crc kubenswrapper[4919]: I0109 13:52:53.495231 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="d743b55e-cd0c-4fae-9252-0b7fdba935cb" containerName="nova-api-api" Jan 09 13:52:53 crc kubenswrapper[4919]: I0109 13:52:53.495438 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="d743b55e-cd0c-4fae-9252-0b7fdba935cb" containerName="nova-api-log" Jan 09 13:52:53 crc kubenswrapper[4919]: I0109 13:52:53.495453 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="d743b55e-cd0c-4fae-9252-0b7fdba935cb" containerName="nova-api-api" Jan 09 13:52:53 crc kubenswrapper[4919]: I0109 13:52:53.496512 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 09 13:52:53 crc kubenswrapper[4919]: I0109 13:52:53.498661 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 09 13:52:53 crc kubenswrapper[4919]: I0109 13:52:53.518253 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 09 13:52:53 crc kubenswrapper[4919]: I0109 13:52:53.636381 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeb36a2d-b775-453a-8dce-5b778571a0ce-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"eeb36a2d-b775-453a-8dce-5b778571a0ce\") " pod="openstack/nova-api-0" Jan 09 13:52:53 crc kubenswrapper[4919]: I0109 13:52:53.636448 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59kwj\" (UniqueName: \"kubernetes.io/projected/eeb36a2d-b775-453a-8dce-5b778571a0ce-kube-api-access-59kwj\") pod \"nova-api-0\" (UID: \"eeb36a2d-b775-453a-8dce-5b778571a0ce\") " pod="openstack/nova-api-0" Jan 09 13:52:53 crc kubenswrapper[4919]: I0109 13:52:53.636556 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eeb36a2d-b775-453a-8dce-5b778571a0ce-logs\") pod \"nova-api-0\" (UID: \"eeb36a2d-b775-453a-8dce-5b778571a0ce\") " pod="openstack/nova-api-0" Jan 09 13:52:53 crc kubenswrapper[4919]: I0109 13:52:53.636623 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eeb36a2d-b775-453a-8dce-5b778571a0ce-config-data\") pod \"nova-api-0\" (UID: \"eeb36a2d-b775-453a-8dce-5b778571a0ce\") " pod="openstack/nova-api-0" Jan 09 13:52:53 crc kubenswrapper[4919]: I0109 13:52:53.738826 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eeb36a2d-b775-453a-8dce-5b778571a0ce-logs\") pod \"nova-api-0\" (UID: \"eeb36a2d-b775-453a-8dce-5b778571a0ce\") " pod="openstack/nova-api-0" Jan 09 13:52:53 crc kubenswrapper[4919]: I0109 13:52:53.738899 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eeb36a2d-b775-453a-8dce-5b778571a0ce-config-data\") pod \"nova-api-0\" (UID: \"eeb36a2d-b775-453a-8dce-5b778571a0ce\") " pod="openstack/nova-api-0" Jan 09 13:52:53 crc kubenswrapper[4919]: I0109 13:52:53.738962 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeb36a2d-b775-453a-8dce-5b778571a0ce-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"eeb36a2d-b775-453a-8dce-5b778571a0ce\") " pod="openstack/nova-api-0" Jan 09 13:52:53 crc kubenswrapper[4919]: I0109 13:52:53.738994 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59kwj\" (UniqueName: \"kubernetes.io/projected/eeb36a2d-b775-453a-8dce-5b778571a0ce-kube-api-access-59kwj\") pod \"nova-api-0\" (UID: \"eeb36a2d-b775-453a-8dce-5b778571a0ce\") " pod="openstack/nova-api-0" Jan 09 13:52:53 crc kubenswrapper[4919]: I0109 13:52:53.739521 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eeb36a2d-b775-453a-8dce-5b778571a0ce-logs\") pod \"nova-api-0\" (UID: \"eeb36a2d-b775-453a-8dce-5b778571a0ce\") " 
pod="openstack/nova-api-0" Jan 09 13:52:53 crc kubenswrapper[4919]: I0109 13:52:53.744937 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeb36a2d-b775-453a-8dce-5b778571a0ce-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"eeb36a2d-b775-453a-8dce-5b778571a0ce\") " pod="openstack/nova-api-0" Jan 09 13:52:53 crc kubenswrapper[4919]: I0109 13:52:53.745848 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eeb36a2d-b775-453a-8dce-5b778571a0ce-config-data\") pod \"nova-api-0\" (UID: \"eeb36a2d-b775-453a-8dce-5b778571a0ce\") " pod="openstack/nova-api-0" Jan 09 13:52:53 crc kubenswrapper[4919]: I0109 13:52:53.768117 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59kwj\" (UniqueName: \"kubernetes.io/projected/eeb36a2d-b775-453a-8dce-5b778571a0ce-kube-api-access-59kwj\") pod \"nova-api-0\" (UID: \"eeb36a2d-b775-453a-8dce-5b778571a0ce\") " pod="openstack/nova-api-0" Jan 09 13:52:53 crc kubenswrapper[4919]: I0109 13:52:53.814738 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 09 13:52:54 crc kubenswrapper[4919]: I0109 13:52:54.321917 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 09 13:52:54 crc kubenswrapper[4919]: W0109 13:52:54.341331 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeeb36a2d_b775_453a_8dce_5b778571a0ce.slice/crio-2c0a6bcedbdbd63a311597f09f1db6d325e6a90024138014868ab8a225e30624 WatchSource:0}: Error finding container 2c0a6bcedbdbd63a311597f09f1db6d325e6a90024138014868ab8a225e30624: Status 404 returned error can't find the container with id 2c0a6bcedbdbd63a311597f09f1db6d325e6a90024138014868ab8a225e30624 Jan 09 13:52:54 crc kubenswrapper[4919]: I0109 13:52:54.420423 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eeb36a2d-b775-453a-8dce-5b778571a0ce","Type":"ContainerStarted","Data":"2c0a6bcedbdbd63a311597f09f1db6d325e6a90024138014868ab8a225e30624"} Jan 09 13:52:54 crc kubenswrapper[4919]: I0109 13:52:54.423351 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7","Type":"ContainerStarted","Data":"e0a4058ea683aeb3fff05ac0dfa89f622bcfa5303afe49943922fa1bb967f91d"} Jan 09 13:52:54 crc kubenswrapper[4919]: I0109 13:52:54.423512 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 09 13:52:54 crc kubenswrapper[4919]: I0109 13:52:54.449380 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.792434941 podStartE2EDuration="5.449356114s" podCreationTimestamp="2026-01-09 13:52:49 +0000 UTC" firstStartedPulling="2026-01-09 13:52:50.321045197 +0000 UTC m=+1349.868884647" lastFinishedPulling="2026-01-09 13:52:53.97796637 +0000 UTC m=+1353.525805820" observedRunningTime="2026-01-09 13:52:54.444185205 +0000 UTC m=+1353.992024655" watchObservedRunningTime="2026-01-09 13:52:54.449356114 +0000 UTC m=+1353.997195564" Jan 09 13:52:54 crc kubenswrapper[4919]: I0109 13:52:54.764016 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d743b55e-cd0c-4fae-9252-0b7fdba935cb" path="/var/lib/kubelet/pods/d743b55e-cd0c-4fae-9252-0b7fdba935cb/volumes" Jan 09 
13:52:54 crc kubenswrapper[4919]: I0109 13:52:54.793603 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 09 13:52:54 crc kubenswrapper[4919]: I0109 13:52:54.793927 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 09 13:52:55 crc kubenswrapper[4919]: I0109 13:52:55.439758 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eeb36a2d-b775-453a-8dce-5b778571a0ce","Type":"ContainerStarted","Data":"e923e93141d517f776be2665386b59270b135d753040ffe2b04bf46f2b078f8b"} Jan 09 13:52:55 crc kubenswrapper[4919]: I0109 13:52:55.439813 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eeb36a2d-b775-453a-8dce-5b778571a0ce","Type":"ContainerStarted","Data":"821686b6fc4af8a94d1a889b401b5985f8165f7a3a4a1c132676f33163dd8a84"} Jan 09 13:52:55 crc kubenswrapper[4919]: I0109 13:52:55.466034 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.466013231 podStartE2EDuration="2.466013231s" podCreationTimestamp="2026-01-09 13:52:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:52:55.458731309 +0000 UTC m=+1355.006570759" watchObservedRunningTime="2026-01-09 13:52:55.466013231 +0000 UTC m=+1355.013852681" Jan 09 13:52:56 crc kubenswrapper[4919]: I0109 13:52:56.793114 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 09 13:52:59 crc kubenswrapper[4919]: I0109 13:52:59.793564 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 09 13:52:59 crc kubenswrapper[4919]: I0109 13:52:59.793637 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 09 13:53:00 crc kubenswrapper[4919]: I0109 13:53:00.805244 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 09 13:53:00 crc kubenswrapper[4919]: I0109 13:53:00.814479 4919 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="8b1b9107-6ac0-4e66-bbbd-11435fac4798" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 09 13:53:00 crc kubenswrapper[4919]: I0109 13:53:00.814715 4919 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="8b1b9107-6ac0-4e66-bbbd-11435fac4798" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 09 13:53:01 crc kubenswrapper[4919]: I0109 13:53:01.793418 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 09 13:53:01 crc kubenswrapper[4919]: I0109 13:53:01.837333 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 09 13:53:02 crc kubenswrapper[4919]: I0109 13:53:02.541147 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 09 13:53:03 crc kubenswrapper[4919]: I0109 13:53:03.816100 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-api-0" Jan 09 13:53:03 crc kubenswrapper[4919]: I0109 13:53:03.816191 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 09 13:53:04 crc kubenswrapper[4919]: I0109 13:53:04.898380 4919 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="eeb36a2d-b775-453a-8dce-5b778571a0ce" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.200:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 09 13:53:04 crc kubenswrapper[4919]: I0109 13:53:04.898380 4919 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="eeb36a2d-b775-453a-8dce-5b778571a0ce" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.200:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 09 13:53:09 crc kubenswrapper[4919]: I0109 13:53:09.801726 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 09 13:53:09 crc kubenswrapper[4919]: I0109 13:53:09.803363 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 09 13:53:09 crc kubenswrapper[4919]: I0109 13:53:09.810712 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 09 13:53:10 crc kubenswrapper[4919]: I0109 13:53:10.591695 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 09 13:53:12 crc kubenswrapper[4919]: I0109 13:53:12.309626 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 09 13:53:12 crc kubenswrapper[4919]: I0109 13:53:12.434181 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1594740a-2816-4a2d-81f0-d19d66a6a910-config-data\") pod \"1594740a-2816-4a2d-81f0-d19d66a6a910\" (UID: \"1594740a-2816-4a2d-81f0-d19d66a6a910\") " Jan 09 13:53:12 crc kubenswrapper[4919]: I0109 13:53:12.434741 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1594740a-2816-4a2d-81f0-d19d66a6a910-combined-ca-bundle\") pod \"1594740a-2816-4a2d-81f0-d19d66a6a910\" (UID: \"1594740a-2816-4a2d-81f0-d19d66a6a910\") " Jan 09 13:53:12 crc kubenswrapper[4919]: I0109 13:53:12.434990 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkxqh\" (UniqueName: \"kubernetes.io/projected/1594740a-2816-4a2d-81f0-d19d66a6a910-kube-api-access-xkxqh\") pod \"1594740a-2816-4a2d-81f0-d19d66a6a910\" (UID: \"1594740a-2816-4a2d-81f0-d19d66a6a910\") " Jan 09 13:53:12 crc kubenswrapper[4919]: I0109 13:53:12.460494 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1594740a-2816-4a2d-81f0-d19d66a6a910-kube-api-access-xkxqh" (OuterVolumeSpecName: "kube-api-access-xkxqh") pod "1594740a-2816-4a2d-81f0-d19d66a6a910" (UID: "1594740a-2816-4a2d-81f0-d19d66a6a910"). InnerVolumeSpecName "kube-api-access-xkxqh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:53:12 crc kubenswrapper[4919]: I0109 13:53:12.498381 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1594740a-2816-4a2d-81f0-d19d66a6a910-config-data" (OuterVolumeSpecName: "config-data") pod "1594740a-2816-4a2d-81f0-d19d66a6a910" (UID: "1594740a-2816-4a2d-81f0-d19d66a6a910"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:53:12 crc kubenswrapper[4919]: I0109 13:53:12.529381 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1594740a-2816-4a2d-81f0-d19d66a6a910-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1594740a-2816-4a2d-81f0-d19d66a6a910" (UID: "1594740a-2816-4a2d-81f0-d19d66a6a910"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:53:12 crc kubenswrapper[4919]: I0109 13:53:12.537846 4919 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1594740a-2816-4a2d-81f0-d19d66a6a910-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 13:53:12 crc kubenswrapper[4919]: I0109 13:53:12.537891 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkxqh\" (UniqueName: \"kubernetes.io/projected/1594740a-2816-4a2d-81f0-d19d66a6a910-kube-api-access-xkxqh\") on node \"crc\" DevicePath \"\"" Jan 09 13:53:12 crc kubenswrapper[4919]: I0109 13:53:12.537908 4919 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1594740a-2816-4a2d-81f0-d19d66a6a910-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 13:53:12 crc kubenswrapper[4919]: I0109 13:53:12.611530 4919 generic.go:334] "Generic (PLEG): container finished" podID="1594740a-2816-4a2d-81f0-d19d66a6a910" containerID="f87da19a09b370fb44e4e18e0d34cc3c46a1824fb799b14f70ecf4cf93d692bd" exitCode=137 Jan 09 13:53:12 crc kubenswrapper[4919]: I0109 13:53:12.612347 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 09 13:53:12 crc kubenswrapper[4919]: I0109 13:53:12.612443 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1594740a-2816-4a2d-81f0-d19d66a6a910","Type":"ContainerDied","Data":"f87da19a09b370fb44e4e18e0d34cc3c46a1824fb799b14f70ecf4cf93d692bd"} Jan 09 13:53:12 crc kubenswrapper[4919]: I0109 13:53:12.612542 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1594740a-2816-4a2d-81f0-d19d66a6a910","Type":"ContainerDied","Data":"b89c890956a46540d8f4d6429f5a4a70d41c022038101629f99f6588039127b4"} Jan 09 13:53:12 crc kubenswrapper[4919]: I0109 13:53:12.612577 4919 scope.go:117] "RemoveContainer" containerID="f87da19a09b370fb44e4e18e0d34cc3c46a1824fb799b14f70ecf4cf93d692bd" Jan 09 13:53:12 crc kubenswrapper[4919]: I0109 13:53:12.646181 4919 scope.go:117] "RemoveContainer" containerID="f87da19a09b370fb44e4e18e0d34cc3c46a1824fb799b14f70ecf4cf93d692bd" Jan 09 13:53:12 crc kubenswrapper[4919]: E0109 13:53:12.647115 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f87da19a09b370fb44e4e18e0d34cc3c46a1824fb799b14f70ecf4cf93d692bd\": container with ID starting with f87da19a09b370fb44e4e18e0d34cc3c46a1824fb799b14f70ecf4cf93d692bd not found: ID does not exist" containerID="f87da19a09b370fb44e4e18e0d34cc3c46a1824fb799b14f70ecf4cf93d692bd" Jan 09 13:53:12 crc kubenswrapper[4919]: I0109 13:53:12.647158 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f87da19a09b370fb44e4e18e0d34cc3c46a1824fb799b14f70ecf4cf93d692bd"} err="failed to get container status \"f87da19a09b370fb44e4e18e0d34cc3c46a1824fb799b14f70ecf4cf93d692bd\": rpc error: code = NotFound desc = could not find container \"f87da19a09b370fb44e4e18e0d34cc3c46a1824fb799b14f70ecf4cf93d692bd\": container with ID starting with f87da19a09b370fb44e4e18e0d34cc3c46a1824fb799b14f70ecf4cf93d692bd not found: ID does not exist" Jan 09 13:53:12 crc kubenswrapper[4919]: I0109 13:53:12.684633 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 09 13:53:12 crc kubenswrapper[4919]: I0109 13:53:12.706134 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 09 13:53:12 crc kubenswrapper[4919]: I0109 13:53:12.740517 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 09 13:53:12 crc kubenswrapper[4919]: E0109 13:53:12.741101 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1594740a-2816-4a2d-81f0-d19d66a6a910" containerName="nova-cell1-novncproxy-novncproxy" Jan 09 13:53:12 crc kubenswrapper[4919]: I0109 13:53:12.741124 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="1594740a-2816-4a2d-81f0-d19d66a6a910" containerName="nova-cell1-novncproxy-novncproxy" Jan 09 13:53:12 crc kubenswrapper[4919]: I0109 13:53:12.741409 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="1594740a-2816-4a2d-81f0-d19d66a6a910" containerName="nova-cell1-novncproxy-novncproxy" Jan 09 13:53:12 crc kubenswrapper[4919]: I0109 13:53:12.742267 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 09 13:53:12 crc kubenswrapper[4919]: I0109 13:53:12.744742 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 09 13:53:12 crc kubenswrapper[4919]: I0109 13:53:12.745002 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 09 13:53:12 crc kubenswrapper[4919]: I0109 13:53:12.745121 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 09 13:53:12 crc kubenswrapper[4919]: I0109 13:53:12.775594 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1594740a-2816-4a2d-81f0-d19d66a6a910" path="/var/lib/kubelet/pods/1594740a-2816-4a2d-81f0-d19d66a6a910/volumes" Jan 09 13:53:12 crc kubenswrapper[4919]: I0109 13:53:12.776329 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 09 13:53:12 crc kubenswrapper[4919]: I0109 13:53:12.844295 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/4402784e-5d9b-4d52-86a8-57dc43cc2917-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4402784e-5d9b-4d52-86a8-57dc43cc2917\") " pod="openstack/nova-cell1-novncproxy-0" Jan 09 13:53:12 crc kubenswrapper[4919]: I0109 13:53:12.844423 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/4402784e-5d9b-4d52-86a8-57dc43cc2917-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4402784e-5d9b-4d52-86a8-57dc43cc2917\") " pod="openstack/nova-cell1-novncproxy-0" Jan 09 13:53:12 crc kubenswrapper[4919]: I0109 13:53:12.844671 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4402784e-5d9b-4d52-86a8-57dc43cc2917-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4402784e-5d9b-4d52-86a8-57dc43cc2917\") " pod="openstack/nova-cell1-novncproxy-0" Jan 09 13:53:12 crc kubenswrapper[4919]: I0109 13:53:12.844726 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lwjt\" (UniqueName: \"kubernetes.io/projected/4402784e-5d9b-4d52-86a8-57dc43cc2917-kube-api-access-8lwjt\") pod \"nova-cell1-novncproxy-0\" (UID: \"4402784e-5d9b-4d52-86a8-57dc43cc2917\") " pod="openstack/nova-cell1-novncproxy-0" Jan 09 13:53:12 crc kubenswrapper[4919]: I0109 13:53:12.844746 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4402784e-5d9b-4d52-86a8-57dc43cc2917-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4402784e-5d9b-4d52-86a8-57dc43cc2917\") " pod="openstack/nova-cell1-novncproxy-0" Jan 09 13:53:12 crc kubenswrapper[4919]: I0109 13:53:12.946159 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4402784e-5d9b-4d52-86a8-57dc43cc2917-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4402784e-5d9b-4d52-86a8-57dc43cc2917\") " pod="openstack/nova-cell1-novncproxy-0" Jan 09 13:53:12 crc kubenswrapper[4919]: I0109 13:53:12.946208 4919 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-8lwjt\" (UniqueName: \"kubernetes.io/projected/4402784e-5d9b-4d52-86a8-57dc43cc2917-kube-api-access-8lwjt\") pod \"nova-cell1-novncproxy-0\" (UID: \"4402784e-5d9b-4d52-86a8-57dc43cc2917\") " pod="openstack/nova-cell1-novncproxy-0" Jan 09 13:53:12 crc kubenswrapper[4919]: I0109 13:53:12.946244 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4402784e-5d9b-4d52-86a8-57dc43cc2917-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4402784e-5d9b-4d52-86a8-57dc43cc2917\") " pod="openstack/nova-cell1-novncproxy-0" Jan 09 13:53:12 crc kubenswrapper[4919]: I0109 13:53:12.950026 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/4402784e-5d9b-4d52-86a8-57dc43cc2917-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4402784e-5d9b-4d52-86a8-57dc43cc2917\") " pod="openstack/nova-cell1-novncproxy-0" Jan 09 13:53:12 crc kubenswrapper[4919]: I0109 13:53:12.950150 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/4402784e-5d9b-4d52-86a8-57dc43cc2917-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4402784e-5d9b-4d52-86a8-57dc43cc2917\") " pod="openstack/nova-cell1-novncproxy-0" Jan 09 13:53:12 crc kubenswrapper[4919]: I0109 13:53:12.951387 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4402784e-5d9b-4d52-86a8-57dc43cc2917-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4402784e-5d9b-4d52-86a8-57dc43cc2917\") " pod="openstack/nova-cell1-novncproxy-0" Jan 09 13:53:12 crc kubenswrapper[4919]: I0109 13:53:12.954719 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/4402784e-5d9b-4d52-86a8-57dc43cc2917-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4402784e-5d9b-4d52-86a8-57dc43cc2917\") " pod="openstack/nova-cell1-novncproxy-0" Jan 09 13:53:12 crc kubenswrapper[4919]: I0109 13:53:12.954861 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4402784e-5d9b-4d52-86a8-57dc43cc2917-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4402784e-5d9b-4d52-86a8-57dc43cc2917\") " pod="openstack/nova-cell1-novncproxy-0" Jan 09 13:53:12 crc kubenswrapper[4919]: I0109 13:53:12.956088 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/4402784e-5d9b-4d52-86a8-57dc43cc2917-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4402784e-5d9b-4d52-86a8-57dc43cc2917\") " pod="openstack/nova-cell1-novncproxy-0" Jan 09 13:53:12 crc kubenswrapper[4919]: I0109 13:53:12.974457 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lwjt\" (UniqueName: \"kubernetes.io/projected/4402784e-5d9b-4d52-86a8-57dc43cc2917-kube-api-access-8lwjt\") pod \"nova-cell1-novncproxy-0\" (UID: \"4402784e-5d9b-4d52-86a8-57dc43cc2917\") " pod="openstack/nova-cell1-novncproxy-0" Jan 09 13:53:13 crc kubenswrapper[4919]: I0109 13:53:13.069613 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 09 13:53:13 crc kubenswrapper[4919]: I0109 13:53:13.524744 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 09 13:53:13 crc kubenswrapper[4919]: I0109 13:53:13.621620 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4402784e-5d9b-4d52-86a8-57dc43cc2917","Type":"ContainerStarted","Data":"a5f4a8103762fea75f1528276b23ef70bd0352235802b2745d8ac25644642dc7"} Jan 09 13:53:13 crc kubenswrapper[4919]: I0109 13:53:13.820339 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 09 13:53:13 crc kubenswrapper[4919]: I0109 13:53:13.821946 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 09 13:53:13 crc kubenswrapper[4919]: I0109 13:53:13.822162 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 09 13:53:13 crc kubenswrapper[4919]: I0109 13:53:13.827831 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 09 13:53:14 crc kubenswrapper[4919]: I0109 13:53:14.636398 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4402784e-5d9b-4d52-86a8-57dc43cc2917","Type":"ContainerStarted","Data":"6b60f89ee1fadaca45d45ceda3b229736cf084b8651bad3c79b141bf788b897f"} Jan 09 13:53:14 crc kubenswrapper[4919]: I0109 13:53:14.636728 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 09 13:53:14 crc kubenswrapper[4919]: I0109 13:53:14.640345 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 09 13:53:14 crc kubenswrapper[4919]: I0109 13:53:14.656861 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.65683702 podStartE2EDuration="2.65683702s" podCreationTimestamp="2026-01-09 13:53:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:53:14.649791005 +0000 UTC m=+1374.197630465" watchObservedRunningTime="2026-01-09 13:53:14.65683702 +0000 UTC m=+1374.204676480" Jan 09 13:53:14 crc kubenswrapper[4919]: I0109 13:53:14.805339 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-ksm4l"] Jan 09 13:53:14 crc kubenswrapper[4919]: I0109 13:53:14.808081 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fcd6f8f8f-ksm4l" Jan 09 13:53:14 crc kubenswrapper[4919]: I0109 13:53:14.836700 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-ksm4l"] Jan 09 13:53:14 crc kubenswrapper[4919]: I0109 13:53:14.891515 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e-ovsdbserver-nb\") pod \"dnsmasq-dns-fcd6f8f8f-ksm4l\" (UID: \"8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-ksm4l" Jan 09 13:53:14 crc kubenswrapper[4919]: I0109 13:53:14.891616 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e-config\") pod \"dnsmasq-dns-fcd6f8f8f-ksm4l\" (UID: \"8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-ksm4l" Jan 09 13:53:14 crc kubenswrapper[4919]: I0109 13:53:14.891771 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpdp8\" (UniqueName: \"kubernetes.io/projected/8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e-kube-api-access-bpdp8\") pod \"dnsmasq-dns-fcd6f8f8f-ksm4l\" (UID: \"8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-ksm4l" Jan 09 13:53:14 crc kubenswrapper[4919]: I0109 13:53:14.891821 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e-dns-swift-storage-0\") pod \"dnsmasq-dns-fcd6f8f8f-ksm4l\" (UID: \"8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-ksm4l" Jan 09 13:53:14 crc kubenswrapper[4919]: I0109 13:53:14.891869 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e-ovsdbserver-sb\") pod \"dnsmasq-dns-fcd6f8f8f-ksm4l\" (UID: \"8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-ksm4l" Jan 09 13:53:14 crc kubenswrapper[4919]: I0109 13:53:14.891909 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e-dns-svc\") pod \"dnsmasq-dns-fcd6f8f8f-ksm4l\" (UID: \"8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-ksm4l" Jan 09 13:53:14 crc kubenswrapper[4919]: I0109 13:53:14.993913 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpdp8\" (UniqueName: \"kubernetes.io/projected/8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e-kube-api-access-bpdp8\") pod \"dnsmasq-dns-fcd6f8f8f-ksm4l\" (UID: \"8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-ksm4l" Jan 09 13:53:14 crc kubenswrapper[4919]: I0109 13:53:14.993989 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e-dns-swift-storage-0\") pod \"dnsmasq-dns-fcd6f8f8f-ksm4l\" (UID: \"8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-ksm4l" Jan 09 13:53:14 crc kubenswrapper[4919]: I0109 13:53:14.994053 4919 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e-ovsdbserver-sb\") pod \"dnsmasq-dns-fcd6f8f8f-ksm4l\" (UID: \"8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-ksm4l" Jan 09 13:53:14 crc kubenswrapper[4919]: I0109 13:53:14.994087 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e-dns-svc\") pod \"dnsmasq-dns-fcd6f8f8f-ksm4l\" (UID: \"8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-ksm4l" Jan 09 13:53:14 crc kubenswrapper[4919]: I0109 13:53:14.995188 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e-ovsdbserver-sb\") pod \"dnsmasq-dns-fcd6f8f8f-ksm4l\" (UID: \"8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-ksm4l" Jan 09 13:53:14 crc kubenswrapper[4919]: I0109 13:53:14.995362 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e-dns-svc\") pod \"dnsmasq-dns-fcd6f8f8f-ksm4l\" (UID: \"8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-ksm4l" Jan 09 13:53:14 crc kubenswrapper[4919]: I0109 13:53:14.995451 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e-ovsdbserver-nb\") pod \"dnsmasq-dns-fcd6f8f8f-ksm4l\" (UID: \"8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-ksm4l" Jan 09 13:53:14 crc kubenswrapper[4919]: I0109 13:53:14.995507 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e-config\") pod \"dnsmasq-dns-fcd6f8f8f-ksm4l\" (UID: \"8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-ksm4l" Jan 09 13:53:14 crc kubenswrapper[4919]: I0109 13:53:14.996912 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e-dns-swift-storage-0\") pod \"dnsmasq-dns-fcd6f8f8f-ksm4l\" (UID: \"8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-ksm4l" Jan 09 13:53:15 crc kubenswrapper[4919]: I0109 13:53:15.000147 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e-ovsdbserver-nb\") pod \"dnsmasq-dns-fcd6f8f8f-ksm4l\" (UID: \"8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-ksm4l" Jan 09 13:53:15 crc kubenswrapper[4919]: I0109 13:53:15.000427 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e-config\") pod \"dnsmasq-dns-fcd6f8f8f-ksm4l\" (UID: \"8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-ksm4l" Jan 09 13:53:15 crc kubenswrapper[4919]: I0109 13:53:15.014329 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpdp8\" (UniqueName: 
\"kubernetes.io/projected/8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e-kube-api-access-bpdp8\") pod \"dnsmasq-dns-fcd6f8f8f-ksm4l\" (UID: \"8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-ksm4l" Jan 09 13:53:15 crc kubenswrapper[4919]: I0109 13:53:15.152195 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcd6f8f8f-ksm4l" Jan 09 13:53:15 crc kubenswrapper[4919]: I0109 13:53:15.660660 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-ksm4l"] Jan 09 13:53:16 crc kubenswrapper[4919]: I0109 13:53:16.654408 4919 generic.go:334] "Generic (PLEG): container finished" podID="8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e" containerID="551c38d2aa7cee3833dbbabf647ca4d9e93443818bc301168f4b5051affda995" exitCode=0 Jan 09 13:53:16 crc kubenswrapper[4919]: I0109 13:53:16.656120 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcd6f8f8f-ksm4l" event={"ID":"8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e","Type":"ContainerDied","Data":"551c38d2aa7cee3833dbbabf647ca4d9e93443818bc301168f4b5051affda995"} Jan 09 13:53:16 crc kubenswrapper[4919]: I0109 13:53:16.656163 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcd6f8f8f-ksm4l" event={"ID":"8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e","Type":"ContainerStarted","Data":"648d65549488cccab17af8ce162249db2a720ddf117a399846395e3e15274a81"} Jan 09 13:53:17 crc kubenswrapper[4919]: I0109 13:53:17.178094 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 09 13:53:17 crc kubenswrapper[4919]: I0109 13:53:17.180404 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7" containerName="ceilometer-central-agent" containerID="cri-o://5fa4632cf0ebf3408906c8dc56968ba0c3afd3b499c709effba7701cb53e61d3" gracePeriod=30 Jan 09 13:53:17 crc kubenswrapper[4919]: I0109 13:53:17.180776 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7" containerName="proxy-httpd" containerID="cri-o://e0a4058ea683aeb3fff05ac0dfa89f622bcfa5303afe49943922fa1bb967f91d" gracePeriod=30 Jan 09 13:53:17 crc kubenswrapper[4919]: I0109 13:53:17.180814 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7" containerName="sg-core" containerID="cri-o://eb3ee65ed65c478baad46326f8ec89e0b6396970ca691ed81b39bfab857f0687" gracePeriod=30 Jan 09 13:53:17 crc kubenswrapper[4919]: I0109 13:53:17.180824 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7" containerName="ceilometer-notification-agent" containerID="cri-o://93a45b44cea53beb5ebaa71c7608629e879e2649d7f41f240774f57b3f0f6d7a" gracePeriod=30 Jan 09 13:53:17 crc kubenswrapper[4919]: I0109 13:53:17.195530 4919 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.196:3000/\": read tcp 10.217.0.2:53230->10.217.0.196:3000: read: connection reset by peer" Jan 09 13:53:17 crc kubenswrapper[4919]: I0109 13:53:17.509809 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 09 13:53:17 crc kubenswrapper[4919]: I0109 
13:53:17.666443 4919 generic.go:334] "Generic (PLEG): container finished" podID="2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7" containerID="e0a4058ea683aeb3fff05ac0dfa89f622bcfa5303afe49943922fa1bb967f91d" exitCode=0 Jan 09 13:53:17 crc kubenswrapper[4919]: I0109 13:53:17.666479 4919 generic.go:334] "Generic (PLEG): container finished" podID="2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7" containerID="eb3ee65ed65c478baad46326f8ec89e0b6396970ca691ed81b39bfab857f0687" exitCode=2 Jan 09 13:53:17 crc kubenswrapper[4919]: I0109 13:53:17.666500 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7","Type":"ContainerDied","Data":"e0a4058ea683aeb3fff05ac0dfa89f622bcfa5303afe49943922fa1bb967f91d"} Jan 09 13:53:17 crc kubenswrapper[4919]: I0109 13:53:17.666552 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7","Type":"ContainerDied","Data":"eb3ee65ed65c478baad46326f8ec89e0b6396970ca691ed81b39bfab857f0687"} Jan 09 13:53:17 crc kubenswrapper[4919]: I0109 13:53:17.668726 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcd6f8f8f-ksm4l" event={"ID":"8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e","Type":"ContainerStarted","Data":"a4613bbaba2979cc772bb68e85a69febce72a66c55af1a66c8f291f65ae58243"} Jan 09 13:53:17 crc kubenswrapper[4919]: I0109 13:53:17.668746 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="eeb36a2d-b775-453a-8dce-5b778571a0ce" containerName="nova-api-log" containerID="cri-o://821686b6fc4af8a94d1a889b401b5985f8165f7a3a4a1c132676f33163dd8a84" gracePeriod=30 Jan 09 13:53:17 crc kubenswrapper[4919]: I0109 13:53:17.668967 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="eeb36a2d-b775-453a-8dce-5b778571a0ce" containerName="nova-api-api" containerID="cri-o://e923e93141d517f776be2665386b59270b135d753040ffe2b04bf46f2b078f8b" gracePeriod=30 Jan 09 13:53:17 crc kubenswrapper[4919]: I0109 13:53:17.669384 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-fcd6f8f8f-ksm4l" Jan 09 13:53:17 crc kubenswrapper[4919]: I0109 13:53:17.697030 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-fcd6f8f8f-ksm4l" podStartSLOduration=3.697009444 podStartE2EDuration="3.697009444s" podCreationTimestamp="2026-01-09 13:53:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:53:17.695478316 +0000 UTC m=+1377.243317766" watchObservedRunningTime="2026-01-09 13:53:17.697009444 +0000 UTC m=+1377.244848894" Jan 09 13:53:18 crc kubenswrapper[4919]: I0109 13:53:18.070687 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 09 13:53:18 crc kubenswrapper[4919]: I0109 13:53:18.683376 4919 generic.go:334] "Generic (PLEG): container finished" podID="eeb36a2d-b775-453a-8dce-5b778571a0ce" containerID="821686b6fc4af8a94d1a889b401b5985f8165f7a3a4a1c132676f33163dd8a84" exitCode=143 Jan 09 13:53:18 crc kubenswrapper[4919]: I0109 13:53:18.684051 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eeb36a2d-b775-453a-8dce-5b778571a0ce","Type":"ContainerDied","Data":"821686b6fc4af8a94d1a889b401b5985f8165f7a3a4a1c132676f33163dd8a84"} Jan 09 13:53:18 crc 
kubenswrapper[4919]: I0109 13:53:18.687127 4919 generic.go:334] "Generic (PLEG): container finished" podID="2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7" containerID="5fa4632cf0ebf3408906c8dc56968ba0c3afd3b499c709effba7701cb53e61d3" exitCode=0 Jan 09 13:53:18 crc kubenswrapper[4919]: I0109 13:53:18.688156 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7","Type":"ContainerDied","Data":"5fa4632cf0ebf3408906c8dc56968ba0c3afd3b499c709effba7701cb53e61d3"} Jan 09 13:53:19 crc kubenswrapper[4919]: I0109 13:53:19.698575 4919 generic.go:334] "Generic (PLEG): container finished" podID="2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7" containerID="93a45b44cea53beb5ebaa71c7608629e879e2649d7f41f240774f57b3f0f6d7a" exitCode=0 Jan 09 13:53:19 crc kubenswrapper[4919]: I0109 13:53:19.698648 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7","Type":"ContainerDied","Data":"93a45b44cea53beb5ebaa71c7608629e879e2649d7f41f240774f57b3f0f6d7a"} Jan 09 13:53:19 crc kubenswrapper[4919]: I0109 13:53:19.815799 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 09 13:53:19 crc kubenswrapper[4919]: I0109 13:53:19.903300 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7-config-data\") pod \"2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7\" (UID: \"2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7\") " Jan 09 13:53:19 crc kubenswrapper[4919]: I0109 13:53:19.903415 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7-run-httpd\") pod \"2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7\" (UID: \"2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7\") " Jan 09 13:53:19 crc kubenswrapper[4919]: I0109 13:53:19.903441 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7-sg-core-conf-yaml\") pod \"2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7\" (UID: \"2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7\") " Jan 09 13:53:19 crc kubenswrapper[4919]: I0109 13:53:19.903462 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78nnq\" (UniqueName: \"kubernetes.io/projected/2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7-kube-api-access-78nnq\") pod \"2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7\" (UID: \"2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7\") " Jan 09 13:53:19 crc kubenswrapper[4919]: I0109 13:53:19.903569 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7-log-httpd\") pod \"2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7\" (UID: \"2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7\") " Jan 09 13:53:19 crc kubenswrapper[4919]: I0109 13:53:19.903669 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7-combined-ca-bundle\") pod \"2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7\" (UID: \"2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7\") " Jan 09 13:53:19 crc kubenswrapper[4919]: I0109 13:53:19.903703 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7-scripts\") pod \"2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7\" (UID: \"2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7\") " Jan 09 13:53:19 crc kubenswrapper[4919]: I0109 13:53:19.903733 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7-ceilometer-tls-certs\") pod \"2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7\" (UID: \"2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7\") " Jan 09 13:53:19 crc kubenswrapper[4919]: I0109 13:53:19.904061 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7" (UID: "2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:53:19 crc kubenswrapper[4919]: I0109 13:53:19.904454 4919 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 09 13:53:19 crc kubenswrapper[4919]: I0109 13:53:19.904552 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7" (UID: "2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:53:19 crc kubenswrapper[4919]: I0109 13:53:19.918633 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7-kube-api-access-78nnq" (OuterVolumeSpecName: "kube-api-access-78nnq") pod "2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7" (UID: "2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7"). InnerVolumeSpecName "kube-api-access-78nnq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:53:19 crc kubenswrapper[4919]: I0109 13:53:19.924105 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7-scripts" (OuterVolumeSpecName: "scripts") pod "2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7" (UID: "2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:53:19 crc kubenswrapper[4919]: I0109 13:53:19.935718 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7" (UID: "2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:53:19 crc kubenswrapper[4919]: I0109 13:53:19.970018 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7" (UID: "2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:53:19 crc kubenswrapper[4919]: I0109 13:53:19.995142 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7" (UID: "2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:53:20 crc kubenswrapper[4919]: I0109 13:53:20.006606 4919 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 13:53:20 crc kubenswrapper[4919]: I0109 13:53:20.006796 4919 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 13:53:20 crc kubenswrapper[4919]: I0109 13:53:20.006808 4919 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 09 13:53:20 crc kubenswrapper[4919]: I0109 13:53:20.006818 4919 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 09 13:53:20 crc kubenswrapper[4919]: I0109 13:53:20.006826 4919 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 09 13:53:20 crc kubenswrapper[4919]: I0109 13:53:20.006835 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78nnq\" (UniqueName: \"kubernetes.io/projected/2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7-kube-api-access-78nnq\") on node \"crc\" DevicePath \"\"" Jan 09 13:53:20 crc kubenswrapper[4919]: I0109 13:53:20.029575 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7-config-data" (OuterVolumeSpecName: "config-data") pod "2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7" (UID: "2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:53:20 crc kubenswrapper[4919]: I0109 13:53:20.108970 4919 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 13:53:20 crc kubenswrapper[4919]: I0109 13:53:20.711025 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7","Type":"ContainerDied","Data":"81f630dc88a75cf2762b5e5f768ca97a304b8dd021b6e6d0751fd4db3581f82f"} Jan 09 13:53:20 crc kubenswrapper[4919]: I0109 13:53:20.711078 4919 scope.go:117] "RemoveContainer" containerID="e0a4058ea683aeb3fff05ac0dfa89f622bcfa5303afe49943922fa1bb967f91d" Jan 09 13:53:20 crc kubenswrapper[4919]: I0109 13:53:20.711280 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 09 13:53:20 crc kubenswrapper[4919]: I0109 13:53:20.747606 4919 scope.go:117] "RemoveContainer" containerID="eb3ee65ed65c478baad46326f8ec89e0b6396970ca691ed81b39bfab857f0687" Jan 09 13:53:20 crc kubenswrapper[4919]: I0109 13:53:20.749355 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 09 13:53:20 crc kubenswrapper[4919]: I0109 13:53:20.773667 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 09 13:53:20 crc kubenswrapper[4919]: I0109 13:53:20.782079 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 09 13:53:20 crc kubenswrapper[4919]: E0109 13:53:20.786632 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7" containerName="proxy-httpd" Jan 09 13:53:20 crc kubenswrapper[4919]: I0109 13:53:20.786674 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7" containerName="proxy-httpd" Jan 09 13:53:20 crc kubenswrapper[4919]: E0109 13:53:20.786702 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7" containerName="sg-core" Jan 09 13:53:20 crc kubenswrapper[4919]: I0109 13:53:20.786711 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7" containerName="sg-core" Jan 09 13:53:20 crc kubenswrapper[4919]: E0109 13:53:20.786740 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7" containerName="ceilometer-central-agent" Jan 09 13:53:20 crc kubenswrapper[4919]: I0109 13:53:20.786749 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7" containerName="ceilometer-central-agent" Jan 09 13:53:20 crc kubenswrapper[4919]: E0109 13:53:20.786763 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7" containerName="ceilometer-notification-agent" Jan 09 13:53:20 crc kubenswrapper[4919]: I0109 13:53:20.786773 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7" containerName="ceilometer-notification-agent" Jan 09 13:53:20 crc kubenswrapper[4919]: I0109 13:53:20.787017 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7" containerName="proxy-httpd" Jan 09 13:53:20 crc kubenswrapper[4919]: I0109 13:53:20.787035 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7" containerName="ceilometer-notification-agent" Jan 09 13:53:20 crc kubenswrapper[4919]: I0109 13:53:20.787048 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7" containerName="sg-core" Jan 09 13:53:20 crc kubenswrapper[4919]: I0109 13:53:20.787060 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7" containerName="ceilometer-central-agent" Jan 09 13:53:20 crc kubenswrapper[4919]: I0109 13:53:20.804441 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 09 13:53:20 crc kubenswrapper[4919]: I0109 13:53:20.812777 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 09 13:53:20 crc kubenswrapper[4919]: I0109 13:53:20.817167 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 09 13:53:20 crc kubenswrapper[4919]: I0109 13:53:20.817462 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 09 13:53:20 crc kubenswrapper[4919]: I0109 13:53:20.828114 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0-run-httpd\") pod \"ceilometer-0\" (UID: \"61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0\") " pod="openstack/ceilometer-0" Jan 09 13:53:20 crc kubenswrapper[4919]: I0109 13:53:20.830014 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0-scripts\") pod \"ceilometer-0\" (UID: \"61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0\") " pod="openstack/ceilometer-0" Jan 09 13:53:20 crc kubenswrapper[4919]: I0109 13:53:20.830320 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh6w9\" (UniqueName: \"kubernetes.io/projected/61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0-kube-api-access-dh6w9\") pod \"ceilometer-0\" (UID: \"61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0\") " pod="openstack/ceilometer-0" Jan 09 13:53:20 crc kubenswrapper[4919]: I0109 13:53:20.831978 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0-log-httpd\") pod \"ceilometer-0\" (UID: \"61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0\") " pod="openstack/ceilometer-0" Jan 09 13:53:20 crc kubenswrapper[4919]: I0109 13:53:20.832032 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0\") " pod="openstack/ceilometer-0" Jan 09 13:53:20 crc kubenswrapper[4919]: I0109 13:53:20.832101 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0-config-data\") pod \"ceilometer-0\" (UID: \"61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0\") " pod="openstack/ceilometer-0" Jan 09 13:53:20 crc kubenswrapper[4919]: I0109 13:53:20.832148 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0\") " pod="openstack/ceilometer-0" Jan 09 13:53:20 crc kubenswrapper[4919]: I0109 13:53:20.832198 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0\") " pod="openstack/ceilometer-0" Jan 
09 13:53:20 crc kubenswrapper[4919]: I0109 13:53:20.833733 4919 scope.go:117] "RemoveContainer" containerID="93a45b44cea53beb5ebaa71c7608629e879e2649d7f41f240774f57b3f0f6d7a" Jan 09 13:53:20 crc kubenswrapper[4919]: I0109 13:53:20.842611 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 09 13:53:20 crc kubenswrapper[4919]: I0109 13:53:20.866929 4919 scope.go:117] "RemoveContainer" containerID="5fa4632cf0ebf3408906c8dc56968ba0c3afd3b499c709effba7701cb53e61d3" Jan 09 13:53:20 crc kubenswrapper[4919]: I0109 13:53:20.935343 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0-run-httpd\") pod \"ceilometer-0\" (UID: \"61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0\") " pod="openstack/ceilometer-0" Jan 09 13:53:20 crc kubenswrapper[4919]: I0109 13:53:20.935418 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0-scripts\") pod \"ceilometer-0\" (UID: \"61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0\") " pod="openstack/ceilometer-0" Jan 09 13:53:20 crc kubenswrapper[4919]: I0109 13:53:20.935507 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dh6w9\" (UniqueName: \"kubernetes.io/projected/61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0-kube-api-access-dh6w9\") pod \"ceilometer-0\" (UID: \"61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0\") " pod="openstack/ceilometer-0" Jan 09 13:53:20 crc kubenswrapper[4919]: I0109 13:53:20.935588 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0-log-httpd\") pod \"ceilometer-0\" (UID: \"61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0\") " pod="openstack/ceilometer-0" Jan 09 13:53:20 crc kubenswrapper[4919]: I0109 13:53:20.935609 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0\") " pod="openstack/ceilometer-0" Jan 09 13:53:20 crc kubenswrapper[4919]: I0109 13:53:20.935638 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0-config-data\") pod \"ceilometer-0\" (UID: \"61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0\") " pod="openstack/ceilometer-0" Jan 09 13:53:20 crc kubenswrapper[4919]: I0109 13:53:20.935666 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0\") " pod="openstack/ceilometer-0" Jan 09 13:53:20 crc kubenswrapper[4919]: I0109 13:53:20.935700 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0\") " pod="openstack/ceilometer-0" Jan 09 13:53:20 crc kubenswrapper[4919]: I0109 13:53:20.937179 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0-log-httpd\") pod \"ceilometer-0\" (UID: \"61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0\") " pod="openstack/ceilometer-0" Jan 09 13:53:20 crc kubenswrapper[4919]: I0109 13:53:20.937582 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0-run-httpd\") pod \"ceilometer-0\" (UID: \"61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0\") " pod="openstack/ceilometer-0" Jan 09 13:53:20 crc kubenswrapper[4919]: I0109 13:53:20.943072 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0\") " pod="openstack/ceilometer-0" Jan 09 13:53:20 crc kubenswrapper[4919]: I0109 13:53:20.943853 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0\") " pod="openstack/ceilometer-0" Jan 09 13:53:20 crc kubenswrapper[4919]: I0109 13:53:20.945459 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0-scripts\") pod \"ceilometer-0\" (UID: \"61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0\") " pod="openstack/ceilometer-0" Jan 09 13:53:20 crc kubenswrapper[4919]: I0109 13:53:20.946727 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0\") " pod="openstack/ceilometer-0" Jan 09 13:53:20 crc kubenswrapper[4919]: I0109 13:53:20.947347 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0-config-data\") pod \"ceilometer-0\" (UID: \"61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0\") " pod="openstack/ceilometer-0" Jan 09 13:53:20 crc kubenswrapper[4919]: I0109 13:53:20.955924 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dh6w9\" (UniqueName: \"kubernetes.io/projected/61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0-kube-api-access-dh6w9\") pod \"ceilometer-0\" (UID: \"61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0\") " pod="openstack/ceilometer-0" Jan 09 13:53:21 crc kubenswrapper[4919]: I0109 13:53:21.118647 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 09 13:53:21 crc kubenswrapper[4919]: I0109 13:53:21.119906 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 09 13:53:21 crc kubenswrapper[4919]: I0109 13:53:21.247121 4919 patch_prober.go:28] interesting pod/machine-config-daemon-9m5lv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 13:53:21 crc kubenswrapper[4919]: I0109 13:53:21.247186 4919 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 13:53:21 crc kubenswrapper[4919]: I0109 13:53:21.333689 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 09 13:53:21 crc kubenswrapper[4919]: I0109 13:53:21.447347 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eeb36a2d-b775-453a-8dce-5b778571a0ce-logs\") pod \"eeb36a2d-b775-453a-8dce-5b778571a0ce\" (UID: \"eeb36a2d-b775-453a-8dce-5b778571a0ce\") " Jan 09 13:53:21 crc kubenswrapper[4919]: I0109 13:53:21.447414 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eeb36a2d-b775-453a-8dce-5b778571a0ce-config-data\") pod \"eeb36a2d-b775-453a-8dce-5b778571a0ce\" (UID: \"eeb36a2d-b775-453a-8dce-5b778571a0ce\") " Jan 09 13:53:21 crc kubenswrapper[4919]: I0109 13:53:21.447437 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeb36a2d-b775-453a-8dce-5b778571a0ce-combined-ca-bundle\") pod \"eeb36a2d-b775-453a-8dce-5b778571a0ce\" (UID: \"eeb36a2d-b775-453a-8dce-5b778571a0ce\") " Jan 09 13:53:21 crc kubenswrapper[4919]: I0109 13:53:21.447568 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59kwj\" (UniqueName: \"kubernetes.io/projected/eeb36a2d-b775-453a-8dce-5b778571a0ce-kube-api-access-59kwj\") pod \"eeb36a2d-b775-453a-8dce-5b778571a0ce\" (UID: \"eeb36a2d-b775-453a-8dce-5b778571a0ce\") " Jan 09 13:53:21 crc kubenswrapper[4919]: I0109 13:53:21.448059 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eeb36a2d-b775-453a-8dce-5b778571a0ce-logs" (OuterVolumeSpecName: "logs") pod "eeb36a2d-b775-453a-8dce-5b778571a0ce" (UID: "eeb36a2d-b775-453a-8dce-5b778571a0ce"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:53:21 crc kubenswrapper[4919]: I0109 13:53:21.453774 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eeb36a2d-b775-453a-8dce-5b778571a0ce-kube-api-access-59kwj" (OuterVolumeSpecName: "kube-api-access-59kwj") pod "eeb36a2d-b775-453a-8dce-5b778571a0ce" (UID: "eeb36a2d-b775-453a-8dce-5b778571a0ce"). InnerVolumeSpecName "kube-api-access-59kwj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:53:21 crc kubenswrapper[4919]: I0109 13:53:21.479259 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eeb36a2d-b775-453a-8dce-5b778571a0ce-config-data" (OuterVolumeSpecName: "config-data") pod "eeb36a2d-b775-453a-8dce-5b778571a0ce" (UID: "eeb36a2d-b775-453a-8dce-5b778571a0ce"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:53:21 crc kubenswrapper[4919]: I0109 13:53:21.482473 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eeb36a2d-b775-453a-8dce-5b778571a0ce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eeb36a2d-b775-453a-8dce-5b778571a0ce" (UID: "eeb36a2d-b775-453a-8dce-5b778571a0ce"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:53:21 crc kubenswrapper[4919]: I0109 13:53:21.549617 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59kwj\" (UniqueName: \"kubernetes.io/projected/eeb36a2d-b775-453a-8dce-5b778571a0ce-kube-api-access-59kwj\") on node \"crc\" DevicePath \"\"" Jan 09 13:53:21 crc kubenswrapper[4919]: I0109 13:53:21.549646 4919 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eeb36a2d-b775-453a-8dce-5b778571a0ce-logs\") on node \"crc\" DevicePath \"\"" Jan 09 13:53:21 crc kubenswrapper[4919]: I0109 13:53:21.549658 4919 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eeb36a2d-b775-453a-8dce-5b778571a0ce-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 13:53:21 crc kubenswrapper[4919]: I0109 13:53:21.549668 4919 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeb36a2d-b775-453a-8dce-5b778571a0ce-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 13:53:21 crc kubenswrapper[4919]: W0109 13:53:21.641390 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61cf4f92_cc7e_4bb2_a0a9_a0690775b3d0.slice/crio-039a53a16289f09eadcb8f188c652f8fd073c855b4cee8bc9ae427ce89bfe29b WatchSource:0}: Error finding container 039a53a16289f09eadcb8f188c652f8fd073c855b4cee8bc9ae427ce89bfe29b: Status 404 returned error can't find the container with id 039a53a16289f09eadcb8f188c652f8fd073c855b4cee8bc9ae427ce89bfe29b Jan 09 13:53:21 crc kubenswrapper[4919]: I0109 13:53:21.645898 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 09 13:53:21 crc kubenswrapper[4919]: I0109 13:53:21.723162 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0","Type":"ContainerStarted","Data":"039a53a16289f09eadcb8f188c652f8fd073c855b4cee8bc9ae427ce89bfe29b"} Jan 09 13:53:21 crc kubenswrapper[4919]: I0109 13:53:21.725272 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 09 13:53:21 crc kubenswrapper[4919]: I0109 13:53:21.725266 4919 generic.go:334] "Generic (PLEG): container finished" podID="eeb36a2d-b775-453a-8dce-5b778571a0ce" containerID="e923e93141d517f776be2665386b59270b135d753040ffe2b04bf46f2b078f8b" exitCode=0 Jan 09 13:53:21 crc kubenswrapper[4919]: I0109 13:53:21.725245 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eeb36a2d-b775-453a-8dce-5b778571a0ce","Type":"ContainerDied","Data":"e923e93141d517f776be2665386b59270b135d753040ffe2b04bf46f2b078f8b"} Jan 09 13:53:21 crc kubenswrapper[4919]: I0109 13:53:21.725777 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eeb36a2d-b775-453a-8dce-5b778571a0ce","Type":"ContainerDied","Data":"2c0a6bcedbdbd63a311597f09f1db6d325e6a90024138014868ab8a225e30624"} Jan 09 13:53:21 crc kubenswrapper[4919]: I0109 13:53:21.725823 4919 scope.go:117] "RemoveContainer" containerID="e923e93141d517f776be2665386b59270b135d753040ffe2b04bf46f2b078f8b" Jan 09 13:53:21 crc kubenswrapper[4919]: I0109 13:53:21.759445 4919 scope.go:117] "RemoveContainer" containerID="821686b6fc4af8a94d1a889b401b5985f8165f7a3a4a1c132676f33163dd8a84" Jan 09 13:53:21 crc kubenswrapper[4919]: I0109 13:53:21.775341 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 09 13:53:21 crc kubenswrapper[4919]: I0109 13:53:21.788007 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 09 13:53:21 crc kubenswrapper[4919]: I0109 13:53:21.803693 4919 scope.go:117] "RemoveContainer" containerID="e923e93141d517f776be2665386b59270b135d753040ffe2b04bf46f2b078f8b" Jan 09 13:53:21 crc kubenswrapper[4919]: E0109 13:53:21.815377 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e923e93141d517f776be2665386b59270b135d753040ffe2b04bf46f2b078f8b\": container with ID starting with e923e93141d517f776be2665386b59270b135d753040ffe2b04bf46f2b078f8b not found: ID does not exist" containerID="e923e93141d517f776be2665386b59270b135d753040ffe2b04bf46f2b078f8b" Jan 09 13:53:21 crc kubenswrapper[4919]: I0109 13:53:21.815432 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e923e93141d517f776be2665386b59270b135d753040ffe2b04bf46f2b078f8b"} err="failed to get container status \"e923e93141d517f776be2665386b59270b135d753040ffe2b04bf46f2b078f8b\": rpc error: code = NotFound desc = could not find container \"e923e93141d517f776be2665386b59270b135d753040ffe2b04bf46f2b078f8b\": container with ID starting with e923e93141d517f776be2665386b59270b135d753040ffe2b04bf46f2b078f8b not found: ID does not exist" Jan 09 13:53:21 crc kubenswrapper[4919]: I0109 13:53:21.815460 4919 scope.go:117] "RemoveContainer" containerID="821686b6fc4af8a94d1a889b401b5985f8165f7a3a4a1c132676f33163dd8a84" Jan 09 13:53:21 crc kubenswrapper[4919]: E0109 13:53:21.822642 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"821686b6fc4af8a94d1a889b401b5985f8165f7a3a4a1c132676f33163dd8a84\": container with ID starting with 821686b6fc4af8a94d1a889b401b5985f8165f7a3a4a1c132676f33163dd8a84 not found: ID does not exist" containerID="821686b6fc4af8a94d1a889b401b5985f8165f7a3a4a1c132676f33163dd8a84" Jan 09 13:53:21 crc kubenswrapper[4919]: I0109 13:53:21.822686 4919 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"821686b6fc4af8a94d1a889b401b5985f8165f7a3a4a1c132676f33163dd8a84"} err="failed to get container status \"821686b6fc4af8a94d1a889b401b5985f8165f7a3a4a1c132676f33163dd8a84\": rpc error: code = NotFound desc = could not find container \"821686b6fc4af8a94d1a889b401b5985f8165f7a3a4a1c132676f33163dd8a84\": container with ID starting with 821686b6fc4af8a94d1a889b401b5985f8165f7a3a4a1c132676f33163dd8a84 not found: ID does not exist" Jan 09 13:53:21 crc kubenswrapper[4919]: I0109 13:53:21.838293 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 09 13:53:21 crc kubenswrapper[4919]: E0109 13:53:21.838849 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eeb36a2d-b775-453a-8dce-5b778571a0ce" containerName="nova-api-log" Jan 09 13:53:21 crc kubenswrapper[4919]: I0109 13:53:21.838871 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeb36a2d-b775-453a-8dce-5b778571a0ce" containerName="nova-api-log" Jan 09 13:53:21 crc kubenswrapper[4919]: E0109 13:53:21.838908 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eeb36a2d-b775-453a-8dce-5b778571a0ce" containerName="nova-api-api" Jan 09 13:53:21 crc kubenswrapper[4919]: I0109 13:53:21.838914 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeb36a2d-b775-453a-8dce-5b778571a0ce" containerName="nova-api-api" Jan 09 13:53:21 crc kubenswrapper[4919]: I0109 13:53:21.839136 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="eeb36a2d-b775-453a-8dce-5b778571a0ce" containerName="nova-api-api" Jan 09 13:53:21 crc kubenswrapper[4919]: I0109 13:53:21.839171 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="eeb36a2d-b775-453a-8dce-5b778571a0ce" containerName="nova-api-log" Jan 09 13:53:21 crc kubenswrapper[4919]: I0109 13:53:21.840317 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 09 13:53:21 crc kubenswrapper[4919]: I0109 13:53:21.848530 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 09 13:53:21 crc kubenswrapper[4919]: I0109 13:53:21.848755 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 09 13:53:21 crc kubenswrapper[4919]: I0109 13:53:21.848873 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 09 13:53:21 crc kubenswrapper[4919]: I0109 13:53:21.872495 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 09 13:53:21 crc kubenswrapper[4919]: I0109 13:53:21.962965 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1002a2ff-2366-4c32-b1cd-ad66959e6c39-config-data\") pod \"nova-api-0\" (UID: \"1002a2ff-2366-4c32-b1cd-ad66959e6c39\") " pod="openstack/nova-api-0" Jan 09 13:53:21 crc kubenswrapper[4919]: I0109 13:53:21.963502 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1002a2ff-2366-4c32-b1cd-ad66959e6c39-public-tls-certs\") pod \"nova-api-0\" (UID: \"1002a2ff-2366-4c32-b1cd-ad66959e6c39\") " pod="openstack/nova-api-0" Jan 09 13:53:21 crc kubenswrapper[4919]: I0109 13:53:21.963623 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1002a2ff-2366-4c32-b1cd-ad66959e6c39-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1002a2ff-2366-4c32-b1cd-ad66959e6c39\") " pod="openstack/nova-api-0" Jan 09 13:53:21 crc kubenswrapper[4919]: I0109 13:53:21.963784 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbdkr\" (UniqueName: \"kubernetes.io/projected/1002a2ff-2366-4c32-b1cd-ad66959e6c39-kube-api-access-rbdkr\") pod \"nova-api-0\" (UID: \"1002a2ff-2366-4c32-b1cd-ad66959e6c39\") " pod="openstack/nova-api-0" Jan 09 13:53:21 crc kubenswrapper[4919]: I0109 13:53:21.963987 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1002a2ff-2366-4c32-b1cd-ad66959e6c39-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1002a2ff-2366-4c32-b1cd-ad66959e6c39\") " pod="openstack/nova-api-0" Jan 09 13:53:21 crc kubenswrapper[4919]: I0109 13:53:21.964140 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1002a2ff-2366-4c32-b1cd-ad66959e6c39-logs\") pod \"nova-api-0\" (UID: \"1002a2ff-2366-4c32-b1cd-ad66959e6c39\") " pod="openstack/nova-api-0" Jan 09 13:53:22 crc kubenswrapper[4919]: I0109 13:53:22.065726 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1002a2ff-2366-4c32-b1cd-ad66959e6c39-logs\") pod \"nova-api-0\" (UID: \"1002a2ff-2366-4c32-b1cd-ad66959e6c39\") " pod="openstack/nova-api-0" Jan 09 13:53:22 crc kubenswrapper[4919]: I0109 13:53:22.065798 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1002a2ff-2366-4c32-b1cd-ad66959e6c39-config-data\") pod \"nova-api-0\" (UID: 
\"1002a2ff-2366-4c32-b1cd-ad66959e6c39\") " pod="openstack/nova-api-0" Jan 09 13:53:22 crc kubenswrapper[4919]: I0109 13:53:22.065889 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1002a2ff-2366-4c32-b1cd-ad66959e6c39-public-tls-certs\") pod \"nova-api-0\" (UID: \"1002a2ff-2366-4c32-b1cd-ad66959e6c39\") " pod="openstack/nova-api-0" Jan 09 13:53:22 crc kubenswrapper[4919]: I0109 13:53:22.065915 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1002a2ff-2366-4c32-b1cd-ad66959e6c39-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1002a2ff-2366-4c32-b1cd-ad66959e6c39\") " pod="openstack/nova-api-0" Jan 09 13:53:22 crc kubenswrapper[4919]: I0109 13:53:22.065962 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbdkr\" (UniqueName: \"kubernetes.io/projected/1002a2ff-2366-4c32-b1cd-ad66959e6c39-kube-api-access-rbdkr\") pod \"nova-api-0\" (UID: \"1002a2ff-2366-4c32-b1cd-ad66959e6c39\") " pod="openstack/nova-api-0" Jan 09 13:53:22 crc kubenswrapper[4919]: I0109 13:53:22.066022 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1002a2ff-2366-4c32-b1cd-ad66959e6c39-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1002a2ff-2366-4c32-b1cd-ad66959e6c39\") " pod="openstack/nova-api-0" Jan 09 13:53:22 crc kubenswrapper[4919]: I0109 13:53:22.067078 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1002a2ff-2366-4c32-b1cd-ad66959e6c39-logs\") pod \"nova-api-0\" (UID: \"1002a2ff-2366-4c32-b1cd-ad66959e6c39\") " pod="openstack/nova-api-0" Jan 09 13:53:22 crc kubenswrapper[4919]: I0109 13:53:22.071895 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1002a2ff-2366-4c32-b1cd-ad66959e6c39-public-tls-certs\") pod \"nova-api-0\" (UID: \"1002a2ff-2366-4c32-b1cd-ad66959e6c39\") " pod="openstack/nova-api-0" Jan 09 13:53:22 crc kubenswrapper[4919]: I0109 13:53:22.071969 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1002a2ff-2366-4c32-b1cd-ad66959e6c39-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1002a2ff-2366-4c32-b1cd-ad66959e6c39\") " pod="openstack/nova-api-0" Jan 09 13:53:22 crc kubenswrapper[4919]: I0109 13:53:22.074135 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1002a2ff-2366-4c32-b1cd-ad66959e6c39-config-data\") pod \"nova-api-0\" (UID: \"1002a2ff-2366-4c32-b1cd-ad66959e6c39\") " pod="openstack/nova-api-0" Jan 09 13:53:22 crc kubenswrapper[4919]: I0109 13:53:22.088184 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbdkr\" (UniqueName: \"kubernetes.io/projected/1002a2ff-2366-4c32-b1cd-ad66959e6c39-kube-api-access-rbdkr\") pod \"nova-api-0\" (UID: \"1002a2ff-2366-4c32-b1cd-ad66959e6c39\") " pod="openstack/nova-api-0" Jan 09 13:53:22 crc kubenswrapper[4919]: I0109 13:53:22.091027 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1002a2ff-2366-4c32-b1cd-ad66959e6c39-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1002a2ff-2366-4c32-b1cd-ad66959e6c39\") " 
pod="openstack/nova-api-0" Jan 09 13:53:22 crc kubenswrapper[4919]: I0109 13:53:22.187283 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 09 13:53:22 crc kubenswrapper[4919]: I0109 13:53:22.675716 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 09 13:53:22 crc kubenswrapper[4919]: W0109 13:53:22.681020 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1002a2ff_2366_4c32_b1cd_ad66959e6c39.slice/crio-3f6594ee1e167da4f21acf3fb5e2de15f6218eb77ac994a6cadf1769e5f29cf8 WatchSource:0}: Error finding container 3f6594ee1e167da4f21acf3fb5e2de15f6218eb77ac994a6cadf1769e5f29cf8: Status 404 returned error can't find the container with id 3f6594ee1e167da4f21acf3fb5e2de15f6218eb77ac994a6cadf1769e5f29cf8 Jan 09 13:53:22 crc kubenswrapper[4919]: I0109 13:53:22.735974 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1002a2ff-2366-4c32-b1cd-ad66959e6c39","Type":"ContainerStarted","Data":"3f6594ee1e167da4f21acf3fb5e2de15f6218eb77ac994a6cadf1769e5f29cf8"} Jan 09 13:53:22 crc kubenswrapper[4919]: I0109 13:53:22.739084 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0","Type":"ContainerStarted","Data":"e80d65fc9725cce982a7a94493672d519df1a6ffced883e9115035b5b7b58c75"} Jan 09 13:53:22 crc kubenswrapper[4919]: I0109 13:53:22.763887 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7" path="/var/lib/kubelet/pods/2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7/volumes" Jan 09 13:53:22 crc kubenswrapper[4919]: I0109 13:53:22.766071 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eeb36a2d-b775-453a-8dce-5b778571a0ce" path="/var/lib/kubelet/pods/eeb36a2d-b775-453a-8dce-5b778571a0ce/volumes" Jan 09 13:53:23 crc kubenswrapper[4919]: I0109 13:53:23.070488 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 09 13:53:23 crc kubenswrapper[4919]: I0109 13:53:23.090472 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 09 13:53:23 crc kubenswrapper[4919]: I0109 13:53:23.752453 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1002a2ff-2366-4c32-b1cd-ad66959e6c39","Type":"ContainerStarted","Data":"5f2249369d91c8338266346fccb2f0ee17a70e741ee40268bc55773e039141f2"} Jan 09 13:53:23 crc kubenswrapper[4919]: I0109 13:53:23.752716 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1002a2ff-2366-4c32-b1cd-ad66959e6c39","Type":"ContainerStarted","Data":"914dca63744881bfc9656174c423f5c91582a1abb34fb82ea4cb59d50c8b1e6d"} Jan 09 13:53:23 crc kubenswrapper[4919]: I0109 13:53:23.759083 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0","Type":"ContainerStarted","Data":"a3175274fbe2db5bd3a270ae471dfca2edce0cdc8f3bf4aa5975e424e656f678"} Jan 09 13:53:23 crc kubenswrapper[4919]: I0109 13:53:23.783610 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.783589654 podStartE2EDuration="2.783589654s" podCreationTimestamp="2026-01-09 13:53:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:53:23.775609396 +0000 UTC m=+1383.323448846" watchObservedRunningTime="2026-01-09 13:53:23.783589654 +0000 UTC m=+1383.331429094" Jan 09 13:53:23 crc kubenswrapper[4919]: I0109 13:53:23.789611 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 09 13:53:23 crc kubenswrapper[4919]: I0109 13:53:23.961285 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-svt5n"] Jan 09 13:53:23 crc kubenswrapper[4919]: I0109 13:53:23.962928 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-svt5n" Jan 09 13:53:23 crc kubenswrapper[4919]: I0109 13:53:23.965902 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 09 13:53:23 crc kubenswrapper[4919]: I0109 13:53:23.966091 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 09 13:53:23 crc kubenswrapper[4919]: I0109 13:53:23.968723 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-svt5n"] Jan 09 13:53:24 crc kubenswrapper[4919]: I0109 13:53:24.108010 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h8ns\" (UniqueName: \"kubernetes.io/projected/c794cf2c-22d5-44dc-8bff-4bbdaca37867-kube-api-access-5h8ns\") pod \"nova-cell1-cell-mapping-svt5n\" (UID: \"c794cf2c-22d5-44dc-8bff-4bbdaca37867\") " pod="openstack/nova-cell1-cell-mapping-svt5n" Jan 09 13:53:24 crc kubenswrapper[4919]: I0109 13:53:24.108297 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c794cf2c-22d5-44dc-8bff-4bbdaca37867-scripts\") pod \"nova-cell1-cell-mapping-svt5n\" (UID: \"c794cf2c-22d5-44dc-8bff-4bbdaca37867\") " pod="openstack/nova-cell1-cell-mapping-svt5n" Jan 09 13:53:24 crc kubenswrapper[4919]: I0109 13:53:24.108662 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c794cf2c-22d5-44dc-8bff-4bbdaca37867-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-svt5n\" (UID: \"c794cf2c-22d5-44dc-8bff-4bbdaca37867\") " pod="openstack/nova-cell1-cell-mapping-svt5n" Jan 09 13:53:24 crc kubenswrapper[4919]: I0109 13:53:24.108696 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c794cf2c-22d5-44dc-8bff-4bbdaca37867-config-data\") pod \"nova-cell1-cell-mapping-svt5n\" (UID: \"c794cf2c-22d5-44dc-8bff-4bbdaca37867\") " pod="openstack/nova-cell1-cell-mapping-svt5n" Jan 09 13:53:24 crc kubenswrapper[4919]: I0109 13:53:24.210419 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c794cf2c-22d5-44dc-8bff-4bbdaca37867-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-svt5n\" (UID: \"c794cf2c-22d5-44dc-8bff-4bbdaca37867\") " pod="openstack/nova-cell1-cell-mapping-svt5n" Jan 09 13:53:24 crc kubenswrapper[4919]: I0109 13:53:24.210480 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c794cf2c-22d5-44dc-8bff-4bbdaca37867-config-data\") pod 
\"nova-cell1-cell-mapping-svt5n\" (UID: \"c794cf2c-22d5-44dc-8bff-4bbdaca37867\") " pod="openstack/nova-cell1-cell-mapping-svt5n" Jan 09 13:53:24 crc kubenswrapper[4919]: I0109 13:53:24.210521 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5h8ns\" (UniqueName: \"kubernetes.io/projected/c794cf2c-22d5-44dc-8bff-4bbdaca37867-kube-api-access-5h8ns\") pod \"nova-cell1-cell-mapping-svt5n\" (UID: \"c794cf2c-22d5-44dc-8bff-4bbdaca37867\") " pod="openstack/nova-cell1-cell-mapping-svt5n" Jan 09 13:53:24 crc kubenswrapper[4919]: I0109 13:53:24.210607 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c794cf2c-22d5-44dc-8bff-4bbdaca37867-scripts\") pod \"nova-cell1-cell-mapping-svt5n\" (UID: \"c794cf2c-22d5-44dc-8bff-4bbdaca37867\") " pod="openstack/nova-cell1-cell-mapping-svt5n" Jan 09 13:53:24 crc kubenswrapper[4919]: I0109 13:53:24.215340 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c794cf2c-22d5-44dc-8bff-4bbdaca37867-config-data\") pod \"nova-cell1-cell-mapping-svt5n\" (UID: \"c794cf2c-22d5-44dc-8bff-4bbdaca37867\") " pod="openstack/nova-cell1-cell-mapping-svt5n" Jan 09 13:53:24 crc kubenswrapper[4919]: I0109 13:53:24.215438 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c794cf2c-22d5-44dc-8bff-4bbdaca37867-scripts\") pod \"nova-cell1-cell-mapping-svt5n\" (UID: \"c794cf2c-22d5-44dc-8bff-4bbdaca37867\") " pod="openstack/nova-cell1-cell-mapping-svt5n" Jan 09 13:53:24 crc kubenswrapper[4919]: I0109 13:53:24.224013 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c794cf2c-22d5-44dc-8bff-4bbdaca37867-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-svt5n\" (UID: \"c794cf2c-22d5-44dc-8bff-4bbdaca37867\") " pod="openstack/nova-cell1-cell-mapping-svt5n" Jan 09 13:53:24 crc kubenswrapper[4919]: I0109 13:53:24.232871 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h8ns\" (UniqueName: \"kubernetes.io/projected/c794cf2c-22d5-44dc-8bff-4bbdaca37867-kube-api-access-5h8ns\") pod \"nova-cell1-cell-mapping-svt5n\" (UID: \"c794cf2c-22d5-44dc-8bff-4bbdaca37867\") " pod="openstack/nova-cell1-cell-mapping-svt5n" Jan 09 13:53:24 crc kubenswrapper[4919]: I0109 13:53:24.352929 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-svt5n" Jan 09 13:53:24 crc kubenswrapper[4919]: I0109 13:53:24.771568 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0","Type":"ContainerStarted","Data":"6c22e10f7012b4049ecb61c8d2ef404ef48eedc4d44b6e4371f43ed207b04390"} Jan 09 13:53:24 crc kubenswrapper[4919]: I0109 13:53:24.895113 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-svt5n"] Jan 09 13:53:24 crc kubenswrapper[4919]: W0109 13:53:24.897686 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc794cf2c_22d5_44dc_8bff_4bbdaca37867.slice/crio-24dcc579929406fb92b8b5558a4233b25dfe0c35e3f3d82a6268225b9a3b1921 WatchSource:0}: Error finding container 24dcc579929406fb92b8b5558a4233b25dfe0c35e3f3d82a6268225b9a3b1921: Status 404 returned error can't find the container with id 24dcc579929406fb92b8b5558a4233b25dfe0c35e3f3d82a6268225b9a3b1921 Jan 09 13:53:25 crc kubenswrapper[4919]: I0109 13:53:25.154378 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-fcd6f8f8f-ksm4l" Jan 09 13:53:25 crc kubenswrapper[4919]: I0109 13:53:25.226234 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-8qq6l"] Jan 09 13:53:25 crc kubenswrapper[4919]: I0109 13:53:25.227744 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-647df7b8c5-8qq6l" podUID="966e8c47-7429-4435-87ce-44cc8af93cea" containerName="dnsmasq-dns" containerID="cri-o://b6cb1d44367919425eea2102b3654b31d3c246b70a01a2fee47786cb03607d8c" gracePeriod=10 Jan 09 13:53:25 crc kubenswrapper[4919]: I0109 13:53:25.785702 4919 generic.go:334] "Generic (PLEG): container finished" podID="966e8c47-7429-4435-87ce-44cc8af93cea" containerID="b6cb1d44367919425eea2102b3654b31d3c246b70a01a2fee47786cb03607d8c" exitCode=0 Jan 09 13:53:25 crc kubenswrapper[4919]: I0109 13:53:25.785771 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647df7b8c5-8qq6l" event={"ID":"966e8c47-7429-4435-87ce-44cc8af93cea","Type":"ContainerDied","Data":"b6cb1d44367919425eea2102b3654b31d3c246b70a01a2fee47786cb03607d8c"} Jan 09 13:53:25 crc kubenswrapper[4919]: I0109 13:53:25.788726 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-svt5n" event={"ID":"c794cf2c-22d5-44dc-8bff-4bbdaca37867","Type":"ContainerStarted","Data":"680956d31dffa408279009281653daf482a7f7880c7fd4f97bdb24069dcfbc95"} Jan 09 13:53:25 crc kubenswrapper[4919]: I0109 13:53:25.788764 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-svt5n" event={"ID":"c794cf2c-22d5-44dc-8bff-4bbdaca37867","Type":"ContainerStarted","Data":"24dcc579929406fb92b8b5558a4233b25dfe0c35e3f3d82a6268225b9a3b1921"} Jan 09 13:53:25 crc kubenswrapper[4919]: I0109 13:53:25.807710 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-svt5n" podStartSLOduration=2.807688594 podStartE2EDuration="2.807688594s" podCreationTimestamp="2026-01-09 13:53:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:53:25.804364712 +0000 UTC m=+1385.352204172" watchObservedRunningTime="2026-01-09 13:53:25.807688594 +0000 UTC m=+1385.355528044" Jan 
09 13:53:25 crc kubenswrapper[4919]: I0109 13:53:25.833957 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-647df7b8c5-8qq6l" Jan 09 13:53:25 crc kubenswrapper[4919]: I0109 13:53:25.972175 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/966e8c47-7429-4435-87ce-44cc8af93cea-dns-svc\") pod \"966e8c47-7429-4435-87ce-44cc8af93cea\" (UID: \"966e8c47-7429-4435-87ce-44cc8af93cea\") " Jan 09 13:53:25 crc kubenswrapper[4919]: I0109 13:53:25.972795 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/966e8c47-7429-4435-87ce-44cc8af93cea-config\") pod \"966e8c47-7429-4435-87ce-44cc8af93cea\" (UID: \"966e8c47-7429-4435-87ce-44cc8af93cea\") " Jan 09 13:53:25 crc kubenswrapper[4919]: I0109 13:53:25.973039 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2tbb\" (UniqueName: \"kubernetes.io/projected/966e8c47-7429-4435-87ce-44cc8af93cea-kube-api-access-r2tbb\") pod \"966e8c47-7429-4435-87ce-44cc8af93cea\" (UID: \"966e8c47-7429-4435-87ce-44cc8af93cea\") " Jan 09 13:53:25 crc kubenswrapper[4919]: I0109 13:53:25.973289 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/966e8c47-7429-4435-87ce-44cc8af93cea-dns-swift-storage-0\") pod \"966e8c47-7429-4435-87ce-44cc8af93cea\" (UID: \"966e8c47-7429-4435-87ce-44cc8af93cea\") " Jan 09 13:53:25 crc kubenswrapper[4919]: I0109 13:53:25.973377 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/966e8c47-7429-4435-87ce-44cc8af93cea-ovsdbserver-nb\") pod \"966e8c47-7429-4435-87ce-44cc8af93cea\" (UID: \"966e8c47-7429-4435-87ce-44cc8af93cea\") " Jan 09 13:53:25 crc kubenswrapper[4919]: I0109 13:53:25.973481 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/966e8c47-7429-4435-87ce-44cc8af93cea-ovsdbserver-sb\") pod \"966e8c47-7429-4435-87ce-44cc8af93cea\" (UID: \"966e8c47-7429-4435-87ce-44cc8af93cea\") " Jan 09 13:53:25 crc kubenswrapper[4919]: I0109 13:53:25.994456 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/966e8c47-7429-4435-87ce-44cc8af93cea-kube-api-access-r2tbb" (OuterVolumeSpecName: "kube-api-access-r2tbb") pod "966e8c47-7429-4435-87ce-44cc8af93cea" (UID: "966e8c47-7429-4435-87ce-44cc8af93cea"). InnerVolumeSpecName "kube-api-access-r2tbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:53:26 crc kubenswrapper[4919]: I0109 13:53:26.056162 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/966e8c47-7429-4435-87ce-44cc8af93cea-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "966e8c47-7429-4435-87ce-44cc8af93cea" (UID: "966e8c47-7429-4435-87ce-44cc8af93cea"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:53:26 crc kubenswrapper[4919]: I0109 13:53:26.056177 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/966e8c47-7429-4435-87ce-44cc8af93cea-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "966e8c47-7429-4435-87ce-44cc8af93cea" (UID: "966e8c47-7429-4435-87ce-44cc8af93cea"). 
InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:53:26 crc kubenswrapper[4919]: I0109 13:53:26.057355 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/966e8c47-7429-4435-87ce-44cc8af93cea-config" (OuterVolumeSpecName: "config") pod "966e8c47-7429-4435-87ce-44cc8af93cea" (UID: "966e8c47-7429-4435-87ce-44cc8af93cea"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:53:26 crc kubenswrapper[4919]: I0109 13:53:26.072379 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/966e8c47-7429-4435-87ce-44cc8af93cea-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "966e8c47-7429-4435-87ce-44cc8af93cea" (UID: "966e8c47-7429-4435-87ce-44cc8af93cea"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:53:26 crc kubenswrapper[4919]: I0109 13:53:26.077445 4919 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/966e8c47-7429-4435-87ce-44cc8af93cea-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 09 13:53:26 crc kubenswrapper[4919]: I0109 13:53:26.077483 4919 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/966e8c47-7429-4435-87ce-44cc8af93cea-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 09 13:53:26 crc kubenswrapper[4919]: I0109 13:53:26.077494 4919 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/966e8c47-7429-4435-87ce-44cc8af93cea-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 09 13:53:26 crc kubenswrapper[4919]: I0109 13:53:26.077506 4919 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/966e8c47-7429-4435-87ce-44cc8af93cea-config\") on node \"crc\" DevicePath \"\"" Jan 09 13:53:26 crc kubenswrapper[4919]: I0109 13:53:26.077515 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2tbb\" (UniqueName: \"kubernetes.io/projected/966e8c47-7429-4435-87ce-44cc8af93cea-kube-api-access-r2tbb\") on node \"crc\" DevicePath \"\"" Jan 09 13:53:26 crc kubenswrapper[4919]: I0109 13:53:26.086710 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/966e8c47-7429-4435-87ce-44cc8af93cea-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "966e8c47-7429-4435-87ce-44cc8af93cea" (UID: "966e8c47-7429-4435-87ce-44cc8af93cea"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:53:26 crc kubenswrapper[4919]: I0109 13:53:26.179765 4919 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/966e8c47-7429-4435-87ce-44cc8af93cea-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 09 13:53:26 crc kubenswrapper[4919]: I0109 13:53:26.820566 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647df7b8c5-8qq6l" event={"ID":"966e8c47-7429-4435-87ce-44cc8af93cea","Type":"ContainerDied","Data":"c25897274bcba55679fe1d8bb28cf4b848606ec47fd640ccbeb8ed7eefb48239"} Jan 09 13:53:26 crc kubenswrapper[4919]: I0109 13:53:26.820620 4919 scope.go:117] "RemoveContainer" containerID="b6cb1d44367919425eea2102b3654b31d3c246b70a01a2fee47786cb03607d8c" Jan 09 13:53:26 crc kubenswrapper[4919]: I0109 13:53:26.820750 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-647df7b8c5-8qq6l" Jan 09 13:53:26 crc kubenswrapper[4919]: I0109 13:53:26.845403 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0","Type":"ContainerStarted","Data":"e15c81229587d2c4bdb97213d411afaa951c8117e0ca0bb082dcd1ab3c0bafd8"} Jan 09 13:53:26 crc kubenswrapper[4919]: I0109 13:53:26.845572 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0" containerName="ceilometer-central-agent" containerID="cri-o://e80d65fc9725cce982a7a94493672d519df1a6ffced883e9115035b5b7b58c75" gracePeriod=30 Jan 09 13:53:26 crc kubenswrapper[4919]: I0109 13:53:26.845704 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0" containerName="proxy-httpd" containerID="cri-o://e15c81229587d2c4bdb97213d411afaa951c8117e0ca0bb082dcd1ab3c0bafd8" gracePeriod=30 Jan 09 13:53:26 crc kubenswrapper[4919]: I0109 13:53:26.845742 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0" containerName="sg-core" containerID="cri-o://6c22e10f7012b4049ecb61c8d2ef404ef48eedc4d44b6e4371f43ed207b04390" gracePeriod=30 Jan 09 13:53:26 crc kubenswrapper[4919]: I0109 13:53:26.845770 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0" containerName="ceilometer-notification-agent" containerID="cri-o://a3175274fbe2db5bd3a270ae471dfca2edce0cdc8f3bf4aa5975e424e656f678" gracePeriod=30 Jan 09 13:53:26 crc kubenswrapper[4919]: I0109 13:53:26.856223 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-8qq6l"] Jan 09 13:53:26 crc kubenswrapper[4919]: I0109 13:53:26.869519 4919 scope.go:117] "RemoveContainer" containerID="c5bbe5e8cc3b33e01382a61deac2b6e1e7eb9b6b458d0e098ba33f94d58dca51" Jan 09 13:53:26 crc kubenswrapper[4919]: I0109 13:53:26.885496 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-8qq6l"] Jan 09 13:53:26 crc kubenswrapper[4919]: I0109 13:53:26.892114 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.078650559 podStartE2EDuration="6.892082021s" podCreationTimestamp="2026-01-09 13:53:20 +0000 UTC" firstStartedPulling="2026-01-09 13:53:21.644964368 +0000 
UTC m=+1381.192803818" lastFinishedPulling="2026-01-09 13:53:25.45839583 +0000 UTC m=+1385.006235280" observedRunningTime="2026-01-09 13:53:26.885464918 +0000 UTC m=+1386.433304368" watchObservedRunningTime="2026-01-09 13:53:26.892082021 +0000 UTC m=+1386.439921471" Jan 09 13:53:27 crc kubenswrapper[4919]: I0109 13:53:27.858985 4919 generic.go:334] "Generic (PLEG): container finished" podID="61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0" containerID="e15c81229587d2c4bdb97213d411afaa951c8117e0ca0bb082dcd1ab3c0bafd8" exitCode=0 Jan 09 13:53:27 crc kubenswrapper[4919]: I0109 13:53:27.859300 4919 generic.go:334] "Generic (PLEG): container finished" podID="61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0" containerID="6c22e10f7012b4049ecb61c8d2ef404ef48eedc4d44b6e4371f43ed207b04390" exitCode=2 Jan 09 13:53:27 crc kubenswrapper[4919]: I0109 13:53:27.859077 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0","Type":"ContainerDied","Data":"e15c81229587d2c4bdb97213d411afaa951c8117e0ca0bb082dcd1ab3c0bafd8"} Jan 09 13:53:27 crc kubenswrapper[4919]: I0109 13:53:27.859355 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0","Type":"ContainerDied","Data":"6c22e10f7012b4049ecb61c8d2ef404ef48eedc4d44b6e4371f43ed207b04390"} Jan 09 13:53:27 crc kubenswrapper[4919]: I0109 13:53:27.859368 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0","Type":"ContainerDied","Data":"a3175274fbe2db5bd3a270ae471dfca2edce0cdc8f3bf4aa5975e424e656f678"} Jan 09 13:53:27 crc kubenswrapper[4919]: I0109 13:53:27.859311 4919 generic.go:334] "Generic (PLEG): container finished" podID="61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0" containerID="a3175274fbe2db5bd3a270ae471dfca2edce0cdc8f3bf4aa5975e424e656f678" exitCode=0 Jan 09 13:53:28 crc kubenswrapper[4919]: I0109 13:53:28.765591 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="966e8c47-7429-4435-87ce-44cc8af93cea" path="/var/lib/kubelet/pods/966e8c47-7429-4435-87ce-44cc8af93cea/volumes" Jan 09 13:53:28 crc kubenswrapper[4919]: I0109 13:53:28.883462 4919 generic.go:334] "Generic (PLEG): container finished" podID="61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0" containerID="e80d65fc9725cce982a7a94493672d519df1a6ffced883e9115035b5b7b58c75" exitCode=0 Jan 09 13:53:28 crc kubenswrapper[4919]: I0109 13:53:28.883568 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0","Type":"ContainerDied","Data":"e80d65fc9725cce982a7a94493672d519df1a6ffced883e9115035b5b7b58c75"} Jan 09 13:53:29 crc kubenswrapper[4919]: I0109 13:53:29.179239 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 09 13:53:29 crc kubenswrapper[4919]: I0109 13:53:29.349080 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0-combined-ca-bundle\") pod \"61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0\" (UID: \"61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0\") " Jan 09 13:53:29 crc kubenswrapper[4919]: I0109 13:53:29.349196 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0-sg-core-conf-yaml\") pod \"61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0\" (UID: \"61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0\") " Jan 09 13:53:29 crc kubenswrapper[4919]: I0109 13:53:29.349336 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0-log-httpd\") pod \"61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0\" (UID: \"61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0\") " Jan 09 13:53:29 crc kubenswrapper[4919]: I0109 13:53:29.349390 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0-run-httpd\") pod \"61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0\" (UID: \"61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0\") " Jan 09 13:53:29 crc kubenswrapper[4919]: I0109 13:53:29.349442 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dh6w9\" (UniqueName: \"kubernetes.io/projected/61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0-kube-api-access-dh6w9\") pod \"61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0\" (UID: \"61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0\") " Jan 09 13:53:29 crc kubenswrapper[4919]: I0109 13:53:29.349496 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0-config-data\") pod \"61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0\" (UID: \"61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0\") " Jan 09 13:53:29 crc kubenswrapper[4919]: I0109 13:53:29.349566 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0-scripts\") pod \"61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0\" (UID: \"61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0\") " Jan 09 13:53:29 crc kubenswrapper[4919]: I0109 13:53:29.349589 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0-ceilometer-tls-certs\") pod \"61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0\" (UID: \"61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0\") " Jan 09 13:53:29 crc kubenswrapper[4919]: I0109 13:53:29.350295 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0" (UID: "61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:53:29 crc kubenswrapper[4919]: I0109 13:53:29.352128 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0" (UID: "61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:53:29 crc kubenswrapper[4919]: I0109 13:53:29.355599 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0-scripts" (OuterVolumeSpecName: "scripts") pod "61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0" (UID: "61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:53:29 crc kubenswrapper[4919]: I0109 13:53:29.355620 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0-kube-api-access-dh6w9" (OuterVolumeSpecName: "kube-api-access-dh6w9") pod "61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0" (UID: "61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0"). InnerVolumeSpecName "kube-api-access-dh6w9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:53:29 crc kubenswrapper[4919]: I0109 13:53:29.413107 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0" (UID: "61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:53:29 crc kubenswrapper[4919]: I0109 13:53:29.421928 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0" (UID: "61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:53:29 crc kubenswrapper[4919]: I0109 13:53:29.451994 4919 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 09 13:53:29 crc kubenswrapper[4919]: I0109 13:53:29.452026 4919 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 09 13:53:29 crc kubenswrapper[4919]: I0109 13:53:29.452035 4919 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 09 13:53:29 crc kubenswrapper[4919]: I0109 13:53:29.452046 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dh6w9\" (UniqueName: \"kubernetes.io/projected/61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0-kube-api-access-dh6w9\") on node \"crc\" DevicePath \"\"" Jan 09 13:53:29 crc kubenswrapper[4919]: I0109 13:53:29.452056 4919 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 13:53:29 crc kubenswrapper[4919]: I0109 13:53:29.452065 4919 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 09 13:53:29 crc kubenswrapper[4919]: I0109 13:53:29.460592 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0" (UID: "61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:53:29 crc kubenswrapper[4919]: I0109 13:53:29.486010 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0-config-data" (OuterVolumeSpecName: "config-data") pod "61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0" (UID: "61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:53:29 crc kubenswrapper[4919]: I0109 13:53:29.554656 4919 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 13:53:29 crc kubenswrapper[4919]: I0109 13:53:29.554698 4919 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 13:53:29 crc kubenswrapper[4919]: I0109 13:53:29.897290 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0","Type":"ContainerDied","Data":"039a53a16289f09eadcb8f188c652f8fd073c855b4cee8bc9ae427ce89bfe29b"} Jan 09 13:53:29 crc kubenswrapper[4919]: I0109 13:53:29.897394 4919 scope.go:117] "RemoveContainer" containerID="e15c81229587d2c4bdb97213d411afaa951c8117e0ca0bb082dcd1ab3c0bafd8" Jan 09 13:53:29 crc kubenswrapper[4919]: I0109 13:53:29.897325 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 09 13:53:29 crc kubenswrapper[4919]: I0109 13:53:29.916161 4919 scope.go:117] "RemoveContainer" containerID="6c22e10f7012b4049ecb61c8d2ef404ef48eedc4d44b6e4371f43ed207b04390" Jan 09 13:53:29 crc kubenswrapper[4919]: I0109 13:53:29.952436 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 09 13:53:29 crc kubenswrapper[4919]: I0109 13:53:29.954824 4919 scope.go:117] "RemoveContainer" containerID="a3175274fbe2db5bd3a270ae471dfca2edce0cdc8f3bf4aa5975e424e656f678" Jan 09 13:53:29 crc kubenswrapper[4919]: I0109 13:53:29.968673 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 09 13:53:29 crc kubenswrapper[4919]: I0109 13:53:29.995682 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 09 13:53:29 crc kubenswrapper[4919]: E0109 13:53:29.996316 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0" containerName="proxy-httpd" Jan 09 13:53:29 crc kubenswrapper[4919]: I0109 13:53:29.996346 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0" containerName="proxy-httpd" Jan 09 13:53:29 crc kubenswrapper[4919]: E0109 13:53:29.996368 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0" containerName="ceilometer-central-agent" Jan 09 13:53:29 crc kubenswrapper[4919]: I0109 13:53:29.996377 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0" containerName="ceilometer-central-agent" Jan 09 13:53:29 crc kubenswrapper[4919]: E0109 13:53:29.996407 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="966e8c47-7429-4435-87ce-44cc8af93cea" containerName="init" Jan 09 13:53:29 crc kubenswrapper[4919]: I0109 13:53:29.996415 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="966e8c47-7429-4435-87ce-44cc8af93cea" containerName="init" Jan 09 13:53:29 crc kubenswrapper[4919]: E0109 13:53:29.996436 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0" containerName="ceilometer-notification-agent" Jan 09 13:53:29 crc kubenswrapper[4919]: I0109 13:53:29.996448 4919 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0" containerName="ceilometer-notification-agent" Jan 09 13:53:29 crc kubenswrapper[4919]: E0109 13:53:29.996465 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0" containerName="sg-core" Jan 09 13:53:29 crc kubenswrapper[4919]: I0109 13:53:29.996473 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0" containerName="sg-core" Jan 09 13:53:29 crc kubenswrapper[4919]: E0109 13:53:29.996491 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="966e8c47-7429-4435-87ce-44cc8af93cea" containerName="dnsmasq-dns" Jan 09 13:53:29 crc kubenswrapper[4919]: I0109 13:53:29.996500 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="966e8c47-7429-4435-87ce-44cc8af93cea" containerName="dnsmasq-dns" Jan 09 13:53:30 crc kubenswrapper[4919]: I0109 13:53:29.996783 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0" containerName="proxy-httpd" Jan 09 13:53:30 crc kubenswrapper[4919]: I0109 13:53:29.996814 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0" containerName="ceilometer-central-agent" Jan 09 13:53:30 crc kubenswrapper[4919]: I0109 13:53:29.996825 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="966e8c47-7429-4435-87ce-44cc8af93cea" containerName="dnsmasq-dns" Jan 09 13:53:30 crc kubenswrapper[4919]: I0109 13:53:29.996840 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0" containerName="sg-core" Jan 09 13:53:30 crc kubenswrapper[4919]: I0109 13:53:29.996852 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0" containerName="ceilometer-notification-agent" Jan 09 13:53:30 crc kubenswrapper[4919]: I0109 13:53:29.999159 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 09 13:53:30 crc kubenswrapper[4919]: I0109 13:53:30.002347 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 09 13:53:30 crc kubenswrapper[4919]: I0109 13:53:30.002361 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 09 13:53:30 crc kubenswrapper[4919]: I0109 13:53:30.003810 4919 scope.go:117] "RemoveContainer" containerID="e80d65fc9725cce982a7a94493672d519df1a6ffced883e9115035b5b7b58c75" Jan 09 13:53:30 crc kubenswrapper[4919]: I0109 13:53:30.004135 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 09 13:53:30 crc kubenswrapper[4919]: I0109 13:53:30.009952 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 09 13:53:30 crc kubenswrapper[4919]: I0109 13:53:30.167960 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c31d277-b08a-41e0-9f01-95ea17af82f4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2c31d277-b08a-41e0-9f01-95ea17af82f4\") " pod="openstack/ceilometer-0" Jan 09 13:53:30 crc kubenswrapper[4919]: I0109 13:53:30.168076 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c31d277-b08a-41e0-9f01-95ea17af82f4-scripts\") pod \"ceilometer-0\" (UID: \"2c31d277-b08a-41e0-9f01-95ea17af82f4\") " pod="openstack/ceilometer-0" Jan 09 13:53:30 crc kubenswrapper[4919]: I0109 13:53:30.168317 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c31d277-b08a-41e0-9f01-95ea17af82f4-config-data\") pod \"ceilometer-0\" (UID: \"2c31d277-b08a-41e0-9f01-95ea17af82f4\") " pod="openstack/ceilometer-0" Jan 09 13:53:30 crc kubenswrapper[4919]: I0109 13:53:30.168385 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2c31d277-b08a-41e0-9f01-95ea17af82f4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2c31d277-b08a-41e0-9f01-95ea17af82f4\") " pod="openstack/ceilometer-0" Jan 09 13:53:30 crc kubenswrapper[4919]: I0109 13:53:30.168469 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2c31d277-b08a-41e0-9f01-95ea17af82f4-run-httpd\") pod \"ceilometer-0\" (UID: \"2c31d277-b08a-41e0-9f01-95ea17af82f4\") " pod="openstack/ceilometer-0" Jan 09 13:53:30 crc kubenswrapper[4919]: I0109 13:53:30.168539 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c31d277-b08a-41e0-9f01-95ea17af82f4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2c31d277-b08a-41e0-9f01-95ea17af82f4\") " pod="openstack/ceilometer-0" Jan 09 13:53:30 crc kubenswrapper[4919]: I0109 13:53:30.168906 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2k75\" (UniqueName: \"kubernetes.io/projected/2c31d277-b08a-41e0-9f01-95ea17af82f4-kube-api-access-x2k75\") pod \"ceilometer-0\" (UID: \"2c31d277-b08a-41e0-9f01-95ea17af82f4\") " pod="openstack/ceilometer-0" Jan 09 13:53:30 crc kubenswrapper[4919]: 
I0109 13:53:30.168993 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2c31d277-b08a-41e0-9f01-95ea17af82f4-log-httpd\") pod \"ceilometer-0\" (UID: \"2c31d277-b08a-41e0-9f01-95ea17af82f4\") " pod="openstack/ceilometer-0" Jan 09 13:53:30 crc kubenswrapper[4919]: I0109 13:53:30.270772 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2c31d277-b08a-41e0-9f01-95ea17af82f4-log-httpd\") pod \"ceilometer-0\" (UID: \"2c31d277-b08a-41e0-9f01-95ea17af82f4\") " pod="openstack/ceilometer-0" Jan 09 13:53:30 crc kubenswrapper[4919]: I0109 13:53:30.270841 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c31d277-b08a-41e0-9f01-95ea17af82f4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2c31d277-b08a-41e0-9f01-95ea17af82f4\") " pod="openstack/ceilometer-0" Jan 09 13:53:30 crc kubenswrapper[4919]: I0109 13:53:30.270865 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c31d277-b08a-41e0-9f01-95ea17af82f4-scripts\") pod \"ceilometer-0\" (UID: \"2c31d277-b08a-41e0-9f01-95ea17af82f4\") " pod="openstack/ceilometer-0" Jan 09 13:53:30 crc kubenswrapper[4919]: I0109 13:53:30.270922 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c31d277-b08a-41e0-9f01-95ea17af82f4-config-data\") pod \"ceilometer-0\" (UID: \"2c31d277-b08a-41e0-9f01-95ea17af82f4\") " pod="openstack/ceilometer-0" Jan 09 13:53:30 crc kubenswrapper[4919]: I0109 13:53:30.270947 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2c31d277-b08a-41e0-9f01-95ea17af82f4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2c31d277-b08a-41e0-9f01-95ea17af82f4\") " pod="openstack/ceilometer-0" Jan 09 13:53:30 crc kubenswrapper[4919]: I0109 13:53:30.270976 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2c31d277-b08a-41e0-9f01-95ea17af82f4-run-httpd\") pod \"ceilometer-0\" (UID: \"2c31d277-b08a-41e0-9f01-95ea17af82f4\") " pod="openstack/ceilometer-0" Jan 09 13:53:30 crc kubenswrapper[4919]: I0109 13:53:30.271004 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c31d277-b08a-41e0-9f01-95ea17af82f4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2c31d277-b08a-41e0-9f01-95ea17af82f4\") " pod="openstack/ceilometer-0" Jan 09 13:53:30 crc kubenswrapper[4919]: I0109 13:53:30.271065 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2k75\" (UniqueName: \"kubernetes.io/projected/2c31d277-b08a-41e0-9f01-95ea17af82f4-kube-api-access-x2k75\") pod \"ceilometer-0\" (UID: \"2c31d277-b08a-41e0-9f01-95ea17af82f4\") " pod="openstack/ceilometer-0" Jan 09 13:53:30 crc kubenswrapper[4919]: I0109 13:53:30.271840 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2c31d277-b08a-41e0-9f01-95ea17af82f4-run-httpd\") pod \"ceilometer-0\" (UID: \"2c31d277-b08a-41e0-9f01-95ea17af82f4\") " pod="openstack/ceilometer-0" Jan 09 13:53:30 crc kubenswrapper[4919]: I0109 
13:53:30.271890 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2c31d277-b08a-41e0-9f01-95ea17af82f4-log-httpd\") pod \"ceilometer-0\" (UID: \"2c31d277-b08a-41e0-9f01-95ea17af82f4\") " pod="openstack/ceilometer-0" Jan 09 13:53:30 crc kubenswrapper[4919]: I0109 13:53:30.275143 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c31d277-b08a-41e0-9f01-95ea17af82f4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2c31d277-b08a-41e0-9f01-95ea17af82f4\") " pod="openstack/ceilometer-0" Jan 09 13:53:30 crc kubenswrapper[4919]: I0109 13:53:30.275700 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2c31d277-b08a-41e0-9f01-95ea17af82f4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2c31d277-b08a-41e0-9f01-95ea17af82f4\") " pod="openstack/ceilometer-0" Jan 09 13:53:30 crc kubenswrapper[4919]: I0109 13:53:30.275721 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c31d277-b08a-41e0-9f01-95ea17af82f4-config-data\") pod \"ceilometer-0\" (UID: \"2c31d277-b08a-41e0-9f01-95ea17af82f4\") " pod="openstack/ceilometer-0" Jan 09 13:53:30 crc kubenswrapper[4919]: I0109 13:53:30.276189 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c31d277-b08a-41e0-9f01-95ea17af82f4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2c31d277-b08a-41e0-9f01-95ea17af82f4\") " pod="openstack/ceilometer-0" Jan 09 13:53:30 crc kubenswrapper[4919]: I0109 13:53:30.282853 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c31d277-b08a-41e0-9f01-95ea17af82f4-scripts\") pod \"ceilometer-0\" (UID: \"2c31d277-b08a-41e0-9f01-95ea17af82f4\") " pod="openstack/ceilometer-0" Jan 09 13:53:30 crc kubenswrapper[4919]: I0109 13:53:30.289730 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2k75\" (UniqueName: \"kubernetes.io/projected/2c31d277-b08a-41e0-9f01-95ea17af82f4-kube-api-access-x2k75\") pod \"ceilometer-0\" (UID: \"2c31d277-b08a-41e0-9f01-95ea17af82f4\") " pod="openstack/ceilometer-0" Jan 09 13:53:30 crc kubenswrapper[4919]: I0109 13:53:30.330687 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 09 13:53:30 crc kubenswrapper[4919]: I0109 13:53:30.763647 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0" path="/var/lib/kubelet/pods/61cf4f92-cc7e-4bb2-a0a9-a0690775b3d0/volumes" Jan 09 13:53:30 crc kubenswrapper[4919]: I0109 13:53:30.848904 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 09 13:53:30 crc kubenswrapper[4919]: I0109 13:53:30.919753 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2c31d277-b08a-41e0-9f01-95ea17af82f4","Type":"ContainerStarted","Data":"1976fecf43c044d53bb187c7eaa2f3058be8c7f39d32f658023161d0b0a4951e"} Jan 09 13:53:31 crc kubenswrapper[4919]: I0109 13:53:31.931117 4919 generic.go:334] "Generic (PLEG): container finished" podID="c794cf2c-22d5-44dc-8bff-4bbdaca37867" containerID="680956d31dffa408279009281653daf482a7f7880c7fd4f97bdb24069dcfbc95" exitCode=0 Jan 09 13:53:31 crc kubenswrapper[4919]: I0109 13:53:31.931230 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-svt5n" event={"ID":"c794cf2c-22d5-44dc-8bff-4bbdaca37867","Type":"ContainerDied","Data":"680956d31dffa408279009281653daf482a7f7880c7fd4f97bdb24069dcfbc95"} Jan 09 13:53:31 crc kubenswrapper[4919]: I0109 13:53:31.933558 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2c31d277-b08a-41e0-9f01-95ea17af82f4","Type":"ContainerStarted","Data":"81ea73c8585dafec3c15143dd0859622b6a6e7de403b12a021e0c477fffe8fb3"} Jan 09 13:53:32 crc kubenswrapper[4919]: I0109 13:53:32.188601 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 09 13:53:32 crc kubenswrapper[4919]: I0109 13:53:32.188654 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 09 13:53:32 crc kubenswrapper[4919]: I0109 13:53:32.959628 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2c31d277-b08a-41e0-9f01-95ea17af82f4","Type":"ContainerStarted","Data":"8ea0f5f159781c629511ad5a956f3a45f2c8af1e129f149d919c459a143c8dbb"} Jan 09 13:53:33 crc kubenswrapper[4919]: I0109 13:53:33.213369 4919 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1002a2ff-2366-4c32-b1cd-ad66959e6c39" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 09 13:53:33 crc kubenswrapper[4919]: I0109 13:53:33.213458 4919 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1002a2ff-2366-4c32-b1cd-ad66959e6c39" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 09 13:53:33 crc kubenswrapper[4919]: I0109 13:53:33.520956 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-svt5n" Jan 09 13:53:33 crc kubenswrapper[4919]: I0109 13:53:33.557633 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c794cf2c-22d5-44dc-8bff-4bbdaca37867-scripts\") pod \"c794cf2c-22d5-44dc-8bff-4bbdaca37867\" (UID: \"c794cf2c-22d5-44dc-8bff-4bbdaca37867\") " Jan 09 13:53:33 crc kubenswrapper[4919]: I0109 13:53:33.557705 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5h8ns\" (UniqueName: \"kubernetes.io/projected/c794cf2c-22d5-44dc-8bff-4bbdaca37867-kube-api-access-5h8ns\") pod \"c794cf2c-22d5-44dc-8bff-4bbdaca37867\" (UID: \"c794cf2c-22d5-44dc-8bff-4bbdaca37867\") " Jan 09 13:53:33 crc kubenswrapper[4919]: I0109 13:53:33.557740 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c794cf2c-22d5-44dc-8bff-4bbdaca37867-config-data\") pod \"c794cf2c-22d5-44dc-8bff-4bbdaca37867\" (UID: \"c794cf2c-22d5-44dc-8bff-4bbdaca37867\") " Jan 09 13:53:33 crc kubenswrapper[4919]: I0109 13:53:33.557829 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c794cf2c-22d5-44dc-8bff-4bbdaca37867-combined-ca-bundle\") pod \"c794cf2c-22d5-44dc-8bff-4bbdaca37867\" (UID: \"c794cf2c-22d5-44dc-8bff-4bbdaca37867\") " Jan 09 13:53:33 crc kubenswrapper[4919]: I0109 13:53:33.589532 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c794cf2c-22d5-44dc-8bff-4bbdaca37867-kube-api-access-5h8ns" (OuterVolumeSpecName: "kube-api-access-5h8ns") pod "c794cf2c-22d5-44dc-8bff-4bbdaca37867" (UID: "c794cf2c-22d5-44dc-8bff-4bbdaca37867"). InnerVolumeSpecName "kube-api-access-5h8ns". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:53:33 crc kubenswrapper[4919]: I0109 13:53:33.596200 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c794cf2c-22d5-44dc-8bff-4bbdaca37867-scripts" (OuterVolumeSpecName: "scripts") pod "c794cf2c-22d5-44dc-8bff-4bbdaca37867" (UID: "c794cf2c-22d5-44dc-8bff-4bbdaca37867"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:53:33 crc kubenswrapper[4919]: I0109 13:53:33.608244 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c794cf2c-22d5-44dc-8bff-4bbdaca37867-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c794cf2c-22d5-44dc-8bff-4bbdaca37867" (UID: "c794cf2c-22d5-44dc-8bff-4bbdaca37867"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:53:33 crc kubenswrapper[4919]: I0109 13:53:33.626320 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c794cf2c-22d5-44dc-8bff-4bbdaca37867-config-data" (OuterVolumeSpecName: "config-data") pod "c794cf2c-22d5-44dc-8bff-4bbdaca37867" (UID: "c794cf2c-22d5-44dc-8bff-4bbdaca37867"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:53:33 crc kubenswrapper[4919]: I0109 13:53:33.659531 4919 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c794cf2c-22d5-44dc-8bff-4bbdaca37867-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 13:53:33 crc kubenswrapper[4919]: I0109 13:53:33.659591 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5h8ns\" (UniqueName: \"kubernetes.io/projected/c794cf2c-22d5-44dc-8bff-4bbdaca37867-kube-api-access-5h8ns\") on node \"crc\" DevicePath \"\"" Jan 09 13:53:33 crc kubenswrapper[4919]: I0109 13:53:33.659606 4919 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c794cf2c-22d5-44dc-8bff-4bbdaca37867-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 13:53:33 crc kubenswrapper[4919]: I0109 13:53:33.659620 4919 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c794cf2c-22d5-44dc-8bff-4bbdaca37867-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 13:53:33 crc kubenswrapper[4919]: I0109 13:53:33.972107 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2c31d277-b08a-41e0-9f01-95ea17af82f4","Type":"ContainerStarted","Data":"69a1793b2209bf1992bf8e0bcc273f9fba7580cca30f6c2909c733ce72925510"} Jan 09 13:53:33 crc kubenswrapper[4919]: I0109 13:53:33.979196 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-svt5n" event={"ID":"c794cf2c-22d5-44dc-8bff-4bbdaca37867","Type":"ContainerDied","Data":"24dcc579929406fb92b8b5558a4233b25dfe0c35e3f3d82a6268225b9a3b1921"} Jan 09 13:53:33 crc kubenswrapper[4919]: I0109 13:53:33.979246 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24dcc579929406fb92b8b5558a4233b25dfe0c35e3f3d82a6268225b9a3b1921" Jan 09 13:53:33 crc kubenswrapper[4919]: I0109 13:53:33.979302 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-svt5n" Jan 09 13:53:34 crc kubenswrapper[4919]: I0109 13:53:34.147967 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 09 13:53:34 crc kubenswrapper[4919]: I0109 13:53:34.148600 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1002a2ff-2366-4c32-b1cd-ad66959e6c39" containerName="nova-api-log" containerID="cri-o://914dca63744881bfc9656174c423f5c91582a1abb34fb82ea4cb59d50c8b1e6d" gracePeriod=30 Jan 09 13:53:34 crc kubenswrapper[4919]: I0109 13:53:34.148692 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1002a2ff-2366-4c32-b1cd-ad66959e6c39" containerName="nova-api-api" containerID="cri-o://5f2249369d91c8338266346fccb2f0ee17a70e741ee40268bc55773e039141f2" gracePeriod=30 Jan 09 13:53:34 crc kubenswrapper[4919]: I0109 13:53:34.163787 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 13:53:34 crc kubenswrapper[4919]: I0109 13:53:34.164042 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="2cfde4b8-ca80-46d0-9e92-ca9102760082" containerName="nova-scheduler-scheduler" containerID="cri-o://1a18feb30c2766e8e0968242218506d8d38bda78e0350697e39dcc5eb675a325" gracePeriod=30 Jan 09 13:53:34 crc kubenswrapper[4919]: I0109 13:53:34.222843 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 13:53:34 crc kubenswrapper[4919]: I0109 13:53:34.223660 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8b1b9107-6ac0-4e66-bbbd-11435fac4798" containerName="nova-metadata-metadata" containerID="cri-o://179a9ecc821e716887543484cbe2ad170d743ccda6550091017cd291e5872fd7" gracePeriod=30 Jan 09 13:53:34 crc kubenswrapper[4919]: I0109 13:53:34.223202 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8b1b9107-6ac0-4e66-bbbd-11435fac4798" containerName="nova-metadata-log" containerID="cri-o://cf3f84382ba0ab543c78d4fe6699003014a42ef2598963df0d46947805b57b7c" gracePeriod=30 Jan 09 13:53:34 crc kubenswrapper[4919]: I0109 13:53:34.988897 4919 generic.go:334] "Generic (PLEG): container finished" podID="8b1b9107-6ac0-4e66-bbbd-11435fac4798" containerID="cf3f84382ba0ab543c78d4fe6699003014a42ef2598963df0d46947805b57b7c" exitCode=143 Jan 09 13:53:34 crc kubenswrapper[4919]: I0109 13:53:34.988985 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8b1b9107-6ac0-4e66-bbbd-11435fac4798","Type":"ContainerDied","Data":"cf3f84382ba0ab543c78d4fe6699003014a42ef2598963df0d46947805b57b7c"} Jan 09 13:53:34 crc kubenswrapper[4919]: I0109 13:53:34.992493 4919 generic.go:334] "Generic (PLEG): container finished" podID="1002a2ff-2366-4c32-b1cd-ad66959e6c39" containerID="914dca63744881bfc9656174c423f5c91582a1abb34fb82ea4cb59d50c8b1e6d" exitCode=143 Jan 09 13:53:34 crc kubenswrapper[4919]: I0109 13:53:34.992555 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1002a2ff-2366-4c32-b1cd-ad66959e6c39","Type":"ContainerDied","Data":"914dca63744881bfc9656174c423f5c91582a1abb34fb82ea4cb59d50c8b1e6d"} Jan 09 13:53:36 crc kubenswrapper[4919]: I0109 13:53:36.007518 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"2c31d277-b08a-41e0-9f01-95ea17af82f4","Type":"ContainerStarted","Data":"963ab3d741bd092c95bef06fd8ad5b2e22d89896c7f8c1b643fb7322b1651bb8"} Jan 09 13:53:36 crc kubenswrapper[4919]: I0109 13:53:36.008454 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 09 13:53:36 crc kubenswrapper[4919]: I0109 13:53:36.012265 4919 generic.go:334] "Generic (PLEG): container finished" podID="2cfde4b8-ca80-46d0-9e92-ca9102760082" containerID="1a18feb30c2766e8e0968242218506d8d38bda78e0350697e39dcc5eb675a325" exitCode=0 Jan 09 13:53:36 crc kubenswrapper[4919]: I0109 13:53:36.012307 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2cfde4b8-ca80-46d0-9e92-ca9102760082","Type":"ContainerDied","Data":"1a18feb30c2766e8e0968242218506d8d38bda78e0350697e39dcc5eb675a325"} Jan 09 13:53:36 crc kubenswrapper[4919]: I0109 13:53:36.033332 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.499759041 podStartE2EDuration="7.033312065s" podCreationTimestamp="2026-01-09 13:53:29 +0000 UTC" firstStartedPulling="2026-01-09 13:53:30.850848604 +0000 UTC m=+1390.398688054" lastFinishedPulling="2026-01-09 13:53:35.384401628 +0000 UTC m=+1394.932241078" observedRunningTime="2026-01-09 13:53:36.02582549 +0000 UTC m=+1395.573664940" watchObservedRunningTime="2026-01-09 13:53:36.033312065 +0000 UTC m=+1395.581151515" Jan 09 13:53:36 crc kubenswrapper[4919]: I0109 13:53:36.438417 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 09 13:53:36 crc kubenswrapper[4919]: I0109 13:53:36.518656 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cfde4b8-ca80-46d0-9e92-ca9102760082-config-data\") pod \"2cfde4b8-ca80-46d0-9e92-ca9102760082\" (UID: \"2cfde4b8-ca80-46d0-9e92-ca9102760082\") " Jan 09 13:53:36 crc kubenswrapper[4919]: I0109 13:53:36.518725 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cfde4b8-ca80-46d0-9e92-ca9102760082-combined-ca-bundle\") pod \"2cfde4b8-ca80-46d0-9e92-ca9102760082\" (UID: \"2cfde4b8-ca80-46d0-9e92-ca9102760082\") " Jan 09 13:53:36 crc kubenswrapper[4919]: I0109 13:53:36.518832 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdcf4\" (UniqueName: \"kubernetes.io/projected/2cfde4b8-ca80-46d0-9e92-ca9102760082-kube-api-access-vdcf4\") pod \"2cfde4b8-ca80-46d0-9e92-ca9102760082\" (UID: \"2cfde4b8-ca80-46d0-9e92-ca9102760082\") " Jan 09 13:53:36 crc kubenswrapper[4919]: I0109 13:53:36.536207 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cfde4b8-ca80-46d0-9e92-ca9102760082-kube-api-access-vdcf4" (OuterVolumeSpecName: "kube-api-access-vdcf4") pod "2cfde4b8-ca80-46d0-9e92-ca9102760082" (UID: "2cfde4b8-ca80-46d0-9e92-ca9102760082"). InnerVolumeSpecName "kube-api-access-vdcf4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:53:36 crc kubenswrapper[4919]: I0109 13:53:36.579437 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cfde4b8-ca80-46d0-9e92-ca9102760082-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2cfde4b8-ca80-46d0-9e92-ca9102760082" (UID: "2cfde4b8-ca80-46d0-9e92-ca9102760082"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:53:36 crc kubenswrapper[4919]: I0109 13:53:36.586933 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cfde4b8-ca80-46d0-9e92-ca9102760082-config-data" (OuterVolumeSpecName: "config-data") pod "2cfde4b8-ca80-46d0-9e92-ca9102760082" (UID: "2cfde4b8-ca80-46d0-9e92-ca9102760082"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:53:36 crc kubenswrapper[4919]: I0109 13:53:36.646432 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdcf4\" (UniqueName: \"kubernetes.io/projected/2cfde4b8-ca80-46d0-9e92-ca9102760082-kube-api-access-vdcf4\") on node \"crc\" DevicePath \"\"" Jan 09 13:53:36 crc kubenswrapper[4919]: I0109 13:53:36.646482 4919 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cfde4b8-ca80-46d0-9e92-ca9102760082-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 13:53:36 crc kubenswrapper[4919]: I0109 13:53:36.646495 4919 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cfde4b8-ca80-46d0-9e92-ca9102760082-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 13:53:37 crc kubenswrapper[4919]: I0109 13:53:37.028493 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2cfde4b8-ca80-46d0-9e92-ca9102760082","Type":"ContainerDied","Data":"27f2d40e7dab20d6f6b71106a02551e337060f5ff2b528a856817089c9f56632"} Jan 09 13:53:37 crc kubenswrapper[4919]: I0109 13:53:37.028543 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 09 13:53:37 crc kubenswrapper[4919]: I0109 13:53:37.028853 4919 scope.go:117] "RemoveContainer" containerID="1a18feb30c2766e8e0968242218506d8d38bda78e0350697e39dcc5eb675a325" Jan 09 13:53:37 crc kubenswrapper[4919]: I0109 13:53:37.062028 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 13:53:37 crc kubenswrapper[4919]: I0109 13:53:37.072179 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 13:53:37 crc kubenswrapper[4919]: I0109 13:53:37.118508 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 13:53:37 crc kubenswrapper[4919]: E0109 13:53:37.119022 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c794cf2c-22d5-44dc-8bff-4bbdaca37867" containerName="nova-manage" Jan 09 13:53:37 crc kubenswrapper[4919]: I0109 13:53:37.119042 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="c794cf2c-22d5-44dc-8bff-4bbdaca37867" containerName="nova-manage" Jan 09 13:53:37 crc kubenswrapper[4919]: E0109 13:53:37.119077 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cfde4b8-ca80-46d0-9e92-ca9102760082" containerName="nova-scheduler-scheduler" Jan 09 13:53:37 crc kubenswrapper[4919]: I0109 13:53:37.119085 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cfde4b8-ca80-46d0-9e92-ca9102760082" containerName="nova-scheduler-scheduler" Jan 09 13:53:37 crc kubenswrapper[4919]: I0109 13:53:37.119355 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cfde4b8-ca80-46d0-9e92-ca9102760082" containerName="nova-scheduler-scheduler" Jan 09 13:53:37 crc kubenswrapper[4919]: I0109 13:53:37.119387 4919 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="c794cf2c-22d5-44dc-8bff-4bbdaca37867" containerName="nova-manage" Jan 09 13:53:37 crc kubenswrapper[4919]: I0109 13:53:37.120166 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 09 13:53:37 crc kubenswrapper[4919]: I0109 13:53:37.122655 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 09 13:53:37 crc kubenswrapper[4919]: I0109 13:53:37.127518 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 13:53:37 crc kubenswrapper[4919]: I0109 13:53:37.159053 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8b56a5e-6bc1-4366-87e6-81d8e4b8100b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c8b56a5e-6bc1-4366-87e6-81d8e4b8100b\") " pod="openstack/nova-scheduler-0" Jan 09 13:53:37 crc kubenswrapper[4919]: I0109 13:53:37.159193 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8b56a5e-6bc1-4366-87e6-81d8e4b8100b-config-data\") pod \"nova-scheduler-0\" (UID: \"c8b56a5e-6bc1-4366-87e6-81d8e4b8100b\") " pod="openstack/nova-scheduler-0" Jan 09 13:53:37 crc kubenswrapper[4919]: I0109 13:53:37.159322 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-226h8\" (UniqueName: \"kubernetes.io/projected/c8b56a5e-6bc1-4366-87e6-81d8e4b8100b-kube-api-access-226h8\") pod \"nova-scheduler-0\" (UID: \"c8b56a5e-6bc1-4366-87e6-81d8e4b8100b\") " pod="openstack/nova-scheduler-0" Jan 09 13:53:37 crc kubenswrapper[4919]: I0109 13:53:37.260737 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8b56a5e-6bc1-4366-87e6-81d8e4b8100b-config-data\") pod \"nova-scheduler-0\" (UID: \"c8b56a5e-6bc1-4366-87e6-81d8e4b8100b\") " pod="openstack/nova-scheduler-0" Jan 09 13:53:37 crc kubenswrapper[4919]: I0109 13:53:37.260803 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-226h8\" (UniqueName: \"kubernetes.io/projected/c8b56a5e-6bc1-4366-87e6-81d8e4b8100b-kube-api-access-226h8\") pod \"nova-scheduler-0\" (UID: \"c8b56a5e-6bc1-4366-87e6-81d8e4b8100b\") " pod="openstack/nova-scheduler-0" Jan 09 13:53:37 crc kubenswrapper[4919]: I0109 13:53:37.260917 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8b56a5e-6bc1-4366-87e6-81d8e4b8100b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c8b56a5e-6bc1-4366-87e6-81d8e4b8100b\") " pod="openstack/nova-scheduler-0" Jan 09 13:53:37 crc kubenswrapper[4919]: I0109 13:53:37.266585 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8b56a5e-6bc1-4366-87e6-81d8e4b8100b-config-data\") pod \"nova-scheduler-0\" (UID: \"c8b56a5e-6bc1-4366-87e6-81d8e4b8100b\") " pod="openstack/nova-scheduler-0" Jan 09 13:53:37 crc kubenswrapper[4919]: I0109 13:53:37.267591 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8b56a5e-6bc1-4366-87e6-81d8e4b8100b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c8b56a5e-6bc1-4366-87e6-81d8e4b8100b\") " 
pod="openstack/nova-scheduler-0" Jan 09 13:53:37 crc kubenswrapper[4919]: I0109 13:53:37.285109 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-226h8\" (UniqueName: \"kubernetes.io/projected/c8b56a5e-6bc1-4366-87e6-81d8e4b8100b-kube-api-access-226h8\") pod \"nova-scheduler-0\" (UID: \"c8b56a5e-6bc1-4366-87e6-81d8e4b8100b\") " pod="openstack/nova-scheduler-0" Jan 09 13:53:37 crc kubenswrapper[4919]: I0109 13:53:37.375982 4919 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="8b1b9107-6ac0-4e66-bbbd-11435fac4798" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": read tcp 10.217.0.2:48002->10.217.0.197:8775: read: connection reset by peer" Jan 09 13:53:37 crc kubenswrapper[4919]: I0109 13:53:37.376003 4919 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="8b1b9107-6ac0-4e66-bbbd-11435fac4798" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": read tcp 10.217.0.2:48014->10.217.0.197:8775: read: connection reset by peer" Jan 09 13:53:37 crc kubenswrapper[4919]: I0109 13:53:37.447787 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 09 13:53:37 crc kubenswrapper[4919]: I0109 13:53:37.872725 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 09 13:53:37 crc kubenswrapper[4919]: I0109 13:53:37.973399 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b1b9107-6ac0-4e66-bbbd-11435fac4798-logs\") pod \"8b1b9107-6ac0-4e66-bbbd-11435fac4798\" (UID: \"8b1b9107-6ac0-4e66-bbbd-11435fac4798\") " Jan 09 13:53:37 crc kubenswrapper[4919]: I0109 13:53:37.973462 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b1b9107-6ac0-4e66-bbbd-11435fac4798-combined-ca-bundle\") pod \"8b1b9107-6ac0-4e66-bbbd-11435fac4798\" (UID: \"8b1b9107-6ac0-4e66-bbbd-11435fac4798\") " Jan 09 13:53:37 crc kubenswrapper[4919]: I0109 13:53:37.973689 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b1b9107-6ac0-4e66-bbbd-11435fac4798-config-data\") pod \"8b1b9107-6ac0-4e66-bbbd-11435fac4798\" (UID: \"8b1b9107-6ac0-4e66-bbbd-11435fac4798\") " Jan 09 13:53:37 crc kubenswrapper[4919]: I0109 13:53:37.973734 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsdrj\" (UniqueName: \"kubernetes.io/projected/8b1b9107-6ac0-4e66-bbbd-11435fac4798-kube-api-access-xsdrj\") pod \"8b1b9107-6ac0-4e66-bbbd-11435fac4798\" (UID: \"8b1b9107-6ac0-4e66-bbbd-11435fac4798\") " Jan 09 13:53:37 crc kubenswrapper[4919]: I0109 13:53:37.973796 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b1b9107-6ac0-4e66-bbbd-11435fac4798-nova-metadata-tls-certs\") pod \"8b1b9107-6ac0-4e66-bbbd-11435fac4798\" (UID: \"8b1b9107-6ac0-4e66-bbbd-11435fac4798\") " Jan 09 13:53:37 crc kubenswrapper[4919]: I0109 13:53:37.974009 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b1b9107-6ac0-4e66-bbbd-11435fac4798-logs" (OuterVolumeSpecName: "logs") pod 
"8b1b9107-6ac0-4e66-bbbd-11435fac4798" (UID: "8b1b9107-6ac0-4e66-bbbd-11435fac4798"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:53:37 crc kubenswrapper[4919]: I0109 13:53:37.974945 4919 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b1b9107-6ac0-4e66-bbbd-11435fac4798-logs\") on node \"crc\" DevicePath \"\"" Jan 09 13:53:37 crc kubenswrapper[4919]: I0109 13:53:37.981149 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b1b9107-6ac0-4e66-bbbd-11435fac4798-kube-api-access-xsdrj" (OuterVolumeSpecName: "kube-api-access-xsdrj") pod "8b1b9107-6ac0-4e66-bbbd-11435fac4798" (UID: "8b1b9107-6ac0-4e66-bbbd-11435fac4798"). InnerVolumeSpecName "kube-api-access-xsdrj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:53:38 crc kubenswrapper[4919]: I0109 13:53:38.010905 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 13:53:38 crc kubenswrapper[4919]: I0109 13:53:38.021102 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b1b9107-6ac0-4e66-bbbd-11435fac4798-config-data" (OuterVolumeSpecName: "config-data") pod "8b1b9107-6ac0-4e66-bbbd-11435fac4798" (UID: "8b1b9107-6ac0-4e66-bbbd-11435fac4798"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:53:38 crc kubenswrapper[4919]: I0109 13:53:38.022118 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b1b9107-6ac0-4e66-bbbd-11435fac4798-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8b1b9107-6ac0-4e66-bbbd-11435fac4798" (UID: "8b1b9107-6ac0-4e66-bbbd-11435fac4798"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:53:38 crc kubenswrapper[4919]: I0109 13:53:38.045498 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c8b56a5e-6bc1-4366-87e6-81d8e4b8100b","Type":"ContainerStarted","Data":"73d0a4340e3ebb789312c3b2bb96bb35d505ecdb033fbc837170d3dad0c1aa50"} Jan 09 13:53:38 crc kubenswrapper[4919]: I0109 13:53:38.050762 4919 generic.go:334] "Generic (PLEG): container finished" podID="8b1b9107-6ac0-4e66-bbbd-11435fac4798" containerID="179a9ecc821e716887543484cbe2ad170d743ccda6550091017cd291e5872fd7" exitCode=0 Jan 09 13:53:38 crc kubenswrapper[4919]: I0109 13:53:38.050941 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 09 13:53:38 crc kubenswrapper[4919]: I0109 13:53:38.050999 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8b1b9107-6ac0-4e66-bbbd-11435fac4798","Type":"ContainerDied","Data":"179a9ecc821e716887543484cbe2ad170d743ccda6550091017cd291e5872fd7"} Jan 09 13:53:38 crc kubenswrapper[4919]: I0109 13:53:38.051043 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8b1b9107-6ac0-4e66-bbbd-11435fac4798","Type":"ContainerDied","Data":"80cdd50aa5f92f697a5746517cebbae06074c5bf690f479283800475bf7d2ef2"} Jan 09 13:53:38 crc kubenswrapper[4919]: I0109 13:53:38.051066 4919 scope.go:117] "RemoveContainer" containerID="179a9ecc821e716887543484cbe2ad170d743ccda6550091017cd291e5872fd7" Jan 09 13:53:38 crc kubenswrapper[4919]: I0109 13:53:38.070593 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b1b9107-6ac0-4e66-bbbd-11435fac4798-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "8b1b9107-6ac0-4e66-bbbd-11435fac4798" (UID: "8b1b9107-6ac0-4e66-bbbd-11435fac4798"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:53:38 crc kubenswrapper[4919]: I0109 13:53:38.079977 4919 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b1b9107-6ac0-4e66-bbbd-11435fac4798-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 13:53:38 crc kubenswrapper[4919]: I0109 13:53:38.080038 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsdrj\" (UniqueName: \"kubernetes.io/projected/8b1b9107-6ac0-4e66-bbbd-11435fac4798-kube-api-access-xsdrj\") on node \"crc\" DevicePath \"\"" Jan 09 13:53:38 crc kubenswrapper[4919]: I0109 13:53:38.080051 4919 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b1b9107-6ac0-4e66-bbbd-11435fac4798-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 09 13:53:38 crc kubenswrapper[4919]: I0109 13:53:38.080060 4919 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b1b9107-6ac0-4e66-bbbd-11435fac4798-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 13:53:38 crc kubenswrapper[4919]: I0109 13:53:38.084625 4919 scope.go:117] "RemoveContainer" containerID="cf3f84382ba0ab543c78d4fe6699003014a42ef2598963df0d46947805b57b7c" Jan 09 13:53:38 crc kubenswrapper[4919]: I0109 13:53:38.105616 4919 scope.go:117] "RemoveContainer" containerID="179a9ecc821e716887543484cbe2ad170d743ccda6550091017cd291e5872fd7" Jan 09 13:53:38 crc kubenswrapper[4919]: E0109 13:53:38.106178 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"179a9ecc821e716887543484cbe2ad170d743ccda6550091017cd291e5872fd7\": container with ID starting with 179a9ecc821e716887543484cbe2ad170d743ccda6550091017cd291e5872fd7 not found: ID does not exist" containerID="179a9ecc821e716887543484cbe2ad170d743ccda6550091017cd291e5872fd7" Jan 09 13:53:38 crc kubenswrapper[4919]: I0109 13:53:38.106253 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"179a9ecc821e716887543484cbe2ad170d743ccda6550091017cd291e5872fd7"} err="failed to get container status \"179a9ecc821e716887543484cbe2ad170d743ccda6550091017cd291e5872fd7\": 
rpc error: code = NotFound desc = could not find container \"179a9ecc821e716887543484cbe2ad170d743ccda6550091017cd291e5872fd7\": container with ID starting with 179a9ecc821e716887543484cbe2ad170d743ccda6550091017cd291e5872fd7 not found: ID does not exist" Jan 09 13:53:38 crc kubenswrapper[4919]: I0109 13:53:38.106284 4919 scope.go:117] "RemoveContainer" containerID="cf3f84382ba0ab543c78d4fe6699003014a42ef2598963df0d46947805b57b7c" Jan 09 13:53:38 crc kubenswrapper[4919]: E0109 13:53:38.106877 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf3f84382ba0ab543c78d4fe6699003014a42ef2598963df0d46947805b57b7c\": container with ID starting with cf3f84382ba0ab543c78d4fe6699003014a42ef2598963df0d46947805b57b7c not found: ID does not exist" containerID="cf3f84382ba0ab543c78d4fe6699003014a42ef2598963df0d46947805b57b7c" Jan 09 13:53:38 crc kubenswrapper[4919]: I0109 13:53:38.106923 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf3f84382ba0ab543c78d4fe6699003014a42ef2598963df0d46947805b57b7c"} err="failed to get container status \"cf3f84382ba0ab543c78d4fe6699003014a42ef2598963df0d46947805b57b7c\": rpc error: code = NotFound desc = could not find container \"cf3f84382ba0ab543c78d4fe6699003014a42ef2598963df0d46947805b57b7c\": container with ID starting with cf3f84382ba0ab543c78d4fe6699003014a42ef2598963df0d46947805b57b7c not found: ID does not exist" Jan 09 13:53:38 crc kubenswrapper[4919]: I0109 13:53:38.386936 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 13:53:38 crc kubenswrapper[4919]: I0109 13:53:38.397704 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 13:53:38 crc kubenswrapper[4919]: I0109 13:53:38.417786 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 09 13:53:38 crc kubenswrapper[4919]: E0109 13:53:38.418308 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b1b9107-6ac0-4e66-bbbd-11435fac4798" containerName="nova-metadata-log" Jan 09 13:53:38 crc kubenswrapper[4919]: I0109 13:53:38.418333 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b1b9107-6ac0-4e66-bbbd-11435fac4798" containerName="nova-metadata-log" Jan 09 13:53:38 crc kubenswrapper[4919]: E0109 13:53:38.418386 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b1b9107-6ac0-4e66-bbbd-11435fac4798" containerName="nova-metadata-metadata" Jan 09 13:53:38 crc kubenswrapper[4919]: I0109 13:53:38.418392 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b1b9107-6ac0-4e66-bbbd-11435fac4798" containerName="nova-metadata-metadata" Jan 09 13:53:38 crc kubenswrapper[4919]: I0109 13:53:38.418554 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b1b9107-6ac0-4e66-bbbd-11435fac4798" containerName="nova-metadata-log" Jan 09 13:53:38 crc kubenswrapper[4919]: I0109 13:53:38.418582 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b1b9107-6ac0-4e66-bbbd-11435fac4798" containerName="nova-metadata-metadata" Jan 09 13:53:38 crc kubenswrapper[4919]: I0109 13:53:38.419762 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 09 13:53:38 crc kubenswrapper[4919]: I0109 13:53:38.421868 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 09 13:53:38 crc kubenswrapper[4919]: I0109 13:53:38.426439 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 09 13:53:38 crc kubenswrapper[4919]: I0109 13:53:38.433020 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 13:53:38 crc kubenswrapper[4919]: I0109 13:53:38.488786 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/10d389ef-fb74-406c-a1cb-8a591b708726-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"10d389ef-fb74-406c-a1cb-8a591b708726\") " pod="openstack/nova-metadata-0" Jan 09 13:53:38 crc kubenswrapper[4919]: I0109 13:53:38.488842 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10d389ef-fb74-406c-a1cb-8a591b708726-logs\") pod \"nova-metadata-0\" (UID: \"10d389ef-fb74-406c-a1cb-8a591b708726\") " pod="openstack/nova-metadata-0" Jan 09 13:53:38 crc kubenswrapper[4919]: I0109 13:53:38.488916 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10d389ef-fb74-406c-a1cb-8a591b708726-config-data\") pod \"nova-metadata-0\" (UID: \"10d389ef-fb74-406c-a1cb-8a591b708726\") " pod="openstack/nova-metadata-0" Jan 09 13:53:38 crc kubenswrapper[4919]: I0109 13:53:38.488947 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10d389ef-fb74-406c-a1cb-8a591b708726-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"10d389ef-fb74-406c-a1cb-8a591b708726\") " pod="openstack/nova-metadata-0" Jan 09 13:53:38 crc kubenswrapper[4919]: I0109 13:53:38.489232 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftlk2\" (UniqueName: \"kubernetes.io/projected/10d389ef-fb74-406c-a1cb-8a591b708726-kube-api-access-ftlk2\") pod \"nova-metadata-0\" (UID: \"10d389ef-fb74-406c-a1cb-8a591b708726\") " pod="openstack/nova-metadata-0" Jan 09 13:53:38 crc kubenswrapper[4919]: I0109 13:53:38.591163 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10d389ef-fb74-406c-a1cb-8a591b708726-config-data\") pod \"nova-metadata-0\" (UID: \"10d389ef-fb74-406c-a1cb-8a591b708726\") " pod="openstack/nova-metadata-0" Jan 09 13:53:38 crc kubenswrapper[4919]: I0109 13:53:38.591238 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10d389ef-fb74-406c-a1cb-8a591b708726-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"10d389ef-fb74-406c-a1cb-8a591b708726\") " pod="openstack/nova-metadata-0" Jan 09 13:53:38 crc kubenswrapper[4919]: I0109 13:53:38.591305 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftlk2\" (UniqueName: \"kubernetes.io/projected/10d389ef-fb74-406c-a1cb-8a591b708726-kube-api-access-ftlk2\") pod \"nova-metadata-0\" (UID: \"10d389ef-fb74-406c-a1cb-8a591b708726\") " 
pod="openstack/nova-metadata-0" Jan 09 13:53:38 crc kubenswrapper[4919]: I0109 13:53:38.591461 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/10d389ef-fb74-406c-a1cb-8a591b708726-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"10d389ef-fb74-406c-a1cb-8a591b708726\") " pod="openstack/nova-metadata-0" Jan 09 13:53:38 crc kubenswrapper[4919]: I0109 13:53:38.591528 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10d389ef-fb74-406c-a1cb-8a591b708726-logs\") pod \"nova-metadata-0\" (UID: \"10d389ef-fb74-406c-a1cb-8a591b708726\") " pod="openstack/nova-metadata-0" Jan 09 13:53:38 crc kubenswrapper[4919]: I0109 13:53:38.591881 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10d389ef-fb74-406c-a1cb-8a591b708726-logs\") pod \"nova-metadata-0\" (UID: \"10d389ef-fb74-406c-a1cb-8a591b708726\") " pod="openstack/nova-metadata-0" Jan 09 13:53:38 crc kubenswrapper[4919]: I0109 13:53:38.598468 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10d389ef-fb74-406c-a1cb-8a591b708726-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"10d389ef-fb74-406c-a1cb-8a591b708726\") " pod="openstack/nova-metadata-0" Jan 09 13:53:38 crc kubenswrapper[4919]: I0109 13:53:38.598475 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10d389ef-fb74-406c-a1cb-8a591b708726-config-data\") pod \"nova-metadata-0\" (UID: \"10d389ef-fb74-406c-a1cb-8a591b708726\") " pod="openstack/nova-metadata-0" Jan 09 13:53:38 crc kubenswrapper[4919]: I0109 13:53:38.600738 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/10d389ef-fb74-406c-a1cb-8a591b708726-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"10d389ef-fb74-406c-a1cb-8a591b708726\") " pod="openstack/nova-metadata-0" Jan 09 13:53:38 crc kubenswrapper[4919]: I0109 13:53:38.613830 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftlk2\" (UniqueName: \"kubernetes.io/projected/10d389ef-fb74-406c-a1cb-8a591b708726-kube-api-access-ftlk2\") pod \"nova-metadata-0\" (UID: \"10d389ef-fb74-406c-a1cb-8a591b708726\") " pod="openstack/nova-metadata-0" Jan 09 13:53:38 crc kubenswrapper[4919]: I0109 13:53:38.773552 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cfde4b8-ca80-46d0-9e92-ca9102760082" path="/var/lib/kubelet/pods/2cfde4b8-ca80-46d0-9e92-ca9102760082/volumes" Jan 09 13:53:38 crc kubenswrapper[4919]: I0109 13:53:38.774148 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b1b9107-6ac0-4e66-bbbd-11435fac4798" path="/var/lib/kubelet/pods/8b1b9107-6ac0-4e66-bbbd-11435fac4798/volumes" Jan 09 13:53:38 crc kubenswrapper[4919]: I0109 13:53:38.863737 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 09 13:53:38 crc kubenswrapper[4919]: I0109 13:53:38.982738 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.068347 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c8b56a5e-6bc1-4366-87e6-81d8e4b8100b","Type":"ContainerStarted","Data":"eef2116eddfe849743c62b4ba30e1646c642dd134b00ef929c4d693e83dbd3d9"} Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.075191 4919 generic.go:334] "Generic (PLEG): container finished" podID="1002a2ff-2366-4c32-b1cd-ad66959e6c39" containerID="5f2249369d91c8338266346fccb2f0ee17a70e741ee40268bc55773e039141f2" exitCode=0 Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.075277 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1002a2ff-2366-4c32-b1cd-ad66959e6c39","Type":"ContainerDied","Data":"5f2249369d91c8338266346fccb2f0ee17a70e741ee40268bc55773e039141f2"} Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.075310 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1002a2ff-2366-4c32-b1cd-ad66959e6c39","Type":"ContainerDied","Data":"3f6594ee1e167da4f21acf3fb5e2de15f6218eb77ac994a6cadf1769e5f29cf8"} Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.075333 4919 scope.go:117] "RemoveContainer" containerID="5f2249369d91c8338266346fccb2f0ee17a70e741ee40268bc55773e039141f2" Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.075538 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.096565 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.09654096 podStartE2EDuration="2.09654096s" podCreationTimestamp="2026-01-09 13:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:53:39.085061926 +0000 UTC m=+1398.632901376" watchObservedRunningTime="2026-01-09 13:53:39.09654096 +0000 UTC m=+1398.644380410" Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.108414 4919 scope.go:117] "RemoveContainer" containerID="914dca63744881bfc9656174c423f5c91582a1abb34fb82ea4cb59d50c8b1e6d" Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.109928 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1002a2ff-2366-4c32-b1cd-ad66959e6c39-logs\") pod \"1002a2ff-2366-4c32-b1cd-ad66959e6c39\" (UID: \"1002a2ff-2366-4c32-b1cd-ad66959e6c39\") " Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.110175 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1002a2ff-2366-4c32-b1cd-ad66959e6c39-config-data\") pod \"1002a2ff-2366-4c32-b1cd-ad66959e6c39\" (UID: \"1002a2ff-2366-4c32-b1cd-ad66959e6c39\") " Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.110328 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1002a2ff-2366-4c32-b1cd-ad66959e6c39-internal-tls-certs\") pod \"1002a2ff-2366-4c32-b1cd-ad66959e6c39\" (UID: \"1002a2ff-2366-4c32-b1cd-ad66959e6c39\") " Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.110448 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1002a2ff-2366-4c32-b1cd-ad66959e6c39-combined-ca-bundle\") pod \"1002a2ff-2366-4c32-b1cd-ad66959e6c39\" (UID: \"1002a2ff-2366-4c32-b1cd-ad66959e6c39\") " Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.110520 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbdkr\" (UniqueName: \"kubernetes.io/projected/1002a2ff-2366-4c32-b1cd-ad66959e6c39-kube-api-access-rbdkr\") pod \"1002a2ff-2366-4c32-b1cd-ad66959e6c39\" (UID: \"1002a2ff-2366-4c32-b1cd-ad66959e6c39\") " Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.110573 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1002a2ff-2366-4c32-b1cd-ad66959e6c39-public-tls-certs\") pod \"1002a2ff-2366-4c32-b1cd-ad66959e6c39\" (UID: \"1002a2ff-2366-4c32-b1cd-ad66959e6c39\") " Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.115569 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1002a2ff-2366-4c32-b1cd-ad66959e6c39-logs" (OuterVolumeSpecName: "logs") pod "1002a2ff-2366-4c32-b1cd-ad66959e6c39" (UID: "1002a2ff-2366-4c32-b1cd-ad66959e6c39"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.119609 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1002a2ff-2366-4c32-b1cd-ad66959e6c39-kube-api-access-rbdkr" (OuterVolumeSpecName: "kube-api-access-rbdkr") pod "1002a2ff-2366-4c32-b1cd-ad66959e6c39" (UID: "1002a2ff-2366-4c32-b1cd-ad66959e6c39"). InnerVolumeSpecName "kube-api-access-rbdkr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.142721 4919 scope.go:117] "RemoveContainer" containerID="5f2249369d91c8338266346fccb2f0ee17a70e741ee40268bc55773e039141f2" Jan 09 13:53:39 crc kubenswrapper[4919]: E0109 13:53:39.143326 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f2249369d91c8338266346fccb2f0ee17a70e741ee40268bc55773e039141f2\": container with ID starting with 5f2249369d91c8338266346fccb2f0ee17a70e741ee40268bc55773e039141f2 not found: ID does not exist" containerID="5f2249369d91c8338266346fccb2f0ee17a70e741ee40268bc55773e039141f2" Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.143367 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f2249369d91c8338266346fccb2f0ee17a70e741ee40268bc55773e039141f2"} err="failed to get container status \"5f2249369d91c8338266346fccb2f0ee17a70e741ee40268bc55773e039141f2\": rpc error: code = NotFound desc = could not find container \"5f2249369d91c8338266346fccb2f0ee17a70e741ee40268bc55773e039141f2\": container with ID starting with 5f2249369d91c8338266346fccb2f0ee17a70e741ee40268bc55773e039141f2 not found: ID does not exist" Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.143402 4919 scope.go:117] "RemoveContainer" containerID="914dca63744881bfc9656174c423f5c91582a1abb34fb82ea4cb59d50c8b1e6d" Jan 09 13:53:39 crc kubenswrapper[4919]: E0109 13:53:39.143999 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"914dca63744881bfc9656174c423f5c91582a1abb34fb82ea4cb59d50c8b1e6d\": container with ID starting with 914dca63744881bfc9656174c423f5c91582a1abb34fb82ea4cb59d50c8b1e6d not 
found: ID does not exist" containerID="914dca63744881bfc9656174c423f5c91582a1abb34fb82ea4cb59d50c8b1e6d" Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.144037 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"914dca63744881bfc9656174c423f5c91582a1abb34fb82ea4cb59d50c8b1e6d"} err="failed to get container status \"914dca63744881bfc9656174c423f5c91582a1abb34fb82ea4cb59d50c8b1e6d\": rpc error: code = NotFound desc = could not find container \"914dca63744881bfc9656174c423f5c91582a1abb34fb82ea4cb59d50c8b1e6d\": container with ID starting with 914dca63744881bfc9656174c423f5c91582a1abb34fb82ea4cb59d50c8b1e6d not found: ID does not exist" Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.156380 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1002a2ff-2366-4c32-b1cd-ad66959e6c39-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1002a2ff-2366-4c32-b1cd-ad66959e6c39" (UID: "1002a2ff-2366-4c32-b1cd-ad66959e6c39"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.163039 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1002a2ff-2366-4c32-b1cd-ad66959e6c39-config-data" (OuterVolumeSpecName: "config-data") pod "1002a2ff-2366-4c32-b1cd-ad66959e6c39" (UID: "1002a2ff-2366-4c32-b1cd-ad66959e6c39"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.182667 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1002a2ff-2366-4c32-b1cd-ad66959e6c39-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1002a2ff-2366-4c32-b1cd-ad66959e6c39" (UID: "1002a2ff-2366-4c32-b1cd-ad66959e6c39"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.207595 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1002a2ff-2366-4c32-b1cd-ad66959e6c39-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1002a2ff-2366-4c32-b1cd-ad66959e6c39" (UID: "1002a2ff-2366-4c32-b1cd-ad66959e6c39"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.213145 4919 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1002a2ff-2366-4c32-b1cd-ad66959e6c39-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.213189 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbdkr\" (UniqueName: \"kubernetes.io/projected/1002a2ff-2366-4c32-b1cd-ad66959e6c39-kube-api-access-rbdkr\") on node \"crc\" DevicePath \"\"" Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.213198 4919 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1002a2ff-2366-4c32-b1cd-ad66959e6c39-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.213224 4919 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1002a2ff-2366-4c32-b1cd-ad66959e6c39-logs\") on node \"crc\" DevicePath \"\"" Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.213233 4919 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1002a2ff-2366-4c32-b1cd-ad66959e6c39-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.213242 4919 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1002a2ff-2366-4c32-b1cd-ad66959e6c39-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.317802 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 13:53:39 crc kubenswrapper[4919]: W0109 13:53:39.319364 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10d389ef_fb74_406c_a1cb_8a591b708726.slice/crio-1521e537aacfbaa7ce9defdbaf43d9b9ba11a3f1650101620cee65768fc79083 WatchSource:0}: Error finding container 1521e537aacfbaa7ce9defdbaf43d9b9ba11a3f1650101620cee65768fc79083: Status 404 returned error can't find the container with id 1521e537aacfbaa7ce9defdbaf43d9b9ba11a3f1650101620cee65768fc79083 Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.455985 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.468383 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.481873 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 09 13:53:39 crc kubenswrapper[4919]: E0109 13:53:39.482449 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1002a2ff-2366-4c32-b1cd-ad66959e6c39" containerName="nova-api-log" Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.482471 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="1002a2ff-2366-4c32-b1cd-ad66959e6c39" containerName="nova-api-log" Jan 09 13:53:39 crc kubenswrapper[4919]: E0109 13:53:39.482493 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1002a2ff-2366-4c32-b1cd-ad66959e6c39" containerName="nova-api-api" Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.482499 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="1002a2ff-2366-4c32-b1cd-ad66959e6c39" 
containerName="nova-api-api" Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.482706 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="1002a2ff-2366-4c32-b1cd-ad66959e6c39" containerName="nova-api-log" Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.482721 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="1002a2ff-2366-4c32-b1cd-ad66959e6c39" containerName="nova-api-api" Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.483823 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.490038 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.490183 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.490256 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.490304 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.630608 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2f3c3f3-07a5-44a8-a7da-0bf4b180c56a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f2f3c3f3-07a5-44a8-a7da-0bf4b180c56a\") " pod="openstack/nova-api-0" Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.630667 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2f3c3f3-07a5-44a8-a7da-0bf4b180c56a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f2f3c3f3-07a5-44a8-a7da-0bf4b180c56a\") " pod="openstack/nova-api-0" Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.630709 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j8h2\" (UniqueName: \"kubernetes.io/projected/f2f3c3f3-07a5-44a8-a7da-0bf4b180c56a-kube-api-access-7j8h2\") pod \"nova-api-0\" (UID: \"f2f3c3f3-07a5-44a8-a7da-0bf4b180c56a\") " pod="openstack/nova-api-0" Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.630727 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2f3c3f3-07a5-44a8-a7da-0bf4b180c56a-public-tls-certs\") pod \"nova-api-0\" (UID: \"f2f3c3f3-07a5-44a8-a7da-0bf4b180c56a\") " pod="openstack/nova-api-0" Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.630936 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2f3c3f3-07a5-44a8-a7da-0bf4b180c56a-config-data\") pod \"nova-api-0\" (UID: \"f2f3c3f3-07a5-44a8-a7da-0bf4b180c56a\") " pod="openstack/nova-api-0" Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.631024 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2f3c3f3-07a5-44a8-a7da-0bf4b180c56a-logs\") pod \"nova-api-0\" (UID: \"f2f3c3f3-07a5-44a8-a7da-0bf4b180c56a\") " pod="openstack/nova-api-0" Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.738115 4919 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2f3c3f3-07a5-44a8-a7da-0bf4b180c56a-config-data\") pod \"nova-api-0\" (UID: \"f2f3c3f3-07a5-44a8-a7da-0bf4b180c56a\") " pod="openstack/nova-api-0" Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.738348 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2f3c3f3-07a5-44a8-a7da-0bf4b180c56a-logs\") pod \"nova-api-0\" (UID: \"f2f3c3f3-07a5-44a8-a7da-0bf4b180c56a\") " pod="openstack/nova-api-0" Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.738530 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2f3c3f3-07a5-44a8-a7da-0bf4b180c56a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f2f3c3f3-07a5-44a8-a7da-0bf4b180c56a\") " pod="openstack/nova-api-0" Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.738809 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2f3c3f3-07a5-44a8-a7da-0bf4b180c56a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f2f3c3f3-07a5-44a8-a7da-0bf4b180c56a\") " pod="openstack/nova-api-0" Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.738902 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2f3c3f3-07a5-44a8-a7da-0bf4b180c56a-logs\") pod \"nova-api-0\" (UID: \"f2f3c3f3-07a5-44a8-a7da-0bf4b180c56a\") " pod="openstack/nova-api-0" Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.738961 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7j8h2\" (UniqueName: \"kubernetes.io/projected/f2f3c3f3-07a5-44a8-a7da-0bf4b180c56a-kube-api-access-7j8h2\") pod \"nova-api-0\" (UID: \"f2f3c3f3-07a5-44a8-a7da-0bf4b180c56a\") " pod="openstack/nova-api-0" Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.739004 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2f3c3f3-07a5-44a8-a7da-0bf4b180c56a-public-tls-certs\") pod \"nova-api-0\" (UID: \"f2f3c3f3-07a5-44a8-a7da-0bf4b180c56a\") " pod="openstack/nova-api-0" Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.742664 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2f3c3f3-07a5-44a8-a7da-0bf4b180c56a-config-data\") pod \"nova-api-0\" (UID: \"f2f3c3f3-07a5-44a8-a7da-0bf4b180c56a\") " pod="openstack/nova-api-0" Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.743372 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2f3c3f3-07a5-44a8-a7da-0bf4b180c56a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f2f3c3f3-07a5-44a8-a7da-0bf4b180c56a\") " pod="openstack/nova-api-0" Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.743420 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2f3c3f3-07a5-44a8-a7da-0bf4b180c56a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f2f3c3f3-07a5-44a8-a7da-0bf4b180c56a\") " pod="openstack/nova-api-0" Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.747725 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2f3c3f3-07a5-44a8-a7da-0bf4b180c56a-public-tls-certs\") pod \"nova-api-0\" (UID: \"f2f3c3f3-07a5-44a8-a7da-0bf4b180c56a\") " pod="openstack/nova-api-0" Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.760907 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7j8h2\" (UniqueName: \"kubernetes.io/projected/f2f3c3f3-07a5-44a8-a7da-0bf4b180c56a-kube-api-access-7j8h2\") pod \"nova-api-0\" (UID: \"f2f3c3f3-07a5-44a8-a7da-0bf4b180c56a\") " pod="openstack/nova-api-0" Jan 09 13:53:39 crc kubenswrapper[4919]: I0109 13:53:39.867547 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 09 13:53:40 crc kubenswrapper[4919]: I0109 13:53:40.106921 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"10d389ef-fb74-406c-a1cb-8a591b708726","Type":"ContainerStarted","Data":"a34e009380b9ff79cf53633bc4e6ea515bb9597f68ee1abd245d10b39a4430bc"} Jan 09 13:53:40 crc kubenswrapper[4919]: I0109 13:53:40.107541 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"10d389ef-fb74-406c-a1cb-8a591b708726","Type":"ContainerStarted","Data":"61a93035e1cdffc2460b022eed06da5138ca2189719767f5cf486c1d1c36c1ae"} Jan 09 13:53:40 crc kubenswrapper[4919]: I0109 13:53:40.107559 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"10d389ef-fb74-406c-a1cb-8a591b708726","Type":"ContainerStarted","Data":"1521e537aacfbaa7ce9defdbaf43d9b9ba11a3f1650101620cee65768fc79083"} Jan 09 13:53:40 crc kubenswrapper[4919]: I0109 13:53:40.156359 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.156338118 podStartE2EDuration="2.156338118s" podCreationTimestamp="2026-01-09 13:53:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:53:40.153883557 +0000 UTC m=+1399.701723007" watchObservedRunningTime="2026-01-09 13:53:40.156338118 +0000 UTC m=+1399.704177568" Jan 09 13:53:40 crc kubenswrapper[4919]: I0109 13:53:40.389477 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 09 13:53:40 crc kubenswrapper[4919]: I0109 13:53:40.764991 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1002a2ff-2366-4c32-b1cd-ad66959e6c39" path="/var/lib/kubelet/pods/1002a2ff-2366-4c32-b1cd-ad66959e6c39/volumes" Jan 09 13:53:41 crc kubenswrapper[4919]: I0109 13:53:41.123456 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f2f3c3f3-07a5-44a8-a7da-0bf4b180c56a","Type":"ContainerStarted","Data":"e866533f5f96eab02fd9aad4ff164b0114785e25c5af63fd9978666138f1344b"} Jan 09 13:53:41 crc kubenswrapper[4919]: I0109 13:53:41.123793 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f2f3c3f3-07a5-44a8-a7da-0bf4b180c56a","Type":"ContainerStarted","Data":"485f7e6ddc8ae41223ed0a4bded196758e1becd891cf8c201e3ba6a0ffbd3e48"} Jan 09 13:53:41 crc kubenswrapper[4919]: I0109 13:53:41.123806 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f2f3c3f3-07a5-44a8-a7da-0bf4b180c56a","Type":"ContainerStarted","Data":"14fdd659a4d3e508ed41a178aa979a09e1080c96056d985c2f9de84d2e72d06f"} Jan 09 13:53:41 crc kubenswrapper[4919]: I0109 13:53:41.151756 4919 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.15173337 podStartE2EDuration="2.15173337s" podCreationTimestamp="2026-01-09 13:53:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:53:41.148127631 +0000 UTC m=+1400.695967081" watchObservedRunningTime="2026-01-09 13:53:41.15173337 +0000 UTC m=+1400.699572820" Jan 09 13:53:42 crc kubenswrapper[4919]: I0109 13:53:42.448420 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 09 13:53:43 crc kubenswrapper[4919]: I0109 13:53:43.865259 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 09 13:53:43 crc kubenswrapper[4919]: I0109 13:53:43.865585 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 09 13:53:47 crc kubenswrapper[4919]: I0109 13:53:47.448235 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 09 13:53:47 crc kubenswrapper[4919]: I0109 13:53:47.474120 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 09 13:53:48 crc kubenswrapper[4919]: I0109 13:53:48.219477 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 09 13:53:48 crc kubenswrapper[4919]: I0109 13:53:48.864628 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 09 13:53:48 crc kubenswrapper[4919]: I0109 13:53:48.866093 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 09 13:53:49 crc kubenswrapper[4919]: I0109 13:53:49.713888 4919 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="2fce9e5e-6aeb-486e-8d5f-29c3e01c30a7" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.196:3000/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 09 13:53:49 crc kubenswrapper[4919]: I0109 13:53:49.868134 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 09 13:53:49 crc kubenswrapper[4919]: I0109 13:53:49.868177 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 09 13:53:49 crc kubenswrapper[4919]: I0109 13:53:49.910576 4919 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="10d389ef-fb74-406c-a1cb-8a591b708726" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.208:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 09 13:53:49 crc kubenswrapper[4919]: I0109 13:53:49.910802 4919 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="10d389ef-fb74-406c-a1cb-8a591b708726" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.208:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 09 13:53:50 crc kubenswrapper[4919]: I0109 13:53:50.886412 4919 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f2f3c3f3-07a5-44a8-a7da-0bf4b180c56a" containerName="nova-api-api" probeResult="failure" 
output="Get \"https://10.217.0.209:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 09 13:53:50 crc kubenswrapper[4919]: I0109 13:53:50.886441 4919 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f2f3c3f3-07a5-44a8-a7da-0bf4b180c56a" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.209:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 09 13:53:51 crc kubenswrapper[4919]: I0109 13:53:51.246518 4919 patch_prober.go:28] interesting pod/machine-config-daemon-9m5lv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 13:53:51 crc kubenswrapper[4919]: I0109 13:53:51.246582 4919 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 13:53:51 crc kubenswrapper[4919]: I0109 13:53:51.246622 4919 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" Jan 09 13:53:51 crc kubenswrapper[4919]: I0109 13:53:51.247509 4919 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"af3cae1993f8443bd098aec195067f6b6771b2ac3e2a3073412d7f8ae6da618e"} pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 09 13:53:51 crc kubenswrapper[4919]: I0109 13:53:51.247616 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" containerID="cri-o://af3cae1993f8443bd098aec195067f6b6771b2ac3e2a3073412d7f8ae6da618e" gracePeriod=600 Jan 09 13:53:52 crc kubenswrapper[4919]: I0109 13:53:52.231739 4919 generic.go:334] "Generic (PLEG): container finished" podID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerID="af3cae1993f8443bd098aec195067f6b6771b2ac3e2a3073412d7f8ae6da618e" exitCode=0 Jan 09 13:53:52 crc kubenswrapper[4919]: I0109 13:53:52.231814 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" event={"ID":"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c","Type":"ContainerDied","Data":"af3cae1993f8443bd098aec195067f6b6771b2ac3e2a3073412d7f8ae6da618e"} Jan 09 13:53:52 crc kubenswrapper[4919]: I0109 13:53:52.232313 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" event={"ID":"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c","Type":"ContainerStarted","Data":"97d6f4470740f89d0ab801450db4dd244e16b55cda5a62a0b5edd2cd276ca373"} Jan 09 13:53:52 crc kubenswrapper[4919]: I0109 13:53:52.232337 4919 scope.go:117] "RemoveContainer" containerID="c739bd50573e0da995d79681df6e33456878c7cb345ea26ee42a16e540a49209" Jan 09 13:53:58 crc kubenswrapper[4919]: I0109 13:53:58.871097 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 09 
13:53:58 crc kubenswrapper[4919]: I0109 13:53:58.872090 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 09 13:53:58 crc kubenswrapper[4919]: I0109 13:53:58.879337 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 09 13:53:59 crc kubenswrapper[4919]: I0109 13:53:59.313015 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 09 13:53:59 crc kubenswrapper[4919]: I0109 13:53:59.876248 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 09 13:53:59 crc kubenswrapper[4919]: I0109 13:53:59.877499 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 09 13:53:59 crc kubenswrapper[4919]: I0109 13:53:59.883183 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 09 13:53:59 crc kubenswrapper[4919]: I0109 13:53:59.892314 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 09 13:54:00 crc kubenswrapper[4919]: I0109 13:54:00.310073 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 09 13:54:00 crc kubenswrapper[4919]: I0109 13:54:00.316715 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 09 13:54:00 crc kubenswrapper[4919]: I0109 13:54:00.346827 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 09 13:54:10 crc kubenswrapper[4919]: I0109 13:54:10.250582 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 09 13:54:11 crc kubenswrapper[4919]: I0109 13:54:11.250344 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 09 13:54:14 crc kubenswrapper[4919]: I0109 13:54:14.899706 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="ba39e0c2-1804-45a7-9dd1-2c20f229b648" containerName="rabbitmq" containerID="cri-o://222f92d12f874e3171295a1be715ff54bd117d9c257390ea33e6a0a69878ed79" gracePeriod=604796 Jan 09 13:54:15 crc kubenswrapper[4919]: I0109 13:54:15.505356 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="9b80a84d-c869-407b-b3d2-3be828183ae5" containerName="rabbitmq" containerID="cri-o://ad0f9de654816891d30cd0f0cf424ef02601e942c0e25c60a3ff325074bad81c" gracePeriod=604796 Jan 09 13:54:17 crc kubenswrapper[4919]: I0109 13:54:17.642231 4919 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="ba39e0c2-1804-45a7-9dd1-2c20f229b648" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Jan 09 13:54:17 crc kubenswrapper[4919]: I0109 13:54:17.865572 4919 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="9b80a84d-c869-407b-b3d2-3be828183ae5" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.100:5671: connect: connection refused" Jan 09 13:54:21 crc kubenswrapper[4919]: I0109 13:54:21.456829 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k4g5s"] Jan 09 13:54:21 crc kubenswrapper[4919]: I0109 13:54:21.459291 4919 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k4g5s" Jan 09 13:54:21 crc kubenswrapper[4919]: I0109 13:54:21.479562 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k4g5s"] Jan 09 13:54:21 crc kubenswrapper[4919]: I0109 13:54:21.517725 4919 generic.go:334] "Generic (PLEG): container finished" podID="ba39e0c2-1804-45a7-9dd1-2c20f229b648" containerID="222f92d12f874e3171295a1be715ff54bd117d9c257390ea33e6a0a69878ed79" exitCode=0 Jan 09 13:54:21 crc kubenswrapper[4919]: I0109 13:54:21.517780 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ba39e0c2-1804-45a7-9dd1-2c20f229b648","Type":"ContainerDied","Data":"222f92d12f874e3171295a1be715ff54bd117d9c257390ea33e6a0a69878ed79"} Jan 09 13:54:21 crc kubenswrapper[4919]: I0109 13:54:21.517840 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ba39e0c2-1804-45a7-9dd1-2c20f229b648","Type":"ContainerDied","Data":"c4fad242ec9236cc0d7bbe0a9099a40b56ebf8f3b4bd792ab22edbde926aa7db"} Jan 09 13:54:21 crc kubenswrapper[4919]: I0109 13:54:21.517851 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4fad242ec9236cc0d7bbe0a9099a40b56ebf8f3b4bd792ab22edbde926aa7db" Jan 09 13:54:21 crc kubenswrapper[4919]: I0109 13:54:21.559708 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 09 13:54:21 crc kubenswrapper[4919]: I0109 13:54:21.639003 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c82eda79-e18a-4be5-a01f-8d2f8267a76b-utilities\") pod \"redhat-operators-k4g5s\" (UID: \"c82eda79-e18a-4be5-a01f-8d2f8267a76b\") " pod="openshift-marketplace/redhat-operators-k4g5s" Jan 09 13:54:21 crc kubenswrapper[4919]: I0109 13:54:21.639110 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c82eda79-e18a-4be5-a01f-8d2f8267a76b-catalog-content\") pod \"redhat-operators-k4g5s\" (UID: \"c82eda79-e18a-4be5-a01f-8d2f8267a76b\") " pod="openshift-marketplace/redhat-operators-k4g5s" Jan 09 13:54:21 crc kubenswrapper[4919]: I0109 13:54:21.639244 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nz9jq\" (UniqueName: \"kubernetes.io/projected/c82eda79-e18a-4be5-a01f-8d2f8267a76b-kube-api-access-nz9jq\") pod \"redhat-operators-k4g5s\" (UID: \"c82eda79-e18a-4be5-a01f-8d2f8267a76b\") " pod="openshift-marketplace/redhat-operators-k4g5s" Jan 09 13:54:21 crc kubenswrapper[4919]: I0109 13:54:21.740707 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xh5gt\" (UniqueName: \"kubernetes.io/projected/ba39e0c2-1804-45a7-9dd1-2c20f229b648-kube-api-access-xh5gt\") pod \"ba39e0c2-1804-45a7-9dd1-2c20f229b648\" (UID: \"ba39e0c2-1804-45a7-9dd1-2c20f229b648\") " Jan 09 13:54:21 crc kubenswrapper[4919]: I0109 13:54:21.741072 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ba39e0c2-1804-45a7-9dd1-2c20f229b648\" (UID: \"ba39e0c2-1804-45a7-9dd1-2c20f229b648\") " Jan 09 13:54:21 crc kubenswrapper[4919]: I0109 13:54:21.741102 4919 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ba39e0c2-1804-45a7-9dd1-2c20f229b648-pod-info\") pod \"ba39e0c2-1804-45a7-9dd1-2c20f229b648\" (UID: \"ba39e0c2-1804-45a7-9dd1-2c20f229b648\") " Jan 09 13:54:21 crc kubenswrapper[4919]: I0109 13:54:21.741141 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ba39e0c2-1804-45a7-9dd1-2c20f229b648-server-conf\") pod \"ba39e0c2-1804-45a7-9dd1-2c20f229b648\" (UID: \"ba39e0c2-1804-45a7-9dd1-2c20f229b648\") " Jan 09 13:54:21 crc kubenswrapper[4919]: I0109 13:54:21.741166 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ba39e0c2-1804-45a7-9dd1-2c20f229b648-rabbitmq-tls\") pod \"ba39e0c2-1804-45a7-9dd1-2c20f229b648\" (UID: \"ba39e0c2-1804-45a7-9dd1-2c20f229b648\") " Jan 09 13:54:21 crc kubenswrapper[4919]: I0109 13:54:21.741198 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ba39e0c2-1804-45a7-9dd1-2c20f229b648-plugins-conf\") pod \"ba39e0c2-1804-45a7-9dd1-2c20f229b648\" (UID: \"ba39e0c2-1804-45a7-9dd1-2c20f229b648\") " Jan 09 13:54:21 crc kubenswrapper[4919]: I0109 13:54:21.741271 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ba39e0c2-1804-45a7-9dd1-2c20f229b648-rabbitmq-confd\") pod \"ba39e0c2-1804-45a7-9dd1-2c20f229b648\" (UID: \"ba39e0c2-1804-45a7-9dd1-2c20f229b648\") " Jan 09 13:54:21 crc kubenswrapper[4919]: I0109 13:54:21.741383 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ba39e0c2-1804-45a7-9dd1-2c20f229b648-rabbitmq-plugins\") pod \"ba39e0c2-1804-45a7-9dd1-2c20f229b648\" (UID: \"ba39e0c2-1804-45a7-9dd1-2c20f229b648\") " Jan 09 13:54:21 crc kubenswrapper[4919]: I0109 13:54:21.741480 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ba39e0c2-1804-45a7-9dd1-2c20f229b648-erlang-cookie-secret\") pod \"ba39e0c2-1804-45a7-9dd1-2c20f229b648\" (UID: \"ba39e0c2-1804-45a7-9dd1-2c20f229b648\") " Jan 09 13:54:21 crc kubenswrapper[4919]: I0109 13:54:21.741511 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ba39e0c2-1804-45a7-9dd1-2c20f229b648-config-data\") pod \"ba39e0c2-1804-45a7-9dd1-2c20f229b648\" (UID: \"ba39e0c2-1804-45a7-9dd1-2c20f229b648\") " Jan 09 13:54:21 crc kubenswrapper[4919]: I0109 13:54:21.741554 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ba39e0c2-1804-45a7-9dd1-2c20f229b648-rabbitmq-erlang-cookie\") pod \"ba39e0c2-1804-45a7-9dd1-2c20f229b648\" (UID: \"ba39e0c2-1804-45a7-9dd1-2c20f229b648\") " Jan 09 13:54:21 crc kubenswrapper[4919]: I0109 13:54:21.741848 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c82eda79-e18a-4be5-a01f-8d2f8267a76b-utilities\") pod \"redhat-operators-k4g5s\" (UID: \"c82eda79-e18a-4be5-a01f-8d2f8267a76b\") " pod="openshift-marketplace/redhat-operators-k4g5s" Jan 09 13:54:21 crc kubenswrapper[4919]: I0109 13:54:21.741916 4919 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c82eda79-e18a-4be5-a01f-8d2f8267a76b-catalog-content\") pod \"redhat-operators-k4g5s\" (UID: \"c82eda79-e18a-4be5-a01f-8d2f8267a76b\") " pod="openshift-marketplace/redhat-operators-k4g5s" Jan 09 13:54:21 crc kubenswrapper[4919]: I0109 13:54:21.742013 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nz9jq\" (UniqueName: \"kubernetes.io/projected/c82eda79-e18a-4be5-a01f-8d2f8267a76b-kube-api-access-nz9jq\") pod \"redhat-operators-k4g5s\" (UID: \"c82eda79-e18a-4be5-a01f-8d2f8267a76b\") " pod="openshift-marketplace/redhat-operators-k4g5s" Jan 09 13:54:21 crc kubenswrapper[4919]: I0109 13:54:21.743956 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba39e0c2-1804-45a7-9dd1-2c20f229b648-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "ba39e0c2-1804-45a7-9dd1-2c20f229b648" (UID: "ba39e0c2-1804-45a7-9dd1-2c20f229b648"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:54:21 crc kubenswrapper[4919]: I0109 13:54:21.744649 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c82eda79-e18a-4be5-a01f-8d2f8267a76b-utilities\") pod \"redhat-operators-k4g5s\" (UID: \"c82eda79-e18a-4be5-a01f-8d2f8267a76b\") " pod="openshift-marketplace/redhat-operators-k4g5s" Jan 09 13:54:21 crc kubenswrapper[4919]: I0109 13:54:21.744731 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba39e0c2-1804-45a7-9dd1-2c20f229b648-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "ba39e0c2-1804-45a7-9dd1-2c20f229b648" (UID: "ba39e0c2-1804-45a7-9dd1-2c20f229b648"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:54:21 crc kubenswrapper[4919]: I0109 13:54:21.744946 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba39e0c2-1804-45a7-9dd1-2c20f229b648-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "ba39e0c2-1804-45a7-9dd1-2c20f229b648" (UID: "ba39e0c2-1804-45a7-9dd1-2c20f229b648"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:54:21 crc kubenswrapper[4919]: I0109 13:54:21.745036 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c82eda79-e18a-4be5-a01f-8d2f8267a76b-catalog-content\") pod \"redhat-operators-k4g5s\" (UID: \"c82eda79-e18a-4be5-a01f-8d2f8267a76b\") " pod="openshift-marketplace/redhat-operators-k4g5s" Jan 09 13:54:21 crc kubenswrapper[4919]: I0109 13:54:21.753440 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba39e0c2-1804-45a7-9dd1-2c20f229b648-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "ba39e0c2-1804-45a7-9dd1-2c20f229b648" (UID: "ba39e0c2-1804-45a7-9dd1-2c20f229b648"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:54:21 crc kubenswrapper[4919]: I0109 13:54:21.753458 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "persistence") pod "ba39e0c2-1804-45a7-9dd1-2c20f229b648" (UID: "ba39e0c2-1804-45a7-9dd1-2c20f229b648"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 09 13:54:21 crc kubenswrapper[4919]: I0109 13:54:21.753492 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/ba39e0c2-1804-45a7-9dd1-2c20f229b648-pod-info" (OuterVolumeSpecName: "pod-info") pod "ba39e0c2-1804-45a7-9dd1-2c20f229b648" (UID: "ba39e0c2-1804-45a7-9dd1-2c20f229b648"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 09 13:54:21 crc kubenswrapper[4919]: I0109 13:54:21.756356 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba39e0c2-1804-45a7-9dd1-2c20f229b648-kube-api-access-xh5gt" (OuterVolumeSpecName: "kube-api-access-xh5gt") pod "ba39e0c2-1804-45a7-9dd1-2c20f229b648" (UID: "ba39e0c2-1804-45a7-9dd1-2c20f229b648"). InnerVolumeSpecName "kube-api-access-xh5gt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:54:21 crc kubenswrapper[4919]: I0109 13:54:21.774655 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba39e0c2-1804-45a7-9dd1-2c20f229b648-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "ba39e0c2-1804-45a7-9dd1-2c20f229b648" (UID: "ba39e0c2-1804-45a7-9dd1-2c20f229b648"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:54:21 crc kubenswrapper[4919]: I0109 13:54:21.781684 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba39e0c2-1804-45a7-9dd1-2c20f229b648-config-data" (OuterVolumeSpecName: "config-data") pod "ba39e0c2-1804-45a7-9dd1-2c20f229b648" (UID: "ba39e0c2-1804-45a7-9dd1-2c20f229b648"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:54:21 crc kubenswrapper[4919]: I0109 13:54:21.799625 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nz9jq\" (UniqueName: \"kubernetes.io/projected/c82eda79-e18a-4be5-a01f-8d2f8267a76b-kube-api-access-nz9jq\") pod \"redhat-operators-k4g5s\" (UID: \"c82eda79-e18a-4be5-a01f-8d2f8267a76b\") " pod="openshift-marketplace/redhat-operators-k4g5s" Jan 09 13:54:21 crc kubenswrapper[4919]: I0109 13:54:21.831576 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba39e0c2-1804-45a7-9dd1-2c20f229b648-server-conf" (OuterVolumeSpecName: "server-conf") pod "ba39e0c2-1804-45a7-9dd1-2c20f229b648" (UID: "ba39e0c2-1804-45a7-9dd1-2c20f229b648"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:54:21 crc kubenswrapper[4919]: I0109 13:54:21.844649 4919 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ba39e0c2-1804-45a7-9dd1-2c20f229b648-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 09 13:54:21 crc kubenswrapper[4919]: I0109 13:54:21.844688 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xh5gt\" (UniqueName: \"kubernetes.io/projected/ba39e0c2-1804-45a7-9dd1-2c20f229b648-kube-api-access-xh5gt\") on node \"crc\" DevicePath \"\"" Jan 09 13:54:21 crc kubenswrapper[4919]: I0109 13:54:21.844728 4919 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 09 13:54:21 crc kubenswrapper[4919]: I0109 13:54:21.844741 4919 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ba39e0c2-1804-45a7-9dd1-2c20f229b648-pod-info\") on node \"crc\" DevicePath \"\"" Jan 09 13:54:21 crc kubenswrapper[4919]: I0109 13:54:21.844753 4919 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ba39e0c2-1804-45a7-9dd1-2c20f229b648-server-conf\") on node \"crc\" DevicePath \"\"" Jan 09 13:54:21 crc kubenswrapper[4919]: I0109 13:54:21.844766 4919 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ba39e0c2-1804-45a7-9dd1-2c20f229b648-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 09 13:54:21 crc kubenswrapper[4919]: I0109 13:54:21.844777 4919 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ba39e0c2-1804-45a7-9dd1-2c20f229b648-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 09 13:54:21 crc kubenswrapper[4919]: I0109 13:54:21.844788 4919 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ba39e0c2-1804-45a7-9dd1-2c20f229b648-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 09 13:54:21 crc kubenswrapper[4919]: I0109 13:54:21.844799 4919 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ba39e0c2-1804-45a7-9dd1-2c20f229b648-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 09 13:54:21 crc kubenswrapper[4919]: I0109 13:54:21.844809 4919 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ba39e0c2-1804-45a7-9dd1-2c20f229b648-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 13:54:21 crc kubenswrapper[4919]: I0109 13:54:21.870935 4919 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 09 13:54:21 crc kubenswrapper[4919]: I0109 13:54:21.874028 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k4g5s" Jan 09 13:54:21 crc kubenswrapper[4919]: I0109 13:54:21.937244 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba39e0c2-1804-45a7-9dd1-2c20f229b648-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "ba39e0c2-1804-45a7-9dd1-2c20f229b648" (UID: "ba39e0c2-1804-45a7-9dd1-2c20f229b648"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:54:21 crc kubenswrapper[4919]: I0109 13:54:21.946613 4919 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 09 13:54:21 crc kubenswrapper[4919]: I0109 13:54:21.946652 4919 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ba39e0c2-1804-45a7-9dd1-2c20f229b648-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.283917 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.458999 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9b80a84d-c869-407b-b3d2-3be828183ae5-rabbitmq-tls\") pod \"9b80a84d-c869-407b-b3d2-3be828183ae5\" (UID: \"9b80a84d-c869-407b-b3d2-3be828183ae5\") " Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.459047 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9b80a84d-c869-407b-b3d2-3be828183ae5-rabbitmq-plugins\") pod \"9b80a84d-c869-407b-b3d2-3be828183ae5\" (UID: \"9b80a84d-c869-407b-b3d2-3be828183ae5\") " Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.459105 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"9b80a84d-c869-407b-b3d2-3be828183ae5\" (UID: \"9b80a84d-c869-407b-b3d2-3be828183ae5\") " Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.459152 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9b80a84d-c869-407b-b3d2-3be828183ae5-plugins-conf\") pod \"9b80a84d-c869-407b-b3d2-3be828183ae5\" (UID: \"9b80a84d-c869-407b-b3d2-3be828183ae5\") " Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.459231 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9b80a84d-c869-407b-b3d2-3be828183ae5-server-conf\") pod \"9b80a84d-c869-407b-b3d2-3be828183ae5\" (UID: \"9b80a84d-c869-407b-b3d2-3be828183ae5\") " Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.459263 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dffvn\" (UniqueName: \"kubernetes.io/projected/9b80a84d-c869-407b-b3d2-3be828183ae5-kube-api-access-dffvn\") pod \"9b80a84d-c869-407b-b3d2-3be828183ae5\" (UID: \"9b80a84d-c869-407b-b3d2-3be828183ae5\") " Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.459325 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9b80a84d-c869-407b-b3d2-3be828183ae5-pod-info\") pod \"9b80a84d-c869-407b-b3d2-3be828183ae5\" (UID: \"9b80a84d-c869-407b-b3d2-3be828183ae5\") " Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.459363 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9b80a84d-c869-407b-b3d2-3be828183ae5-rabbitmq-erlang-cookie\") pod \"9b80a84d-c869-407b-b3d2-3be828183ae5\" (UID: 
\"9b80a84d-c869-407b-b3d2-3be828183ae5\") " Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.459390 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b80a84d-c869-407b-b3d2-3be828183ae5-config-data\") pod \"9b80a84d-c869-407b-b3d2-3be828183ae5\" (UID: \"9b80a84d-c869-407b-b3d2-3be828183ae5\") " Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.459411 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9b80a84d-c869-407b-b3d2-3be828183ae5-erlang-cookie-secret\") pod \"9b80a84d-c869-407b-b3d2-3be828183ae5\" (UID: \"9b80a84d-c869-407b-b3d2-3be828183ae5\") " Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.459479 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9b80a84d-c869-407b-b3d2-3be828183ae5-rabbitmq-confd\") pod \"9b80a84d-c869-407b-b3d2-3be828183ae5\" (UID: \"9b80a84d-c869-407b-b3d2-3be828183ae5\") " Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.465862 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b80a84d-c869-407b-b3d2-3be828183ae5-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "9b80a84d-c869-407b-b3d2-3be828183ae5" (UID: "9b80a84d-c869-407b-b3d2-3be828183ae5"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.470741 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b80a84d-c869-407b-b3d2-3be828183ae5-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "9b80a84d-c869-407b-b3d2-3be828183ae5" (UID: "9b80a84d-c869-407b-b3d2-3be828183ae5"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.471275 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b80a84d-c869-407b-b3d2-3be828183ae5-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "9b80a84d-c869-407b-b3d2-3be828183ae5" (UID: "9b80a84d-c869-407b-b3d2-3be828183ae5"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.497494 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b80a84d-c869-407b-b3d2-3be828183ae5-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "9b80a84d-c869-407b-b3d2-3be828183ae5" (UID: "9b80a84d-c869-407b-b3d2-3be828183ae5"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.506491 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b80a84d-c869-407b-b3d2-3be828183ae5-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "9b80a84d-c869-407b-b3d2-3be828183ae5" (UID: "9b80a84d-c869-407b-b3d2-3be828183ae5"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.506977 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "9b80a84d-c869-407b-b3d2-3be828183ae5" (UID: "9b80a84d-c869-407b-b3d2-3be828183ae5"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.513534 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b80a84d-c869-407b-b3d2-3be828183ae5-kube-api-access-dffvn" (OuterVolumeSpecName: "kube-api-access-dffvn") pod "9b80a84d-c869-407b-b3d2-3be828183ae5" (UID: "9b80a84d-c869-407b-b3d2-3be828183ae5"). InnerVolumeSpecName "kube-api-access-dffvn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.538618 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/9b80a84d-c869-407b-b3d2-3be828183ae5-pod-info" (OuterVolumeSpecName: "pod-info") pod "9b80a84d-c869-407b-b3d2-3be828183ae5" (UID: "9b80a84d-c869-407b-b3d2-3be828183ae5"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.563525 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k4g5s"] Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.565072 4919 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9b80a84d-c869-407b-b3d2-3be828183ae5-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.565116 4919 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9b80a84d-c869-407b-b3d2-3be828183ae5-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.565152 4919 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.565162 4919 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9b80a84d-c869-407b-b3d2-3be828183ae5-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.565174 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dffvn\" (UniqueName: \"kubernetes.io/projected/9b80a84d-c869-407b-b3d2-3be828183ae5-kube-api-access-dffvn\") on node \"crc\" DevicePath \"\"" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.565189 4919 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9b80a84d-c869-407b-b3d2-3be828183ae5-pod-info\") on node \"crc\" DevicePath \"\"" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.565201 4919 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9b80a84d-c869-407b-b3d2-3be828183ae5-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.565232 4919 reconciler_common.go:293] "Volume detached for volume 
\"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9b80a84d-c869-407b-b3d2-3be828183ae5-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.617955 4919 generic.go:334] "Generic (PLEG): container finished" podID="9b80a84d-c869-407b-b3d2-3be828183ae5" containerID="ad0f9de654816891d30cd0f0cf424ef02601e942c0e25c60a3ff325074bad81c" exitCode=0 Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.618429 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.619164 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.619765 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9b80a84d-c869-407b-b3d2-3be828183ae5","Type":"ContainerDied","Data":"ad0f9de654816891d30cd0f0cf424ef02601e942c0e25c60a3ff325074bad81c"} Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.619882 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9b80a84d-c869-407b-b3d2-3be828183ae5","Type":"ContainerDied","Data":"63c1a2eeb29f6e95d9d9933071e3a87e6fe6930fadce38bd36cf06c4c27b1fc4"} Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.619913 4919 scope.go:117] "RemoveContainer" containerID="ad0f9de654816891d30cd0f0cf424ef02601e942c0e25c60a3ff325074bad81c" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.628685 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b80a84d-c869-407b-b3d2-3be828183ae5-config-data" (OuterVolumeSpecName: "config-data") pod "9b80a84d-c869-407b-b3d2-3be828183ae5" (UID: "9b80a84d-c869-407b-b3d2-3be828183ae5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.635878 4919 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.653982 4919 scope.go:117] "RemoveContainer" containerID="94957647709fe2c44cd5a70c7a2b949171bebfd17eaf58facd52a3975416fc50" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.668564 4919 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b80a84d-c869-407b-b3d2-3be828183ae5-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.668590 4919 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.700014 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.712843 4919 scope.go:117] "RemoveContainer" containerID="ad0f9de654816891d30cd0f0cf424ef02601e942c0e25c60a3ff325074bad81c" Jan 09 13:54:22 crc kubenswrapper[4919]: E0109 13:54:22.714664 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad0f9de654816891d30cd0f0cf424ef02601e942c0e25c60a3ff325074bad81c\": container with ID starting with ad0f9de654816891d30cd0f0cf424ef02601e942c0e25c60a3ff325074bad81c not found: ID does not exist" containerID="ad0f9de654816891d30cd0f0cf424ef02601e942c0e25c60a3ff325074bad81c" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.714692 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad0f9de654816891d30cd0f0cf424ef02601e942c0e25c60a3ff325074bad81c"} err="failed to get container status \"ad0f9de654816891d30cd0f0cf424ef02601e942c0e25c60a3ff325074bad81c\": rpc error: code = NotFound desc = could not find container \"ad0f9de654816891d30cd0f0cf424ef02601e942c0e25c60a3ff325074bad81c\": container with ID starting with ad0f9de654816891d30cd0f0cf424ef02601e942c0e25c60a3ff325074bad81c not found: ID does not exist" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.714738 4919 scope.go:117] "RemoveContainer" containerID="94957647709fe2c44cd5a70c7a2b949171bebfd17eaf58facd52a3975416fc50" Jan 09 13:54:22 crc kubenswrapper[4919]: E0109 13:54:22.715085 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94957647709fe2c44cd5a70c7a2b949171bebfd17eaf58facd52a3975416fc50\": container with ID starting with 94957647709fe2c44cd5a70c7a2b949171bebfd17eaf58facd52a3975416fc50 not found: ID does not exist" containerID="94957647709fe2c44cd5a70c7a2b949171bebfd17eaf58facd52a3975416fc50" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.715105 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94957647709fe2c44cd5a70c7a2b949171bebfd17eaf58facd52a3975416fc50"} err="failed to get container status \"94957647709fe2c44cd5a70c7a2b949171bebfd17eaf58facd52a3975416fc50\": rpc error: code = NotFound desc = could not find container \"94957647709fe2c44cd5a70c7a2b949171bebfd17eaf58facd52a3975416fc50\": container with ID starting with 
94957647709fe2c44cd5a70c7a2b949171bebfd17eaf58facd52a3975416fc50 not found: ID does not exist" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.734246 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.738671 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b80a84d-c869-407b-b3d2-3be828183ae5-server-conf" (OuterVolumeSpecName: "server-conf") pod "9b80a84d-c869-407b-b3d2-3be828183ae5" (UID: "9b80a84d-c869-407b-b3d2-3be828183ae5"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.743256 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 09 13:54:22 crc kubenswrapper[4919]: E0109 13:54:22.743697 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba39e0c2-1804-45a7-9dd1-2c20f229b648" containerName="setup-container" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.743709 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba39e0c2-1804-45a7-9dd1-2c20f229b648" containerName="setup-container" Jan 09 13:54:22 crc kubenswrapper[4919]: E0109 13:54:22.743730 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b80a84d-c869-407b-b3d2-3be828183ae5" containerName="setup-container" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.743735 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b80a84d-c869-407b-b3d2-3be828183ae5" containerName="setup-container" Jan 09 13:54:22 crc kubenswrapper[4919]: E0109 13:54:22.743753 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b80a84d-c869-407b-b3d2-3be828183ae5" containerName="rabbitmq" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.743759 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b80a84d-c869-407b-b3d2-3be828183ae5" containerName="rabbitmq" Jan 09 13:54:22 crc kubenswrapper[4919]: E0109 13:54:22.743769 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba39e0c2-1804-45a7-9dd1-2c20f229b648" containerName="rabbitmq" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.743775 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba39e0c2-1804-45a7-9dd1-2c20f229b648" containerName="rabbitmq" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.743958 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba39e0c2-1804-45a7-9dd1-2c20f229b648" containerName="rabbitmq" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.743970 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b80a84d-c869-407b-b3d2-3be828183ae5" containerName="rabbitmq" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.745027 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.747426 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.752056 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-n9dll" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.752948 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.753018 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.752958 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.753885 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.758234 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.764260 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba39e0c2-1804-45a7-9dd1-2c20f229b648" path="/var/lib/kubelet/pods/ba39e0c2-1804-45a7-9dd1-2c20f229b648/volumes" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.775158 4919 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9b80a84d-c869-407b-b3d2-3be828183ae5-server-conf\") on node \"crc\" DevicePath \"\"" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.779828 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.799679 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b80a84d-c869-407b-b3d2-3be828183ae5-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "9b80a84d-c869-407b-b3d2-3be828183ae5" (UID: "9b80a84d-c869-407b-b3d2-3be828183ae5"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.877888 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"7239a87a-aba2-4367-b1c3-2800f1a130d8\") " pod="openstack/rabbitmq-server-0" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.877949 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7239a87a-aba2-4367-b1c3-2800f1a130d8-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7239a87a-aba2-4367-b1c3-2800f1a130d8\") " pod="openstack/rabbitmq-server-0" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.877986 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7239a87a-aba2-4367-b1c3-2800f1a130d8-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7239a87a-aba2-4367-b1c3-2800f1a130d8\") " pod="openstack/rabbitmq-server-0" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.878047 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7239a87a-aba2-4367-b1c3-2800f1a130d8-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7239a87a-aba2-4367-b1c3-2800f1a130d8\") " pod="openstack/rabbitmq-server-0" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.878108 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg5dr\" (UniqueName: \"kubernetes.io/projected/7239a87a-aba2-4367-b1c3-2800f1a130d8-kube-api-access-zg5dr\") pod \"rabbitmq-server-0\" (UID: \"7239a87a-aba2-4367-b1c3-2800f1a130d8\") " pod="openstack/rabbitmq-server-0" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.878466 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7239a87a-aba2-4367-b1c3-2800f1a130d8-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7239a87a-aba2-4367-b1c3-2800f1a130d8\") " pod="openstack/rabbitmq-server-0" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.878499 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7239a87a-aba2-4367-b1c3-2800f1a130d8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7239a87a-aba2-4367-b1c3-2800f1a130d8\") " pod="openstack/rabbitmq-server-0" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.878546 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7239a87a-aba2-4367-b1c3-2800f1a130d8-config-data\") pod \"rabbitmq-server-0\" (UID: \"7239a87a-aba2-4367-b1c3-2800f1a130d8\") " pod="openstack/rabbitmq-server-0" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.878574 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7239a87a-aba2-4367-b1c3-2800f1a130d8-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7239a87a-aba2-4367-b1c3-2800f1a130d8\") " pod="openstack/rabbitmq-server-0" Jan 09 13:54:22 crc 
kubenswrapper[4919]: I0109 13:54:22.878600 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7239a87a-aba2-4367-b1c3-2800f1a130d8-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7239a87a-aba2-4367-b1c3-2800f1a130d8\") " pod="openstack/rabbitmq-server-0" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.878754 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7239a87a-aba2-4367-b1c3-2800f1a130d8-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7239a87a-aba2-4367-b1c3-2800f1a130d8\") " pod="openstack/rabbitmq-server-0" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.878823 4919 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9b80a84d-c869-407b-b3d2-3be828183ae5-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.963154 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.970242 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.980244 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7239a87a-aba2-4367-b1c3-2800f1a130d8-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7239a87a-aba2-4367-b1c3-2800f1a130d8\") " pod="openstack/rabbitmq-server-0" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.980323 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"7239a87a-aba2-4367-b1c3-2800f1a130d8\") " pod="openstack/rabbitmq-server-0" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.980357 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7239a87a-aba2-4367-b1c3-2800f1a130d8-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7239a87a-aba2-4367-b1c3-2800f1a130d8\") " pod="openstack/rabbitmq-server-0" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.980389 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7239a87a-aba2-4367-b1c3-2800f1a130d8-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7239a87a-aba2-4367-b1c3-2800f1a130d8\") " pod="openstack/rabbitmq-server-0" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.980435 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7239a87a-aba2-4367-b1c3-2800f1a130d8-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7239a87a-aba2-4367-b1c3-2800f1a130d8\") " pod="openstack/rabbitmq-server-0" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.980460 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zg5dr\" (UniqueName: \"kubernetes.io/projected/7239a87a-aba2-4367-b1c3-2800f1a130d8-kube-api-access-zg5dr\") pod \"rabbitmq-server-0\" (UID: \"7239a87a-aba2-4367-b1c3-2800f1a130d8\") " pod="openstack/rabbitmq-server-0" Jan 09 13:54:22 crc 
kubenswrapper[4919]: I0109 13:54:22.980502 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7239a87a-aba2-4367-b1c3-2800f1a130d8-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7239a87a-aba2-4367-b1c3-2800f1a130d8\") " pod="openstack/rabbitmq-server-0" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.980527 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7239a87a-aba2-4367-b1c3-2800f1a130d8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7239a87a-aba2-4367-b1c3-2800f1a130d8\") " pod="openstack/rabbitmq-server-0" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.980558 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7239a87a-aba2-4367-b1c3-2800f1a130d8-config-data\") pod \"rabbitmq-server-0\" (UID: \"7239a87a-aba2-4367-b1c3-2800f1a130d8\") " pod="openstack/rabbitmq-server-0" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.980587 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7239a87a-aba2-4367-b1c3-2800f1a130d8-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7239a87a-aba2-4367-b1c3-2800f1a130d8\") " pod="openstack/rabbitmq-server-0" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.980609 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7239a87a-aba2-4367-b1c3-2800f1a130d8-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7239a87a-aba2-4367-b1c3-2800f1a130d8\") " pod="openstack/rabbitmq-server-0" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.981639 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7239a87a-aba2-4367-b1c3-2800f1a130d8-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7239a87a-aba2-4367-b1c3-2800f1a130d8\") " pod="openstack/rabbitmq-server-0" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.981992 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7239a87a-aba2-4367-b1c3-2800f1a130d8-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7239a87a-aba2-4367-b1c3-2800f1a130d8\") " pod="openstack/rabbitmq-server-0" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.982102 4919 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"7239a87a-aba2-4367-b1c3-2800f1a130d8\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-server-0" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.983269 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7239a87a-aba2-4367-b1c3-2800f1a130d8-config-data\") pod \"rabbitmq-server-0\" (UID: \"7239a87a-aba2-4367-b1c3-2800f1a130d8\") " pod="openstack/rabbitmq-server-0" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.983513 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7239a87a-aba2-4367-b1c3-2800f1a130d8-rabbitmq-erlang-cookie\") pod 
\"rabbitmq-server-0\" (UID: \"7239a87a-aba2-4367-b1c3-2800f1a130d8\") " pod="openstack/rabbitmq-server-0" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.984618 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7239a87a-aba2-4367-b1c3-2800f1a130d8-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7239a87a-aba2-4367-b1c3-2800f1a130d8\") " pod="openstack/rabbitmq-server-0" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.985097 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7239a87a-aba2-4367-b1c3-2800f1a130d8-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7239a87a-aba2-4367-b1c3-2800f1a130d8\") " pod="openstack/rabbitmq-server-0" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.988689 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7239a87a-aba2-4367-b1c3-2800f1a130d8-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7239a87a-aba2-4367-b1c3-2800f1a130d8\") " pod="openstack/rabbitmq-server-0" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.988830 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7239a87a-aba2-4367-b1c3-2800f1a130d8-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7239a87a-aba2-4367-b1c3-2800f1a130d8\") " pod="openstack/rabbitmq-server-0" Jan 09 13:54:22 crc kubenswrapper[4919]: I0109 13:54:22.992043 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7239a87a-aba2-4367-b1c3-2800f1a130d8-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7239a87a-aba2-4367-b1c3-2800f1a130d8\") " pod="openstack/rabbitmq-server-0" Jan 09 13:54:23 crc kubenswrapper[4919]: I0109 13:54:23.004838 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 09 13:54:23 crc kubenswrapper[4919]: I0109 13:54:23.017082 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zg5dr\" (UniqueName: \"kubernetes.io/projected/7239a87a-aba2-4367-b1c3-2800f1a130d8-kube-api-access-zg5dr\") pod \"rabbitmq-server-0\" (UID: \"7239a87a-aba2-4367-b1c3-2800f1a130d8\") " pod="openstack/rabbitmq-server-0" Jan 09 13:54:23 crc kubenswrapper[4919]: I0109 13:54:23.017807 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:54:23 crc kubenswrapper[4919]: I0109 13:54:23.025407 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 09 13:54:23 crc kubenswrapper[4919]: I0109 13:54:23.025498 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 09 13:54:23 crc kubenswrapper[4919]: I0109 13:54:23.025758 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 09 13:54:23 crc kubenswrapper[4919]: I0109 13:54:23.025891 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 09 13:54:23 crc kubenswrapper[4919]: I0109 13:54:23.026115 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 09 13:54:23 crc kubenswrapper[4919]: I0109 13:54:23.028931 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-x76gb" Jan 09 13:54:23 crc kubenswrapper[4919]: I0109 13:54:23.029308 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 09 13:54:23 crc kubenswrapper[4919]: I0109 13:54:23.040617 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 09 13:54:23 crc kubenswrapper[4919]: I0109 13:54:23.077078 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"7239a87a-aba2-4367-b1c3-2800f1a130d8\") " pod="openstack/rabbitmq-server-0" Jan 09 13:54:23 crc kubenswrapper[4919]: I0109 13:54:23.082308 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/196a3f64-983f-4369-93cf-9501a68ee8a4-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"196a3f64-983f-4369-93cf-9501a68ee8a4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:54:23 crc kubenswrapper[4919]: I0109 13:54:23.083035 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"196a3f64-983f-4369-93cf-9501a68ee8a4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:54:23 crc kubenswrapper[4919]: I0109 13:54:23.083140 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/196a3f64-983f-4369-93cf-9501a68ee8a4-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"196a3f64-983f-4369-93cf-9501a68ee8a4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:54:23 crc kubenswrapper[4919]: I0109 13:54:23.083305 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/196a3f64-983f-4369-93cf-9501a68ee8a4-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"196a3f64-983f-4369-93cf-9501a68ee8a4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:54:23 crc kubenswrapper[4919]: I0109 13:54:23.083486 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/196a3f64-983f-4369-93cf-9501a68ee8a4-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"196a3f64-983f-4369-93cf-9501a68ee8a4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:54:23 crc kubenswrapper[4919]: I0109 13:54:23.083617 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/196a3f64-983f-4369-93cf-9501a68ee8a4-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"196a3f64-983f-4369-93cf-9501a68ee8a4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:54:23 crc kubenswrapper[4919]: I0109 13:54:23.083738 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/196a3f64-983f-4369-93cf-9501a68ee8a4-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"196a3f64-983f-4369-93cf-9501a68ee8a4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:54:23 crc kubenswrapper[4919]: I0109 13:54:23.083870 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/196a3f64-983f-4369-93cf-9501a68ee8a4-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"196a3f64-983f-4369-93cf-9501a68ee8a4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:54:23 crc kubenswrapper[4919]: I0109 13:54:23.083976 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/196a3f64-983f-4369-93cf-9501a68ee8a4-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"196a3f64-983f-4369-93cf-9501a68ee8a4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:54:23 crc kubenswrapper[4919]: I0109 13:54:23.084067 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g79fz\" (UniqueName: \"kubernetes.io/projected/196a3f64-983f-4369-93cf-9501a68ee8a4-kube-api-access-g79fz\") pod \"rabbitmq-cell1-server-0\" (UID: \"196a3f64-983f-4369-93cf-9501a68ee8a4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:54:23 crc kubenswrapper[4919]: I0109 13:54:23.084150 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/196a3f64-983f-4369-93cf-9501a68ee8a4-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"196a3f64-983f-4369-93cf-9501a68ee8a4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:54:23 crc kubenswrapper[4919]: I0109 13:54:23.186241 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/196a3f64-983f-4369-93cf-9501a68ee8a4-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"196a3f64-983f-4369-93cf-9501a68ee8a4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:54:23 crc kubenswrapper[4919]: I0109 13:54:23.186303 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"196a3f64-983f-4369-93cf-9501a68ee8a4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:54:23 crc kubenswrapper[4919]: I0109 13:54:23.186329 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/196a3f64-983f-4369-93cf-9501a68ee8a4-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"196a3f64-983f-4369-93cf-9501a68ee8a4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:54:23 crc kubenswrapper[4919]: I0109 13:54:23.186403 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/196a3f64-983f-4369-93cf-9501a68ee8a4-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"196a3f64-983f-4369-93cf-9501a68ee8a4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:54:23 crc kubenswrapper[4919]: I0109 13:54:23.186444 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/196a3f64-983f-4369-93cf-9501a68ee8a4-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"196a3f64-983f-4369-93cf-9501a68ee8a4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:54:23 crc kubenswrapper[4919]: I0109 13:54:23.186487 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/196a3f64-983f-4369-93cf-9501a68ee8a4-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"196a3f64-983f-4369-93cf-9501a68ee8a4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:54:23 crc kubenswrapper[4919]: I0109 13:54:23.186530 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/196a3f64-983f-4369-93cf-9501a68ee8a4-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"196a3f64-983f-4369-93cf-9501a68ee8a4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:54:23 crc kubenswrapper[4919]: I0109 13:54:23.186590 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/196a3f64-983f-4369-93cf-9501a68ee8a4-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"196a3f64-983f-4369-93cf-9501a68ee8a4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:54:23 crc kubenswrapper[4919]: I0109 13:54:23.186624 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/196a3f64-983f-4369-93cf-9501a68ee8a4-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"196a3f64-983f-4369-93cf-9501a68ee8a4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:54:23 crc kubenswrapper[4919]: I0109 13:54:23.186642 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g79fz\" (UniqueName: \"kubernetes.io/projected/196a3f64-983f-4369-93cf-9501a68ee8a4-kube-api-access-g79fz\") pod \"rabbitmq-cell1-server-0\" (UID: \"196a3f64-983f-4369-93cf-9501a68ee8a4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:54:23 crc kubenswrapper[4919]: I0109 13:54:23.186662 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/196a3f64-983f-4369-93cf-9501a68ee8a4-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"196a3f64-983f-4369-93cf-9501a68ee8a4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:54:23 crc kubenswrapper[4919]: I0109 13:54:23.187355 4919 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"196a3f64-983f-4369-93cf-9501a68ee8a4\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:54:23 crc kubenswrapper[4919]: I0109 13:54:23.187444 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/196a3f64-983f-4369-93cf-9501a68ee8a4-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"196a3f64-983f-4369-93cf-9501a68ee8a4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:54:23 crc kubenswrapper[4919]: I0109 13:54:23.187716 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/196a3f64-983f-4369-93cf-9501a68ee8a4-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"196a3f64-983f-4369-93cf-9501a68ee8a4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:54:23 crc kubenswrapper[4919]: I0109 13:54:23.188158 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/196a3f64-983f-4369-93cf-9501a68ee8a4-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"196a3f64-983f-4369-93cf-9501a68ee8a4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:54:23 crc kubenswrapper[4919]: I0109 13:54:23.188999 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/196a3f64-983f-4369-93cf-9501a68ee8a4-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"196a3f64-983f-4369-93cf-9501a68ee8a4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:54:23 crc kubenswrapper[4919]: I0109 13:54:23.190193 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/196a3f64-983f-4369-93cf-9501a68ee8a4-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"196a3f64-983f-4369-93cf-9501a68ee8a4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:54:23 crc kubenswrapper[4919]: I0109 13:54:23.192920 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/196a3f64-983f-4369-93cf-9501a68ee8a4-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"196a3f64-983f-4369-93cf-9501a68ee8a4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:54:23 crc kubenswrapper[4919]: I0109 13:54:23.193522 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/196a3f64-983f-4369-93cf-9501a68ee8a4-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"196a3f64-983f-4369-93cf-9501a68ee8a4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:54:23 crc kubenswrapper[4919]: I0109 13:54:23.197866 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/196a3f64-983f-4369-93cf-9501a68ee8a4-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"196a3f64-983f-4369-93cf-9501a68ee8a4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:54:23 crc kubenswrapper[4919]: I0109 13:54:23.213139 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g79fz\" (UniqueName: \"kubernetes.io/projected/196a3f64-983f-4369-93cf-9501a68ee8a4-kube-api-access-g79fz\") pod \"rabbitmq-cell1-server-0\" (UID: \"196a3f64-983f-4369-93cf-9501a68ee8a4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:54:23 crc kubenswrapper[4919]: I0109 13:54:23.215279 4919 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/196a3f64-983f-4369-93cf-9501a68ee8a4-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"196a3f64-983f-4369-93cf-9501a68ee8a4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:54:23 crc kubenswrapper[4919]: I0109 13:54:23.234627 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"196a3f64-983f-4369-93cf-9501a68ee8a4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:54:23 crc kubenswrapper[4919]: I0109 13:54:23.367132 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:54:23 crc kubenswrapper[4919]: I0109 13:54:23.371727 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 09 13:54:23 crc kubenswrapper[4919]: I0109 13:54:23.694029 4919 generic.go:334] "Generic (PLEG): container finished" podID="c82eda79-e18a-4be5-a01f-8d2f8267a76b" containerID="e431b474754233b61e631b6994032925094b992b2a0c35c0368cbad3fb80123b" exitCode=0 Jan 09 13:54:23 crc kubenswrapper[4919]: I0109 13:54:23.695933 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k4g5s" event={"ID":"c82eda79-e18a-4be5-a01f-8d2f8267a76b","Type":"ContainerDied","Data":"e431b474754233b61e631b6994032925094b992b2a0c35c0368cbad3fb80123b"} Jan 09 13:54:23 crc kubenswrapper[4919]: I0109 13:54:23.695963 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k4g5s" event={"ID":"c82eda79-e18a-4be5-a01f-8d2f8267a76b","Type":"ContainerStarted","Data":"2d3e764cacff6d906d3c0071ae2cd89b6245b29025b5ff3c03b5efc89f7ad6fa"} Jan 09 13:54:23 crc kubenswrapper[4919]: I0109 13:54:23.883336 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 09 13:54:24 crc kubenswrapper[4919]: W0109 13:54:24.002070 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod196a3f64_983f_4369_93cf_9501a68ee8a4.slice/crio-52ea9ede0b7a447de6eaf353113e2689b89ca9b093ec3234823da6d73ebd44cf WatchSource:0}: Error finding container 52ea9ede0b7a447de6eaf353113e2689b89ca9b093ec3234823da6d73ebd44cf: Status 404 returned error can't find the container with id 52ea9ede0b7a447de6eaf353113e2689b89ca9b093ec3234823da6d73ebd44cf Jan 09 13:54:24 crc kubenswrapper[4919]: I0109 13:54:24.002907 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 09 13:54:24 crc kubenswrapper[4919]: I0109 13:54:24.095183 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8595b94875-glzm4"] Jan 09 13:54:24 crc kubenswrapper[4919]: I0109 13:54:24.107070 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8595b94875-glzm4" Jan 09 13:54:24 crc kubenswrapper[4919]: I0109 13:54:24.109689 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 09 13:54:24 crc kubenswrapper[4919]: I0109 13:54:24.129697 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8595b94875-glzm4"] Jan 09 13:54:24 crc kubenswrapper[4919]: I0109 13:54:24.205442 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcms7\" (UniqueName: \"kubernetes.io/projected/37da1735-1512-4015-bb06-6babd7d92cb5-kube-api-access-wcms7\") pod \"dnsmasq-dns-8595b94875-glzm4\" (UID: \"37da1735-1512-4015-bb06-6babd7d92cb5\") " pod="openstack/dnsmasq-dns-8595b94875-glzm4" Jan 09 13:54:24 crc kubenswrapper[4919]: I0109 13:54:24.205550 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37da1735-1512-4015-bb06-6babd7d92cb5-config\") pod \"dnsmasq-dns-8595b94875-glzm4\" (UID: \"37da1735-1512-4015-bb06-6babd7d92cb5\") " pod="openstack/dnsmasq-dns-8595b94875-glzm4" Jan 09 13:54:24 crc kubenswrapper[4919]: I0109 13:54:24.205589 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37da1735-1512-4015-bb06-6babd7d92cb5-ovsdbserver-nb\") pod \"dnsmasq-dns-8595b94875-glzm4\" (UID: \"37da1735-1512-4015-bb06-6babd7d92cb5\") " pod="openstack/dnsmasq-dns-8595b94875-glzm4" Jan 09 13:54:24 crc kubenswrapper[4919]: I0109 13:54:24.205673 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37da1735-1512-4015-bb06-6babd7d92cb5-dns-svc\") pod \"dnsmasq-dns-8595b94875-glzm4\" (UID: \"37da1735-1512-4015-bb06-6babd7d92cb5\") " pod="openstack/dnsmasq-dns-8595b94875-glzm4" Jan 09 13:54:24 crc kubenswrapper[4919]: I0109 13:54:24.205696 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/37da1735-1512-4015-bb06-6babd7d92cb5-ovsdbserver-sb\") pod \"dnsmasq-dns-8595b94875-glzm4\" (UID: \"37da1735-1512-4015-bb06-6babd7d92cb5\") " pod="openstack/dnsmasq-dns-8595b94875-glzm4" Jan 09 13:54:24 crc kubenswrapper[4919]: I0109 13:54:24.205730 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/37da1735-1512-4015-bb06-6babd7d92cb5-dns-swift-storage-0\") pod \"dnsmasq-dns-8595b94875-glzm4\" (UID: \"37da1735-1512-4015-bb06-6babd7d92cb5\") " pod="openstack/dnsmasq-dns-8595b94875-glzm4" Jan 09 13:54:24 crc kubenswrapper[4919]: I0109 13:54:24.205796 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/37da1735-1512-4015-bb06-6babd7d92cb5-openstack-edpm-ipam\") pod \"dnsmasq-dns-8595b94875-glzm4\" (UID: \"37da1735-1512-4015-bb06-6babd7d92cb5\") " pod="openstack/dnsmasq-dns-8595b94875-glzm4" Jan 09 13:54:24 crc kubenswrapper[4919]: I0109 13:54:24.307300 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcms7\" (UniqueName: \"kubernetes.io/projected/37da1735-1512-4015-bb06-6babd7d92cb5-kube-api-access-wcms7\") pod 
\"dnsmasq-dns-8595b94875-glzm4\" (UID: \"37da1735-1512-4015-bb06-6babd7d92cb5\") " pod="openstack/dnsmasq-dns-8595b94875-glzm4" Jan 09 13:54:24 crc kubenswrapper[4919]: I0109 13:54:24.307447 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37da1735-1512-4015-bb06-6babd7d92cb5-config\") pod \"dnsmasq-dns-8595b94875-glzm4\" (UID: \"37da1735-1512-4015-bb06-6babd7d92cb5\") " pod="openstack/dnsmasq-dns-8595b94875-glzm4" Jan 09 13:54:24 crc kubenswrapper[4919]: I0109 13:54:24.308276 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37da1735-1512-4015-bb06-6babd7d92cb5-ovsdbserver-nb\") pod \"dnsmasq-dns-8595b94875-glzm4\" (UID: \"37da1735-1512-4015-bb06-6babd7d92cb5\") " pod="openstack/dnsmasq-dns-8595b94875-glzm4" Jan 09 13:54:24 crc kubenswrapper[4919]: I0109 13:54:24.308372 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37da1735-1512-4015-bb06-6babd7d92cb5-config\") pod \"dnsmasq-dns-8595b94875-glzm4\" (UID: \"37da1735-1512-4015-bb06-6babd7d92cb5\") " pod="openstack/dnsmasq-dns-8595b94875-glzm4" Jan 09 13:54:24 crc kubenswrapper[4919]: I0109 13:54:24.308481 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37da1735-1512-4015-bb06-6babd7d92cb5-dns-svc\") pod \"dnsmasq-dns-8595b94875-glzm4\" (UID: \"37da1735-1512-4015-bb06-6babd7d92cb5\") " pod="openstack/dnsmasq-dns-8595b94875-glzm4" Jan 09 13:54:24 crc kubenswrapper[4919]: I0109 13:54:24.308515 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/37da1735-1512-4015-bb06-6babd7d92cb5-ovsdbserver-sb\") pod \"dnsmasq-dns-8595b94875-glzm4\" (UID: \"37da1735-1512-4015-bb06-6babd7d92cb5\") " pod="openstack/dnsmasq-dns-8595b94875-glzm4" Jan 09 13:54:24 crc kubenswrapper[4919]: I0109 13:54:24.308561 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/37da1735-1512-4015-bb06-6babd7d92cb5-dns-swift-storage-0\") pod \"dnsmasq-dns-8595b94875-glzm4\" (UID: \"37da1735-1512-4015-bb06-6babd7d92cb5\") " pod="openstack/dnsmasq-dns-8595b94875-glzm4" Jan 09 13:54:24 crc kubenswrapper[4919]: I0109 13:54:24.308595 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/37da1735-1512-4015-bb06-6babd7d92cb5-openstack-edpm-ipam\") pod \"dnsmasq-dns-8595b94875-glzm4\" (UID: \"37da1735-1512-4015-bb06-6babd7d92cb5\") " pod="openstack/dnsmasq-dns-8595b94875-glzm4" Jan 09 13:54:24 crc kubenswrapper[4919]: I0109 13:54:24.308651 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37da1735-1512-4015-bb06-6babd7d92cb5-ovsdbserver-nb\") pod \"dnsmasq-dns-8595b94875-glzm4\" (UID: \"37da1735-1512-4015-bb06-6babd7d92cb5\") " pod="openstack/dnsmasq-dns-8595b94875-glzm4" Jan 09 13:54:24 crc kubenswrapper[4919]: I0109 13:54:24.309180 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37da1735-1512-4015-bb06-6babd7d92cb5-dns-svc\") pod \"dnsmasq-dns-8595b94875-glzm4\" (UID: \"37da1735-1512-4015-bb06-6babd7d92cb5\") " 
pod="openstack/dnsmasq-dns-8595b94875-glzm4" Jan 09 13:54:24 crc kubenswrapper[4919]: I0109 13:54:24.309306 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/37da1735-1512-4015-bb06-6babd7d92cb5-openstack-edpm-ipam\") pod \"dnsmasq-dns-8595b94875-glzm4\" (UID: \"37da1735-1512-4015-bb06-6babd7d92cb5\") " pod="openstack/dnsmasq-dns-8595b94875-glzm4" Jan 09 13:54:24 crc kubenswrapper[4919]: I0109 13:54:24.309633 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/37da1735-1512-4015-bb06-6babd7d92cb5-ovsdbserver-sb\") pod \"dnsmasq-dns-8595b94875-glzm4\" (UID: \"37da1735-1512-4015-bb06-6babd7d92cb5\") " pod="openstack/dnsmasq-dns-8595b94875-glzm4" Jan 09 13:54:24 crc kubenswrapper[4919]: I0109 13:54:24.309802 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/37da1735-1512-4015-bb06-6babd7d92cb5-dns-swift-storage-0\") pod \"dnsmasq-dns-8595b94875-glzm4\" (UID: \"37da1735-1512-4015-bb06-6babd7d92cb5\") " pod="openstack/dnsmasq-dns-8595b94875-glzm4" Jan 09 13:54:24 crc kubenswrapper[4919]: I0109 13:54:24.346450 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcms7\" (UniqueName: \"kubernetes.io/projected/37da1735-1512-4015-bb06-6babd7d92cb5-kube-api-access-wcms7\") pod \"dnsmasq-dns-8595b94875-glzm4\" (UID: \"37da1735-1512-4015-bb06-6babd7d92cb5\") " pod="openstack/dnsmasq-dns-8595b94875-glzm4" Jan 09 13:54:24 crc kubenswrapper[4919]: I0109 13:54:24.475322 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8595b94875-glzm4" Jan 09 13:54:24 crc kubenswrapper[4919]: I0109 13:54:24.709276 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7239a87a-aba2-4367-b1c3-2800f1a130d8","Type":"ContainerStarted","Data":"b662101e1ecff41af06e7be5077f22b6aeccb41c3aa906277b042720656df771"} Jan 09 13:54:24 crc kubenswrapper[4919]: I0109 13:54:24.711699 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"196a3f64-983f-4369-93cf-9501a68ee8a4","Type":"ContainerStarted","Data":"52ea9ede0b7a447de6eaf353113e2689b89ca9b093ec3234823da6d73ebd44cf"} Jan 09 13:54:24 crc kubenswrapper[4919]: I0109 13:54:24.766518 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b80a84d-c869-407b-b3d2-3be828183ae5" path="/var/lib/kubelet/pods/9b80a84d-c869-407b-b3d2-3be828183ae5/volumes" Jan 09 13:54:24 crc kubenswrapper[4919]: I0109 13:54:24.956649 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8595b94875-glzm4"] Jan 09 13:54:24 crc kubenswrapper[4919]: W0109 13:54:24.989095 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37da1735_1512_4015_bb06_6babd7d92cb5.slice/crio-0920516d110bfffac69666cf25f490deacae5776593a896560c506047ab8fe0d WatchSource:0}: Error finding container 0920516d110bfffac69666cf25f490deacae5776593a896560c506047ab8fe0d: Status 404 returned error can't find the container with id 0920516d110bfffac69666cf25f490deacae5776593a896560c506047ab8fe0d Jan 09 13:54:25 crc kubenswrapper[4919]: I0109 13:54:25.721256 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8595b94875-glzm4" 
event={"ID":"37da1735-1512-4015-bb06-6babd7d92cb5","Type":"ContainerStarted","Data":"0920516d110bfffac69666cf25f490deacae5776593a896560c506047ab8fe0d"} Jan 09 13:54:25 crc kubenswrapper[4919]: I0109 13:54:25.723578 4919 generic.go:334] "Generic (PLEG): container finished" podID="c82eda79-e18a-4be5-a01f-8d2f8267a76b" containerID="87df31d4922ceef79f2371ff5f85d67ed4d0ecd8ec060f1a5efc4025240b9819" exitCode=0 Jan 09 13:54:25 crc kubenswrapper[4919]: I0109 13:54:25.723612 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k4g5s" event={"ID":"c82eda79-e18a-4be5-a01f-8d2f8267a76b","Type":"ContainerDied","Data":"87df31d4922ceef79f2371ff5f85d67ed4d0ecd8ec060f1a5efc4025240b9819"} Jan 09 13:54:26 crc kubenswrapper[4919]: I0109 13:54:26.740368 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7239a87a-aba2-4367-b1c3-2800f1a130d8","Type":"ContainerStarted","Data":"2255ce71a969bd6f2c1d79582c3123cd2fc93dd96d4b35c929e7a60411705e75"} Jan 09 13:54:26 crc kubenswrapper[4919]: I0109 13:54:26.743145 4919 generic.go:334] "Generic (PLEG): container finished" podID="37da1735-1512-4015-bb06-6babd7d92cb5" containerID="e0a491dd84ee409d104f0d1d23e7c62f8743283ee43bf09903f010695bbbb140" exitCode=0 Jan 09 13:54:26 crc kubenswrapper[4919]: I0109 13:54:26.743191 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8595b94875-glzm4" event={"ID":"37da1735-1512-4015-bb06-6babd7d92cb5","Type":"ContainerDied","Data":"e0a491dd84ee409d104f0d1d23e7c62f8743283ee43bf09903f010695bbbb140"} Jan 09 13:54:26 crc kubenswrapper[4919]: I0109 13:54:26.747045 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"196a3f64-983f-4369-93cf-9501a68ee8a4","Type":"ContainerStarted","Data":"44ed11b42f79dc8c0bb220e2de25ca656740a3d91733e4a58af2fbea576f02c0"} Jan 09 13:54:27 crc kubenswrapper[4919]: I0109 13:54:27.762379 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8595b94875-glzm4" event={"ID":"37da1735-1512-4015-bb06-6babd7d92cb5","Type":"ContainerStarted","Data":"3a4bd57049a8d8b23d95883e02a974fb72af0eba92776e64644286a23ea930ce"} Jan 09 13:54:27 crc kubenswrapper[4919]: I0109 13:54:27.763844 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8595b94875-glzm4" Jan 09 13:54:27 crc kubenswrapper[4919]: I0109 13:54:27.765359 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k4g5s" event={"ID":"c82eda79-e18a-4be5-a01f-8d2f8267a76b","Type":"ContainerStarted","Data":"1cbb3879d0b604ff3ac156e4e2ca6820cd6a51de0a79e390809c2b09ce177b64"} Jan 09 13:54:27 crc kubenswrapper[4919]: I0109 13:54:27.792088 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8595b94875-glzm4" podStartSLOduration=3.792063113 podStartE2EDuration="3.792063113s" podCreationTimestamp="2026-01-09 13:54:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:54:27.788621488 +0000 UTC m=+1447.336460938" watchObservedRunningTime="2026-01-09 13:54:27.792063113 +0000 UTC m=+1447.339902563" Jan 09 13:54:27 crc kubenswrapper[4919]: I0109 13:54:27.814974 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k4g5s" podStartSLOduration=2.99387221 podStartE2EDuration="6.814951711s" 
podCreationTimestamp="2026-01-09 13:54:21 +0000 UTC" firstStartedPulling="2026-01-09 13:54:23.697891597 +0000 UTC m=+1443.245731047" lastFinishedPulling="2026-01-09 13:54:27.518971098 +0000 UTC m=+1447.066810548" observedRunningTime="2026-01-09 13:54:27.810783587 +0000 UTC m=+1447.358623057" watchObservedRunningTime="2026-01-09 13:54:27.814951711 +0000 UTC m=+1447.362791161" Jan 09 13:54:31 crc kubenswrapper[4919]: I0109 13:54:31.874993 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k4g5s" Jan 09 13:54:31 crc kubenswrapper[4919]: I0109 13:54:31.875524 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k4g5s" Jan 09 13:54:32 crc kubenswrapper[4919]: I0109 13:54:32.925402 4919 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k4g5s" podUID="c82eda79-e18a-4be5-a01f-8d2f8267a76b" containerName="registry-server" probeResult="failure" output=< Jan 09 13:54:32 crc kubenswrapper[4919]: timeout: failed to connect service ":50051" within 1s Jan 09 13:54:32 crc kubenswrapper[4919]: > Jan 09 13:54:34 crc kubenswrapper[4919]: I0109 13:54:34.477428 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8595b94875-glzm4" Jan 09 13:54:34 crc kubenswrapper[4919]: I0109 13:54:34.533018 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-ksm4l"] Jan 09 13:54:34 crc kubenswrapper[4919]: I0109 13:54:34.533291 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-fcd6f8f8f-ksm4l" podUID="8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e" containerName="dnsmasq-dns" containerID="cri-o://a4613bbaba2979cc772bb68e85a69febce72a66c55af1a66c8f291f65ae58243" gracePeriod=10 Jan 09 13:54:34 crc kubenswrapper[4919]: E0109 13:54:34.757170 4919 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ddeed0e_3f48_4d92_84c5_d2f9535eeb0e.slice/crio-a4613bbaba2979cc772bb68e85a69febce72a66c55af1a66c8f291f65ae58243.scope\": RecentStats: unable to find data in memory cache]" Jan 09 13:54:35 crc kubenswrapper[4919]: I0109 13:54:35.153059 4919 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-fcd6f8f8f-ksm4l" podUID="8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.202:5353: connect: connection refused" Jan 09 13:54:36 crc kubenswrapper[4919]: I0109 13:54:36.268761 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-d7b79b84c-mbtbk"] Jan 09 13:54:36 crc kubenswrapper[4919]: I0109 13:54:36.270819 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d7b79b84c-mbtbk" Jan 09 13:54:36 crc kubenswrapper[4919]: I0109 13:54:36.293022 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d7b79b84c-mbtbk"] Jan 09 13:54:36 crc kubenswrapper[4919]: I0109 13:54:36.457648 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35d091b1-8210-4d82-bde9-2b14bcfb8227-config\") pod \"dnsmasq-dns-d7b79b84c-mbtbk\" (UID: \"35d091b1-8210-4d82-bde9-2b14bcfb8227\") " pod="openstack/dnsmasq-dns-d7b79b84c-mbtbk" Jan 09 13:54:36 crc kubenswrapper[4919]: I0109 13:54:36.457693 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35d091b1-8210-4d82-bde9-2b14bcfb8227-dns-svc\") pod \"dnsmasq-dns-d7b79b84c-mbtbk\" (UID: \"35d091b1-8210-4d82-bde9-2b14bcfb8227\") " pod="openstack/dnsmasq-dns-d7b79b84c-mbtbk" Jan 09 13:54:36 crc kubenswrapper[4919]: I0109 13:54:36.457863 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/35d091b1-8210-4d82-bde9-2b14bcfb8227-openstack-edpm-ipam\") pod \"dnsmasq-dns-d7b79b84c-mbtbk\" (UID: \"35d091b1-8210-4d82-bde9-2b14bcfb8227\") " pod="openstack/dnsmasq-dns-d7b79b84c-mbtbk" Jan 09 13:54:36 crc kubenswrapper[4919]: I0109 13:54:36.457894 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/35d091b1-8210-4d82-bde9-2b14bcfb8227-ovsdbserver-sb\") pod \"dnsmasq-dns-d7b79b84c-mbtbk\" (UID: \"35d091b1-8210-4d82-bde9-2b14bcfb8227\") " pod="openstack/dnsmasq-dns-d7b79b84c-mbtbk" Jan 09 13:54:36 crc kubenswrapper[4919]: I0109 13:54:36.458003 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q79sq\" (UniqueName: \"kubernetes.io/projected/35d091b1-8210-4d82-bde9-2b14bcfb8227-kube-api-access-q79sq\") pod \"dnsmasq-dns-d7b79b84c-mbtbk\" (UID: \"35d091b1-8210-4d82-bde9-2b14bcfb8227\") " pod="openstack/dnsmasq-dns-d7b79b84c-mbtbk" Jan 09 13:54:36 crc kubenswrapper[4919]: I0109 13:54:36.458059 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/35d091b1-8210-4d82-bde9-2b14bcfb8227-ovsdbserver-nb\") pod \"dnsmasq-dns-d7b79b84c-mbtbk\" (UID: \"35d091b1-8210-4d82-bde9-2b14bcfb8227\") " pod="openstack/dnsmasq-dns-d7b79b84c-mbtbk" Jan 09 13:54:36 crc kubenswrapper[4919]: I0109 13:54:36.458078 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/35d091b1-8210-4d82-bde9-2b14bcfb8227-dns-swift-storage-0\") pod \"dnsmasq-dns-d7b79b84c-mbtbk\" (UID: \"35d091b1-8210-4d82-bde9-2b14bcfb8227\") " pod="openstack/dnsmasq-dns-d7b79b84c-mbtbk" Jan 09 13:54:36 crc kubenswrapper[4919]: I0109 13:54:36.560188 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q79sq\" (UniqueName: \"kubernetes.io/projected/35d091b1-8210-4d82-bde9-2b14bcfb8227-kube-api-access-q79sq\") pod \"dnsmasq-dns-d7b79b84c-mbtbk\" (UID: \"35d091b1-8210-4d82-bde9-2b14bcfb8227\") " pod="openstack/dnsmasq-dns-d7b79b84c-mbtbk" Jan 09 13:54:36 crc kubenswrapper[4919]: I0109 13:54:36.560605 4919 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/35d091b1-8210-4d82-bde9-2b14bcfb8227-ovsdbserver-nb\") pod \"dnsmasq-dns-d7b79b84c-mbtbk\" (UID: \"35d091b1-8210-4d82-bde9-2b14bcfb8227\") " pod="openstack/dnsmasq-dns-d7b79b84c-mbtbk" Jan 09 13:54:36 crc kubenswrapper[4919]: I0109 13:54:36.560625 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/35d091b1-8210-4d82-bde9-2b14bcfb8227-dns-swift-storage-0\") pod \"dnsmasq-dns-d7b79b84c-mbtbk\" (UID: \"35d091b1-8210-4d82-bde9-2b14bcfb8227\") " pod="openstack/dnsmasq-dns-d7b79b84c-mbtbk" Jan 09 13:54:36 crc kubenswrapper[4919]: I0109 13:54:36.560654 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35d091b1-8210-4d82-bde9-2b14bcfb8227-config\") pod \"dnsmasq-dns-d7b79b84c-mbtbk\" (UID: \"35d091b1-8210-4d82-bde9-2b14bcfb8227\") " pod="openstack/dnsmasq-dns-d7b79b84c-mbtbk" Jan 09 13:54:36 crc kubenswrapper[4919]: I0109 13:54:36.560679 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35d091b1-8210-4d82-bde9-2b14bcfb8227-dns-svc\") pod \"dnsmasq-dns-d7b79b84c-mbtbk\" (UID: \"35d091b1-8210-4d82-bde9-2b14bcfb8227\") " pod="openstack/dnsmasq-dns-d7b79b84c-mbtbk" Jan 09 13:54:36 crc kubenswrapper[4919]: I0109 13:54:36.560739 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/35d091b1-8210-4d82-bde9-2b14bcfb8227-openstack-edpm-ipam\") pod \"dnsmasq-dns-d7b79b84c-mbtbk\" (UID: \"35d091b1-8210-4d82-bde9-2b14bcfb8227\") " pod="openstack/dnsmasq-dns-d7b79b84c-mbtbk" Jan 09 13:54:36 crc kubenswrapper[4919]: I0109 13:54:36.560765 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/35d091b1-8210-4d82-bde9-2b14bcfb8227-ovsdbserver-sb\") pod \"dnsmasq-dns-d7b79b84c-mbtbk\" (UID: \"35d091b1-8210-4d82-bde9-2b14bcfb8227\") " pod="openstack/dnsmasq-dns-d7b79b84c-mbtbk" Jan 09 13:54:36 crc kubenswrapper[4919]: I0109 13:54:36.561934 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/35d091b1-8210-4d82-bde9-2b14bcfb8227-openstack-edpm-ipam\") pod \"dnsmasq-dns-d7b79b84c-mbtbk\" (UID: \"35d091b1-8210-4d82-bde9-2b14bcfb8227\") " pod="openstack/dnsmasq-dns-d7b79b84c-mbtbk" Jan 09 13:54:36 crc kubenswrapper[4919]: I0109 13:54:36.561959 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/35d091b1-8210-4d82-bde9-2b14bcfb8227-ovsdbserver-sb\") pod \"dnsmasq-dns-d7b79b84c-mbtbk\" (UID: \"35d091b1-8210-4d82-bde9-2b14bcfb8227\") " pod="openstack/dnsmasq-dns-d7b79b84c-mbtbk" Jan 09 13:54:36 crc kubenswrapper[4919]: I0109 13:54:36.562059 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35d091b1-8210-4d82-bde9-2b14bcfb8227-config\") pod \"dnsmasq-dns-d7b79b84c-mbtbk\" (UID: \"35d091b1-8210-4d82-bde9-2b14bcfb8227\") " pod="openstack/dnsmasq-dns-d7b79b84c-mbtbk" Jan 09 13:54:36 crc kubenswrapper[4919]: I0109 13:54:36.562247 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/35d091b1-8210-4d82-bde9-2b14bcfb8227-ovsdbserver-nb\") pod \"dnsmasq-dns-d7b79b84c-mbtbk\" (UID: \"35d091b1-8210-4d82-bde9-2b14bcfb8227\") " pod="openstack/dnsmasq-dns-d7b79b84c-mbtbk" Jan 09 13:54:36 crc kubenswrapper[4919]: I0109 13:54:36.562270 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/35d091b1-8210-4d82-bde9-2b14bcfb8227-dns-swift-storage-0\") pod \"dnsmasq-dns-d7b79b84c-mbtbk\" (UID: \"35d091b1-8210-4d82-bde9-2b14bcfb8227\") " pod="openstack/dnsmasq-dns-d7b79b84c-mbtbk" Jan 09 13:54:36 crc kubenswrapper[4919]: I0109 13:54:36.562570 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35d091b1-8210-4d82-bde9-2b14bcfb8227-dns-svc\") pod \"dnsmasq-dns-d7b79b84c-mbtbk\" (UID: \"35d091b1-8210-4d82-bde9-2b14bcfb8227\") " pod="openstack/dnsmasq-dns-d7b79b84c-mbtbk" Jan 09 13:54:36 crc kubenswrapper[4919]: I0109 13:54:36.588878 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q79sq\" (UniqueName: \"kubernetes.io/projected/35d091b1-8210-4d82-bde9-2b14bcfb8227-kube-api-access-q79sq\") pod \"dnsmasq-dns-d7b79b84c-mbtbk\" (UID: \"35d091b1-8210-4d82-bde9-2b14bcfb8227\") " pod="openstack/dnsmasq-dns-d7b79b84c-mbtbk" Jan 09 13:54:36 crc kubenswrapper[4919]: I0109 13:54:36.589384 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d7b79b84c-mbtbk" Jan 09 13:54:36 crc kubenswrapper[4919]: I0109 13:54:36.841458 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcd6f8f8f-ksm4l" Jan 09 13:54:36 crc kubenswrapper[4919]: I0109 13:54:36.874415 4919 generic.go:334] "Generic (PLEG): container finished" podID="8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e" containerID="a4613bbaba2979cc772bb68e85a69febce72a66c55af1a66c8f291f65ae58243" exitCode=0 Jan 09 13:54:36 crc kubenswrapper[4919]: I0109 13:54:36.874477 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcd6f8f8f-ksm4l" event={"ID":"8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e","Type":"ContainerDied","Data":"a4613bbaba2979cc772bb68e85a69febce72a66c55af1a66c8f291f65ae58243"} Jan 09 13:54:36 crc kubenswrapper[4919]: I0109 13:54:36.874509 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fcd6f8f8f-ksm4l" Jan 09 13:54:36 crc kubenswrapper[4919]: I0109 13:54:36.874529 4919 scope.go:117] "RemoveContainer" containerID="a4613bbaba2979cc772bb68e85a69febce72a66c55af1a66c8f291f65ae58243" Jan 09 13:54:36 crc kubenswrapper[4919]: I0109 13:54:36.874513 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcd6f8f8f-ksm4l" event={"ID":"8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e","Type":"ContainerDied","Data":"648d65549488cccab17af8ce162249db2a720ddf117a399846395e3e15274a81"} Jan 09 13:54:36 crc kubenswrapper[4919]: I0109 13:54:36.901487 4919 scope.go:117] "RemoveContainer" containerID="551c38d2aa7cee3833dbbabf647ca4d9e93443818bc301168f4b5051affda995" Jan 09 13:54:36 crc kubenswrapper[4919]: I0109 13:54:36.936508 4919 scope.go:117] "RemoveContainer" containerID="a4613bbaba2979cc772bb68e85a69febce72a66c55af1a66c8f291f65ae58243" Jan 09 13:54:36 crc kubenswrapper[4919]: E0109 13:54:36.937532 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4613bbaba2979cc772bb68e85a69febce72a66c55af1a66c8f291f65ae58243\": container with ID starting with a4613bbaba2979cc772bb68e85a69febce72a66c55af1a66c8f291f65ae58243 not found: ID does not exist" containerID="a4613bbaba2979cc772bb68e85a69febce72a66c55af1a66c8f291f65ae58243" Jan 09 13:54:36 crc kubenswrapper[4919]: I0109 13:54:36.937594 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4613bbaba2979cc772bb68e85a69febce72a66c55af1a66c8f291f65ae58243"} err="failed to get container status \"a4613bbaba2979cc772bb68e85a69febce72a66c55af1a66c8f291f65ae58243\": rpc error: code = NotFound desc = could not find container \"a4613bbaba2979cc772bb68e85a69febce72a66c55af1a66c8f291f65ae58243\": container with ID starting with a4613bbaba2979cc772bb68e85a69febce72a66c55af1a66c8f291f65ae58243 not found: ID does not exist" Jan 09 13:54:36 crc kubenswrapper[4919]: I0109 13:54:36.937629 4919 scope.go:117] "RemoveContainer" containerID="551c38d2aa7cee3833dbbabf647ca4d9e93443818bc301168f4b5051affda995" Jan 09 13:54:36 crc kubenswrapper[4919]: E0109 13:54:36.938009 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"551c38d2aa7cee3833dbbabf647ca4d9e93443818bc301168f4b5051affda995\": container with ID starting with 551c38d2aa7cee3833dbbabf647ca4d9e93443818bc301168f4b5051affda995 not found: ID does not exist" containerID="551c38d2aa7cee3833dbbabf647ca4d9e93443818bc301168f4b5051affda995" Jan 09 13:54:36 crc kubenswrapper[4919]: I0109 13:54:36.938074 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"551c38d2aa7cee3833dbbabf647ca4d9e93443818bc301168f4b5051affda995"} err="failed to get container status \"551c38d2aa7cee3833dbbabf647ca4d9e93443818bc301168f4b5051affda995\": rpc error: code = NotFound desc = could not find container \"551c38d2aa7cee3833dbbabf647ca4d9e93443818bc301168f4b5051affda995\": container with ID starting with 551c38d2aa7cee3833dbbabf647ca4d9e93443818bc301168f4b5051affda995 not found: ID does not exist" Jan 09 13:54:36 crc kubenswrapper[4919]: I0109 13:54:36.971670 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e-dns-svc\") pod \"8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e\" (UID: \"8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e\") " 
Jan 09 13:54:36 crc kubenswrapper[4919]: I0109 13:54:36.971811 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpdp8\" (UniqueName: \"kubernetes.io/projected/8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e-kube-api-access-bpdp8\") pod \"8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e\" (UID: \"8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e\") " Jan 09 13:54:36 crc kubenswrapper[4919]: I0109 13:54:36.971902 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e-ovsdbserver-nb\") pod \"8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e\" (UID: \"8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e\") " Jan 09 13:54:36 crc kubenswrapper[4919]: I0109 13:54:36.971946 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e-ovsdbserver-sb\") pod \"8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e\" (UID: \"8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e\") " Jan 09 13:54:36 crc kubenswrapper[4919]: I0109 13:54:36.971998 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e-dns-swift-storage-0\") pod \"8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e\" (UID: \"8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e\") " Jan 09 13:54:36 crc kubenswrapper[4919]: I0109 13:54:36.972448 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e-config\") pod \"8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e\" (UID: \"8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e\") " Jan 09 13:54:36 crc kubenswrapper[4919]: I0109 13:54:36.978687 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e-kube-api-access-bpdp8" (OuterVolumeSpecName: "kube-api-access-bpdp8") pod "8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e" (UID: "8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e"). InnerVolumeSpecName "kube-api-access-bpdp8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:54:37 crc kubenswrapper[4919]: I0109 13:54:37.024086 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e" (UID: "8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:54:37 crc kubenswrapper[4919]: I0109 13:54:37.026505 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e" (UID: "8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:54:37 crc kubenswrapper[4919]: I0109 13:54:37.027734 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e" (UID: "8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:54:37 crc kubenswrapper[4919]: I0109 13:54:37.033804 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e" (UID: "8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:54:37 crc kubenswrapper[4919]: I0109 13:54:37.039242 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e-config" (OuterVolumeSpecName: "config") pod "8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e" (UID: "8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:54:37 crc kubenswrapper[4919]: I0109 13:54:37.074870 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpdp8\" (UniqueName: \"kubernetes.io/projected/8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e-kube-api-access-bpdp8\") on node \"crc\" DevicePath \"\"" Jan 09 13:54:37 crc kubenswrapper[4919]: I0109 13:54:37.074917 4919 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 09 13:54:37 crc kubenswrapper[4919]: I0109 13:54:37.074929 4919 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 09 13:54:37 crc kubenswrapper[4919]: I0109 13:54:37.074938 4919 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 09 13:54:37 crc kubenswrapper[4919]: I0109 13:54:37.074948 4919 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e-config\") on node \"crc\" DevicePath \"\"" Jan 09 13:54:37 crc kubenswrapper[4919]: I0109 13:54:37.074957 4919 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 09 13:54:37 crc kubenswrapper[4919]: W0109 13:54:37.114204 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod35d091b1_8210_4d82_bde9_2b14bcfb8227.slice/crio-16ee56613c5dfe62eaa4b4f0723e7e6e853dfeb3b88f72e133f3c9b93678af2d WatchSource:0}: Error finding container 16ee56613c5dfe62eaa4b4f0723e7e6e853dfeb3b88f72e133f3c9b93678af2d: Status 404 returned error can't find the container with id 16ee56613c5dfe62eaa4b4f0723e7e6e853dfeb3b88f72e133f3c9b93678af2d Jan 09 13:54:37 crc kubenswrapper[4919]: I0109 13:54:37.114673 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d7b79b84c-mbtbk"] Jan 09 13:54:37 crc kubenswrapper[4919]: I0109 13:54:37.212228 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-ksm4l"] Jan 09 13:54:37 crc kubenswrapper[4919]: I0109 13:54:37.229134 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-ksm4l"] Jan 09 
13:54:37 crc kubenswrapper[4919]: I0109 13:54:37.888122 4919 generic.go:334] "Generic (PLEG): container finished" podID="35d091b1-8210-4d82-bde9-2b14bcfb8227" containerID="93981d6f8cc42348363723f5d6cd91f7441a14d936abce5ca1f13317b764f7ec" exitCode=0
Jan 09 13:54:37 crc kubenswrapper[4919]: I0109 13:54:37.888179 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d7b79b84c-mbtbk" event={"ID":"35d091b1-8210-4d82-bde9-2b14bcfb8227","Type":"ContainerDied","Data":"93981d6f8cc42348363723f5d6cd91f7441a14d936abce5ca1f13317b764f7ec"}
Jan 09 13:54:37 crc kubenswrapper[4919]: I0109 13:54:37.888233 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d7b79b84c-mbtbk" event={"ID":"35d091b1-8210-4d82-bde9-2b14bcfb8227","Type":"ContainerStarted","Data":"16ee56613c5dfe62eaa4b4f0723e7e6e853dfeb3b88f72e133f3c9b93678af2d"}
Jan 09 13:54:38 crc kubenswrapper[4919]: I0109 13:54:38.763298 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e" path="/var/lib/kubelet/pods/8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e/volumes"
Jan 09 13:54:38 crc kubenswrapper[4919]: I0109 13:54:38.899365 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d7b79b84c-mbtbk" event={"ID":"35d091b1-8210-4d82-bde9-2b14bcfb8227","Type":"ContainerStarted","Data":"9fb0140189fac9ef219a95ebdfc6221ec1023283e9ebaf64cf15f32ccaf00ebb"}
Jan 09 13:54:38 crc kubenswrapper[4919]: I0109 13:54:38.899526 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-d7b79b84c-mbtbk"
Jan 09 13:54:38 crc kubenswrapper[4919]: I0109 13:54:38.920550 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-d7b79b84c-mbtbk" podStartSLOduration=2.920530054 podStartE2EDuration="2.920530054s" podCreationTimestamp="2026-01-09 13:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:54:38.917467058 +0000 UTC m=+1458.465306538" watchObservedRunningTime="2026-01-09 13:54:38.920530054 +0000 UTC m=+1458.468369514"
Jan 09 13:54:41 crc kubenswrapper[4919]: I0109 13:54:41.921085 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k4g5s"
Jan 09 13:54:41 crc kubenswrapper[4919]: I0109 13:54:41.974100 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k4g5s"
Jan 09 13:54:42 crc kubenswrapper[4919]: I0109 13:54:42.158755 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k4g5s"]
Jan 09 13:54:43 crc kubenswrapper[4919]: I0109 13:54:43.951573 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-k4g5s" podUID="c82eda79-e18a-4be5-a01f-8d2f8267a76b" containerName="registry-server" containerID="cri-o://1cbb3879d0b604ff3ac156e4e2ca6820cd6a51de0a79e390809c2b09ce177b64" gracePeriod=2
Jan 09 13:54:44 crc kubenswrapper[4919]: I0109 13:54:44.399156 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k4g5s"
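In the pod_startup_latency_tracker entry above, podStartE2EDuration measures pod creation to observed running, while podStartSLOduration appears to exclude image-pull time: for dnsmasq-dns-d7b79b84c-mbtbk the pull timestamps are the zero value (image already present) and the two durations are identical, whereas the repo-setup pod later in this capture shows about 2.1s against 15.3s, with roughly thirteen seconds spent pulling. A small extraction makes those gaps easy to spot across a whole log; this Python sketch follows the same placeholder-path convention as the earlier snippet.

import re
import sys

# "Observed pod startup duration" entries carry both durations; a large gap
# between the e2e and SLO values means image pulls dominated startup.
LAT_RE = re.compile(
    r'"Observed pod startup duration" pod="([^"]+)"'
    r' podStartSLOduration=([\d.]+) podStartE2EDuration="([\d.]+)s"'
)

with open(sys.argv[1] if len(sys.argv) > 1 else "kubelet.log") as fh:
    for line in fh:
        for pod, slo, e2e in LAT_RE.findall(line):
            gap = float(e2e) - float(slo)
            print(f"{pod}: e2e={float(e2e):.1f}s pull={gap:.1f}s")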
Need to start a new one" pod="openshift-marketplace/redhat-operators-k4g5s" Jan 09 13:54:44 crc kubenswrapper[4919]: I0109 13:54:44.521768 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c82eda79-e18a-4be5-a01f-8d2f8267a76b-catalog-content\") pod \"c82eda79-e18a-4be5-a01f-8d2f8267a76b\" (UID: \"c82eda79-e18a-4be5-a01f-8d2f8267a76b\") " Jan 09 13:54:44 crc kubenswrapper[4919]: I0109 13:54:44.521851 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c82eda79-e18a-4be5-a01f-8d2f8267a76b-utilities\") pod \"c82eda79-e18a-4be5-a01f-8d2f8267a76b\" (UID: \"c82eda79-e18a-4be5-a01f-8d2f8267a76b\") " Jan 09 13:54:44 crc kubenswrapper[4919]: I0109 13:54:44.521898 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nz9jq\" (UniqueName: \"kubernetes.io/projected/c82eda79-e18a-4be5-a01f-8d2f8267a76b-kube-api-access-nz9jq\") pod \"c82eda79-e18a-4be5-a01f-8d2f8267a76b\" (UID: \"c82eda79-e18a-4be5-a01f-8d2f8267a76b\") " Jan 09 13:54:44 crc kubenswrapper[4919]: I0109 13:54:44.522730 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c82eda79-e18a-4be5-a01f-8d2f8267a76b-utilities" (OuterVolumeSpecName: "utilities") pod "c82eda79-e18a-4be5-a01f-8d2f8267a76b" (UID: "c82eda79-e18a-4be5-a01f-8d2f8267a76b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:54:44 crc kubenswrapper[4919]: I0109 13:54:44.524517 4919 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c82eda79-e18a-4be5-a01f-8d2f8267a76b-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 13:54:44 crc kubenswrapper[4919]: I0109 13:54:44.529791 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c82eda79-e18a-4be5-a01f-8d2f8267a76b-kube-api-access-nz9jq" (OuterVolumeSpecName: "kube-api-access-nz9jq") pod "c82eda79-e18a-4be5-a01f-8d2f8267a76b" (UID: "c82eda79-e18a-4be5-a01f-8d2f8267a76b"). InnerVolumeSpecName "kube-api-access-nz9jq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:54:44 crc kubenswrapper[4919]: I0109 13:54:44.626353 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nz9jq\" (UniqueName: \"kubernetes.io/projected/c82eda79-e18a-4be5-a01f-8d2f8267a76b-kube-api-access-nz9jq\") on node \"crc\" DevicePath \"\"" Jan 09 13:54:44 crc kubenswrapper[4919]: I0109 13:54:44.630780 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c82eda79-e18a-4be5-a01f-8d2f8267a76b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c82eda79-e18a-4be5-a01f-8d2f8267a76b" (UID: "c82eda79-e18a-4be5-a01f-8d2f8267a76b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:54:44 crc kubenswrapper[4919]: I0109 13:54:44.728782 4919 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c82eda79-e18a-4be5-a01f-8d2f8267a76b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 13:54:44 crc kubenswrapper[4919]: I0109 13:54:44.961137 4919 generic.go:334] "Generic (PLEG): container finished" podID="c82eda79-e18a-4be5-a01f-8d2f8267a76b" containerID="1cbb3879d0b604ff3ac156e4e2ca6820cd6a51de0a79e390809c2b09ce177b64" exitCode=0 Jan 09 13:54:44 crc kubenswrapper[4919]: I0109 13:54:44.961183 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k4g5s" event={"ID":"c82eda79-e18a-4be5-a01f-8d2f8267a76b","Type":"ContainerDied","Data":"1cbb3879d0b604ff3ac156e4e2ca6820cd6a51de0a79e390809c2b09ce177b64"} Jan 09 13:54:44 crc kubenswrapper[4919]: I0109 13:54:44.961227 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k4g5s" event={"ID":"c82eda79-e18a-4be5-a01f-8d2f8267a76b","Type":"ContainerDied","Data":"2d3e764cacff6d906d3c0071ae2cd89b6245b29025b5ff3c03b5efc89f7ad6fa"} Jan 09 13:54:44 crc kubenswrapper[4919]: I0109 13:54:44.961244 4919 scope.go:117] "RemoveContainer" containerID="1cbb3879d0b604ff3ac156e4e2ca6820cd6a51de0a79e390809c2b09ce177b64" Jan 09 13:54:44 crc kubenswrapper[4919]: I0109 13:54:44.961241 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k4g5s" Jan 09 13:54:44 crc kubenswrapper[4919]: I0109 13:54:44.986449 4919 scope.go:117] "RemoveContainer" containerID="87df31d4922ceef79f2371ff5f85d67ed4d0ecd8ec060f1a5efc4025240b9819" Jan 09 13:54:45 crc kubenswrapper[4919]: I0109 13:54:45.012993 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k4g5s"] Jan 09 13:54:45 crc kubenswrapper[4919]: I0109 13:54:45.018374 4919 scope.go:117] "RemoveContainer" containerID="e431b474754233b61e631b6994032925094b992b2a0c35c0368cbad3fb80123b" Jan 09 13:54:45 crc kubenswrapper[4919]: I0109 13:54:45.018771 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-k4g5s"] Jan 09 13:54:45 crc kubenswrapper[4919]: I0109 13:54:45.071912 4919 scope.go:117] "RemoveContainer" containerID="1cbb3879d0b604ff3ac156e4e2ca6820cd6a51de0a79e390809c2b09ce177b64" Jan 09 13:54:45 crc kubenswrapper[4919]: E0109 13:54:45.072582 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1cbb3879d0b604ff3ac156e4e2ca6820cd6a51de0a79e390809c2b09ce177b64\": container with ID starting with 1cbb3879d0b604ff3ac156e4e2ca6820cd6a51de0a79e390809c2b09ce177b64 not found: ID does not exist" containerID="1cbb3879d0b604ff3ac156e4e2ca6820cd6a51de0a79e390809c2b09ce177b64" Jan 09 13:54:45 crc kubenswrapper[4919]: I0109 13:54:45.072687 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cbb3879d0b604ff3ac156e4e2ca6820cd6a51de0a79e390809c2b09ce177b64"} err="failed to get container status \"1cbb3879d0b604ff3ac156e4e2ca6820cd6a51de0a79e390809c2b09ce177b64\": rpc error: code = NotFound desc = could not find container \"1cbb3879d0b604ff3ac156e4e2ca6820cd6a51de0a79e390809c2b09ce177b64\": container with ID starting with 1cbb3879d0b604ff3ac156e4e2ca6820cd6a51de0a79e390809c2b09ce177b64 not found: ID does not exist" Jan 09 13:54:45 crc 
kubenswrapper[4919]: I0109 13:54:45.072780 4919 scope.go:117] "RemoveContainer" containerID="87df31d4922ceef79f2371ff5f85d67ed4d0ecd8ec060f1a5efc4025240b9819" Jan 09 13:54:45 crc kubenswrapper[4919]: E0109 13:54:45.073324 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87df31d4922ceef79f2371ff5f85d67ed4d0ecd8ec060f1a5efc4025240b9819\": container with ID starting with 87df31d4922ceef79f2371ff5f85d67ed4d0ecd8ec060f1a5efc4025240b9819 not found: ID does not exist" containerID="87df31d4922ceef79f2371ff5f85d67ed4d0ecd8ec060f1a5efc4025240b9819" Jan 09 13:54:45 crc kubenswrapper[4919]: I0109 13:54:45.073426 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87df31d4922ceef79f2371ff5f85d67ed4d0ecd8ec060f1a5efc4025240b9819"} err="failed to get container status \"87df31d4922ceef79f2371ff5f85d67ed4d0ecd8ec060f1a5efc4025240b9819\": rpc error: code = NotFound desc = could not find container \"87df31d4922ceef79f2371ff5f85d67ed4d0ecd8ec060f1a5efc4025240b9819\": container with ID starting with 87df31d4922ceef79f2371ff5f85d67ed4d0ecd8ec060f1a5efc4025240b9819 not found: ID does not exist" Jan 09 13:54:45 crc kubenswrapper[4919]: I0109 13:54:45.073506 4919 scope.go:117] "RemoveContainer" containerID="e431b474754233b61e631b6994032925094b992b2a0c35c0368cbad3fb80123b" Jan 09 13:54:45 crc kubenswrapper[4919]: E0109 13:54:45.073844 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e431b474754233b61e631b6994032925094b992b2a0c35c0368cbad3fb80123b\": container with ID starting with e431b474754233b61e631b6994032925094b992b2a0c35c0368cbad3fb80123b not found: ID does not exist" containerID="e431b474754233b61e631b6994032925094b992b2a0c35c0368cbad3fb80123b" Jan 09 13:54:45 crc kubenswrapper[4919]: I0109 13:54:45.073886 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e431b474754233b61e631b6994032925094b992b2a0c35c0368cbad3fb80123b"} err="failed to get container status \"e431b474754233b61e631b6994032925094b992b2a0c35c0368cbad3fb80123b\": rpc error: code = NotFound desc = could not find container \"e431b474754233b61e631b6994032925094b992b2a0c35c0368cbad3fb80123b\": container with ID starting with e431b474754233b61e631b6994032925094b992b2a0c35c0368cbad3fb80123b not found: ID does not exist" Jan 09 13:54:46 crc kubenswrapper[4919]: I0109 13:54:46.591385 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-d7b79b84c-mbtbk" Jan 09 13:54:46 crc kubenswrapper[4919]: I0109 13:54:46.664379 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8595b94875-glzm4"] Jan 09 13:54:46 crc kubenswrapper[4919]: I0109 13:54:46.664787 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8595b94875-glzm4" podUID="37da1735-1512-4015-bb06-6babd7d92cb5" containerName="dnsmasq-dns" containerID="cri-o://3a4bd57049a8d8b23d95883e02a974fb72af0eba92776e64644286a23ea930ce" gracePeriod=10 Jan 09 13:54:46 crc kubenswrapper[4919]: I0109 13:54:46.763370 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c82eda79-e18a-4be5-a01f-8d2f8267a76b" path="/var/lib/kubelet/pods/c82eda79-e18a-4be5-a01f-8d2f8267a76b/volumes" Jan 09 13:54:47 crc kubenswrapper[4919]: I0109 13:54:47.021814 4919 generic.go:334] "Generic (PLEG): container finished" 
podID="37da1735-1512-4015-bb06-6babd7d92cb5" containerID="3a4bd57049a8d8b23d95883e02a974fb72af0eba92776e64644286a23ea930ce" exitCode=0 Jan 09 13:54:47 crc kubenswrapper[4919]: I0109 13:54:47.021872 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8595b94875-glzm4" event={"ID":"37da1735-1512-4015-bb06-6babd7d92cb5","Type":"ContainerDied","Data":"3a4bd57049a8d8b23d95883e02a974fb72af0eba92776e64644286a23ea930ce"} Jan 09 13:54:47 crc kubenswrapper[4919]: I0109 13:54:47.259854 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8595b94875-glzm4" Jan 09 13:54:47 crc kubenswrapper[4919]: I0109 13:54:47.390676 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/37da1735-1512-4015-bb06-6babd7d92cb5-openstack-edpm-ipam\") pod \"37da1735-1512-4015-bb06-6babd7d92cb5\" (UID: \"37da1735-1512-4015-bb06-6babd7d92cb5\") " Jan 09 13:54:47 crc kubenswrapper[4919]: I0109 13:54:47.390711 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcms7\" (UniqueName: \"kubernetes.io/projected/37da1735-1512-4015-bb06-6babd7d92cb5-kube-api-access-wcms7\") pod \"37da1735-1512-4015-bb06-6babd7d92cb5\" (UID: \"37da1735-1512-4015-bb06-6babd7d92cb5\") " Jan 09 13:54:47 crc kubenswrapper[4919]: I0109 13:54:47.390829 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37da1735-1512-4015-bb06-6babd7d92cb5-config\") pod \"37da1735-1512-4015-bb06-6babd7d92cb5\" (UID: \"37da1735-1512-4015-bb06-6babd7d92cb5\") " Jan 09 13:54:47 crc kubenswrapper[4919]: I0109 13:54:47.390867 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/37da1735-1512-4015-bb06-6babd7d92cb5-ovsdbserver-sb\") pod \"37da1735-1512-4015-bb06-6babd7d92cb5\" (UID: \"37da1735-1512-4015-bb06-6babd7d92cb5\") " Jan 09 13:54:47 crc kubenswrapper[4919]: I0109 13:54:47.391032 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/37da1735-1512-4015-bb06-6babd7d92cb5-dns-swift-storage-0\") pod \"37da1735-1512-4015-bb06-6babd7d92cb5\" (UID: \"37da1735-1512-4015-bb06-6babd7d92cb5\") " Jan 09 13:54:47 crc kubenswrapper[4919]: I0109 13:54:47.391060 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37da1735-1512-4015-bb06-6babd7d92cb5-ovsdbserver-nb\") pod \"37da1735-1512-4015-bb06-6babd7d92cb5\" (UID: \"37da1735-1512-4015-bb06-6babd7d92cb5\") " Jan 09 13:54:47 crc kubenswrapper[4919]: I0109 13:54:47.391084 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37da1735-1512-4015-bb06-6babd7d92cb5-dns-svc\") pod \"37da1735-1512-4015-bb06-6babd7d92cb5\" (UID: \"37da1735-1512-4015-bb06-6babd7d92cb5\") " Jan 09 13:54:47 crc kubenswrapper[4919]: I0109 13:54:47.398860 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37da1735-1512-4015-bb06-6babd7d92cb5-kube-api-access-wcms7" (OuterVolumeSpecName: "kube-api-access-wcms7") pod "37da1735-1512-4015-bb06-6babd7d92cb5" (UID: "37da1735-1512-4015-bb06-6babd7d92cb5"). InnerVolumeSpecName "kube-api-access-wcms7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:54:47 crc kubenswrapper[4919]: I0109 13:54:47.443608 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37da1735-1512-4015-bb06-6babd7d92cb5-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "37da1735-1512-4015-bb06-6babd7d92cb5" (UID: "37da1735-1512-4015-bb06-6babd7d92cb5"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:54:47 crc kubenswrapper[4919]: I0109 13:54:47.444657 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37da1735-1512-4015-bb06-6babd7d92cb5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "37da1735-1512-4015-bb06-6babd7d92cb5" (UID: "37da1735-1512-4015-bb06-6babd7d92cb5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:54:47 crc kubenswrapper[4919]: I0109 13:54:47.447139 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37da1735-1512-4015-bb06-6babd7d92cb5-config" (OuterVolumeSpecName: "config") pod "37da1735-1512-4015-bb06-6babd7d92cb5" (UID: "37da1735-1512-4015-bb06-6babd7d92cb5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:54:47 crc kubenswrapper[4919]: I0109 13:54:47.452951 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37da1735-1512-4015-bb06-6babd7d92cb5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "37da1735-1512-4015-bb06-6babd7d92cb5" (UID: "37da1735-1512-4015-bb06-6babd7d92cb5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:54:47 crc kubenswrapper[4919]: I0109 13:54:47.466097 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37da1735-1512-4015-bb06-6babd7d92cb5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "37da1735-1512-4015-bb06-6babd7d92cb5" (UID: "37da1735-1512-4015-bb06-6babd7d92cb5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:54:47 crc kubenswrapper[4919]: I0109 13:54:47.468905 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37da1735-1512-4015-bb06-6babd7d92cb5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "37da1735-1512-4015-bb06-6babd7d92cb5" (UID: "37da1735-1512-4015-bb06-6babd7d92cb5"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 13:54:47 crc kubenswrapper[4919]: I0109 13:54:47.494057 4919 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/37da1735-1512-4015-bb06-6babd7d92cb5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 09 13:54:47 crc kubenswrapper[4919]: I0109 13:54:47.494094 4919 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37da1735-1512-4015-bb06-6babd7d92cb5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 09 13:54:47 crc kubenswrapper[4919]: I0109 13:54:47.494107 4919 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37da1735-1512-4015-bb06-6babd7d92cb5-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 09 13:54:47 crc kubenswrapper[4919]: I0109 13:54:47.494115 4919 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/37da1735-1512-4015-bb06-6babd7d92cb5-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 09 13:54:47 crc kubenswrapper[4919]: I0109 13:54:47.494124 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcms7\" (UniqueName: \"kubernetes.io/projected/37da1735-1512-4015-bb06-6babd7d92cb5-kube-api-access-wcms7\") on node \"crc\" DevicePath \"\"" Jan 09 13:54:47 crc kubenswrapper[4919]: I0109 13:54:47.494133 4919 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37da1735-1512-4015-bb06-6babd7d92cb5-config\") on node \"crc\" DevicePath \"\"" Jan 09 13:54:47 crc kubenswrapper[4919]: I0109 13:54:47.494141 4919 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/37da1735-1512-4015-bb06-6babd7d92cb5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 09 13:54:48 crc kubenswrapper[4919]: I0109 13:54:48.032442 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8595b94875-glzm4" event={"ID":"37da1735-1512-4015-bb06-6babd7d92cb5","Type":"ContainerDied","Data":"0920516d110bfffac69666cf25f490deacae5776593a896560c506047ab8fe0d"} Jan 09 13:54:48 crc kubenswrapper[4919]: I0109 13:54:48.032733 4919 scope.go:117] "RemoveContainer" containerID="3a4bd57049a8d8b23d95883e02a974fb72af0eba92776e64644286a23ea930ce" Jan 09 13:54:48 crc kubenswrapper[4919]: I0109 13:54:48.032530 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8595b94875-glzm4" Jan 09 13:54:48 crc kubenswrapper[4919]: I0109 13:54:48.068818 4919 scope.go:117] "RemoveContainer" containerID="e0a491dd84ee409d104f0d1d23e7c62f8743283ee43bf09903f010695bbbb140" Jan 09 13:54:48 crc kubenswrapper[4919]: I0109 13:54:48.080750 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8595b94875-glzm4"] Jan 09 13:54:48 crc kubenswrapper[4919]: I0109 13:54:48.089823 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8595b94875-glzm4"] Jan 09 13:54:48 crc kubenswrapper[4919]: I0109 13:54:48.763119 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37da1735-1512-4015-bb06-6babd7d92cb5" path="/var/lib/kubelet/pods/37da1735-1512-4015-bb06-6babd7d92cb5/volumes" Jan 09 13:54:58 crc kubenswrapper[4919]: I0109 13:54:58.119948 4919 generic.go:334] "Generic (PLEG): container finished" podID="7239a87a-aba2-4367-b1c3-2800f1a130d8" containerID="2255ce71a969bd6f2c1d79582c3123cd2fc93dd96d4b35c929e7a60411705e75" exitCode=0 Jan 09 13:54:58 crc kubenswrapper[4919]: I0109 13:54:58.120034 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7239a87a-aba2-4367-b1c3-2800f1a130d8","Type":"ContainerDied","Data":"2255ce71a969bd6f2c1d79582c3123cd2fc93dd96d4b35c929e7a60411705e75"} Jan 09 13:54:58 crc kubenswrapper[4919]: I0109 13:54:58.124167 4919 generic.go:334] "Generic (PLEG): container finished" podID="196a3f64-983f-4369-93cf-9501a68ee8a4" containerID="44ed11b42f79dc8c0bb220e2de25ca656740a3d91733e4a58af2fbea576f02c0" exitCode=0 Jan 09 13:54:58 crc kubenswrapper[4919]: I0109 13:54:58.124204 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"196a3f64-983f-4369-93cf-9501a68ee8a4","Type":"ContainerDied","Data":"44ed11b42f79dc8c0bb220e2de25ca656740a3d91733e4a58af2fbea576f02c0"} Jan 09 13:54:59 crc kubenswrapper[4919]: I0109 13:54:59.138125 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"196a3f64-983f-4369-93cf-9501a68ee8a4","Type":"ContainerStarted","Data":"07828308cd1010bd12d076a82aef497ec6a4cc3f7ed66567d074f17f951a5150"} Jan 09 13:54:59 crc kubenswrapper[4919]: I0109 13:54:59.139822 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 09 13:54:59 crc kubenswrapper[4919]: I0109 13:54:59.140559 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7239a87a-aba2-4367-b1c3-2800f1a130d8","Type":"ContainerStarted","Data":"10bc8076ffffe5568ec60729f47099a7599bb4ec0855101900707748ba884419"} Jan 09 13:54:59 crc kubenswrapper[4919]: I0109 13:54:59.140806 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 09 13:54:59 crc kubenswrapper[4919]: I0109 13:54:59.170467 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.170446615 podStartE2EDuration="37.170446615s" podCreationTimestamp="2026-01-09 13:54:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:54:59.159525644 +0000 UTC m=+1478.707365104" watchObservedRunningTime="2026-01-09 13:54:59.170446615 +0000 UTC m=+1478.718286065" Jan 09 13:54:59 crc kubenswrapper[4919]: I0109 13:54:59.191177 4919 
Jan 09 13:54:59 crc kubenswrapper[4919]: I0109 13:54:59.191177 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.191157218 podStartE2EDuration="37.191157218s" podCreationTimestamp="2026-01-09 13:54:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 13:54:59.186230586 +0000 UTC m=+1478.734070056" watchObservedRunningTime="2026-01-09 13:54:59.191157218 +0000 UTC m=+1478.738996668"
Jan 09 13:54:59 crc kubenswrapper[4919]: I0109 13:54:59.271573 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qkcmj"]
Jan 09 13:54:59 crc kubenswrapper[4919]: E0109 13:54:59.272381 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c82eda79-e18a-4be5-a01f-8d2f8267a76b" containerName="extract-utilities"
Jan 09 13:54:59 crc kubenswrapper[4919]: I0109 13:54:59.272473 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="c82eda79-e18a-4be5-a01f-8d2f8267a76b" containerName="extract-utilities"
Jan 09 13:54:59 crc kubenswrapper[4919]: E0109 13:54:59.272556 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c82eda79-e18a-4be5-a01f-8d2f8267a76b" containerName="extract-content"
Jan 09 13:54:59 crc kubenswrapper[4919]: I0109 13:54:59.272615 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="c82eda79-e18a-4be5-a01f-8d2f8267a76b" containerName="extract-content"
Jan 09 13:54:59 crc kubenswrapper[4919]: E0109 13:54:59.272690 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e" containerName="dnsmasq-dns"
Jan 09 13:54:59 crc kubenswrapper[4919]: I0109 13:54:59.272760 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e" containerName="dnsmasq-dns"
Jan 09 13:54:59 crc kubenswrapper[4919]: E0109 13:54:59.272838 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e" containerName="init"
Jan 09 13:54:59 crc kubenswrapper[4919]: I0109 13:54:59.272903 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e" containerName="init"
Jan 09 13:54:59 crc kubenswrapper[4919]: E0109 13:54:59.272974 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37da1735-1512-4015-bb06-6babd7d92cb5" containerName="dnsmasq-dns"
Jan 09 13:54:59 crc kubenswrapper[4919]: I0109 13:54:59.273040 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="37da1735-1512-4015-bb06-6babd7d92cb5" containerName="dnsmasq-dns"
Jan 09 13:54:59 crc kubenswrapper[4919]: E0109 13:54:59.273113 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37da1735-1512-4015-bb06-6babd7d92cb5" containerName="init"
Jan 09 13:54:59 crc kubenswrapper[4919]: I0109 13:54:59.273181 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="37da1735-1512-4015-bb06-6babd7d92cb5" containerName="init"
Jan 09 13:54:59 crc kubenswrapper[4919]: E0109 13:54:59.273288 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c82eda79-e18a-4be5-a01f-8d2f8267a76b" containerName="registry-server"
Jan 09 13:54:59 crc kubenswrapper[4919]: I0109 13:54:59.277416 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="c82eda79-e18a-4be5-a01f-8d2f8267a76b" containerName="registry-server"
Jan 09 13:54:59 crc kubenswrapper[4919]: I0109 13:54:59.278073 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="c82eda79-e18a-4be5-a01f-8d2f8267a76b" containerName="registry-server"
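The E-level cpu_manager.go:410 RemoveStaleState lines above read like failures, but the adjacent "Deleted CPUSet assignment" and memory_manager entries show they are cleanup: state left behind by the just-deleted dnsmasq and redhat-operators containers is discarded while the new repo-setup pod is admitted. When triaging a capture like this it helps to see which sources actually dominate the error-level volume before chasing any single line; the sketch below counts E-lines per source file (in klog output the leading I/W/E marks info, warning, and error severity), with the usual placeholder path.

import re
import sys
from collections import Counter

# klog error lines look like "E0109 13:54:59.272381 4919 cpu_manager.go:410]".
ERR_RE = re.compile(r'\bE\d{4} \d{2}:\d{2}:\d{2}\.\d+\s+\d+ ([\w.]+):\d+\]')

counts = Counter()
with open(sys.argv[1] if len(sys.argv) > 1 else "kubelet.log") as fh:
    for line in fh:
        counts.update(ERR_RE.findall(line))

for src, n in counts.most_common():
    print(f"{n:6d}  {src}")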
podUID="c82eda79-e18a-4be5-a01f-8d2f8267a76b" containerName="registry-server" Jan 09 13:54:59 crc kubenswrapper[4919]: I0109 13:54:59.278182 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="37da1735-1512-4015-bb06-6babd7d92cb5" containerName="dnsmasq-dns" Jan 09 13:54:59 crc kubenswrapper[4919]: I0109 13:54:59.278268 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ddeed0e-3f48-4d92-84c5-d2f9535eeb0e" containerName="dnsmasq-dns" Jan 09 13:54:59 crc kubenswrapper[4919]: I0109 13:54:59.279201 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qkcmj" Jan 09 13:54:59 crc kubenswrapper[4919]: I0109 13:54:59.282363 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-69fb8" Jan 09 13:54:59 crc kubenswrapper[4919]: I0109 13:54:59.282740 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 13:54:59 crc kubenswrapper[4919]: I0109 13:54:59.283145 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 09 13:54:59 crc kubenswrapper[4919]: I0109 13:54:59.283430 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 09 13:54:59 crc kubenswrapper[4919]: I0109 13:54:59.283667 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qkcmj"] Jan 09 13:54:59 crc kubenswrapper[4919]: I0109 13:54:59.320859 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ff771e7-314f-493f-b5e8-fe2eb503aa52-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qkcmj\" (UID: \"6ff771e7-314f-493f-b5e8-fe2eb503aa52\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qkcmj" Jan 09 13:54:59 crc kubenswrapper[4919]: I0109 13:54:59.321000 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zznt7\" (UniqueName: \"kubernetes.io/projected/6ff771e7-314f-493f-b5e8-fe2eb503aa52-kube-api-access-zznt7\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qkcmj\" (UID: \"6ff771e7-314f-493f-b5e8-fe2eb503aa52\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qkcmj" Jan 09 13:54:59 crc kubenswrapper[4919]: I0109 13:54:59.321171 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ff771e7-314f-493f-b5e8-fe2eb503aa52-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qkcmj\" (UID: \"6ff771e7-314f-493f-b5e8-fe2eb503aa52\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qkcmj" Jan 09 13:54:59 crc kubenswrapper[4919]: I0109 13:54:59.321435 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6ff771e7-314f-493f-b5e8-fe2eb503aa52-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qkcmj\" (UID: \"6ff771e7-314f-493f-b5e8-fe2eb503aa52\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qkcmj" Jan 09 13:54:59 crc kubenswrapper[4919]: I0109 13:54:59.423731 4919 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zznt7\" (UniqueName: \"kubernetes.io/projected/6ff771e7-314f-493f-b5e8-fe2eb503aa52-kube-api-access-zznt7\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qkcmj\" (UID: \"6ff771e7-314f-493f-b5e8-fe2eb503aa52\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qkcmj" Jan 09 13:54:59 crc kubenswrapper[4919]: I0109 13:54:59.423798 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ff771e7-314f-493f-b5e8-fe2eb503aa52-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qkcmj\" (UID: \"6ff771e7-314f-493f-b5e8-fe2eb503aa52\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qkcmj" Jan 09 13:54:59 crc kubenswrapper[4919]: I0109 13:54:59.423865 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6ff771e7-314f-493f-b5e8-fe2eb503aa52-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qkcmj\" (UID: \"6ff771e7-314f-493f-b5e8-fe2eb503aa52\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qkcmj" Jan 09 13:54:59 crc kubenswrapper[4919]: I0109 13:54:59.423893 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ff771e7-314f-493f-b5e8-fe2eb503aa52-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qkcmj\" (UID: \"6ff771e7-314f-493f-b5e8-fe2eb503aa52\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qkcmj" Jan 09 13:54:59 crc kubenswrapper[4919]: I0109 13:54:59.432239 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ff771e7-314f-493f-b5e8-fe2eb503aa52-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qkcmj\" (UID: \"6ff771e7-314f-493f-b5e8-fe2eb503aa52\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qkcmj" Jan 09 13:54:59 crc kubenswrapper[4919]: I0109 13:54:59.432726 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6ff771e7-314f-493f-b5e8-fe2eb503aa52-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qkcmj\" (UID: \"6ff771e7-314f-493f-b5e8-fe2eb503aa52\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qkcmj" Jan 09 13:54:59 crc kubenswrapper[4919]: I0109 13:54:59.437519 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ff771e7-314f-493f-b5e8-fe2eb503aa52-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qkcmj\" (UID: \"6ff771e7-314f-493f-b5e8-fe2eb503aa52\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qkcmj" Jan 09 13:54:59 crc kubenswrapper[4919]: I0109 13:54:59.446148 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zznt7\" (UniqueName: \"kubernetes.io/projected/6ff771e7-314f-493f-b5e8-fe2eb503aa52-kube-api-access-zznt7\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qkcmj\" (UID: \"6ff771e7-314f-493f-b5e8-fe2eb503aa52\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qkcmj" Jan 09 13:54:59 crc kubenswrapper[4919]: I0109 
13:54:59.606759 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qkcmj"
Jan 09 13:55:00 crc kubenswrapper[4919]: I0109 13:55:00.166509 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qkcmj"]
Jan 09 13:55:01 crc kubenswrapper[4919]: I0109 13:55:01.162676 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qkcmj" event={"ID":"6ff771e7-314f-493f-b5e8-fe2eb503aa52","Type":"ContainerStarted","Data":"42c618b61b797903cde4ee7b3f09bdf3a12d7978a1cd2e0c94ffbb279f3ad31d"}
Jan 09 13:55:13 crc kubenswrapper[4919]: I0109 13:55:13.370114 4919 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="196a3f64-983f-4369-93cf-9501a68ee8a4" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.212:5671: connect: connection refused"
Jan 09 13:55:13 crc kubenswrapper[4919]: I0109 13:55:13.373474 4919 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="7239a87a-aba2-4367-b1c3-2800f1a130d8" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.211:5671: connect: connection refused"
Jan 09 13:55:14 crc kubenswrapper[4919]: I0109 13:55:14.301509 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qkcmj" event={"ID":"6ff771e7-314f-493f-b5e8-fe2eb503aa52","Type":"ContainerStarted","Data":"7b822d3d46387f5988631574e095fb563ca5b49111b5fc4e1d9cabec28072a0c"}
Jan 09 13:55:14 crc kubenswrapper[4919]: I0109 13:55:14.330253 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qkcmj" podStartSLOduration=2.113225708 podStartE2EDuration="15.330216974s" podCreationTimestamp="2026-01-09 13:54:59 +0000 UTC" firstStartedPulling="2026-01-09 13:55:00.175325791 +0000 UTC m=+1479.723165241" lastFinishedPulling="2026-01-09 13:55:13.392317057 +0000 UTC m=+1492.940156507" observedRunningTime="2026-01-09 13:55:14.319536609 +0000 UTC m=+1493.867376069" watchObservedRunningTime="2026-01-09 13:55:14.330216974 +0000 UTC m=+1493.878056414"
Jan 09 13:55:23 crc kubenswrapper[4919]: I0109 13:55:23.369448 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Jan 09 13:55:23 crc kubenswrapper[4919]: I0109 13:55:23.373368 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Jan 09 13:55:25 crc kubenswrapper[4919]: I0109 13:55:25.470755 4919 generic.go:334] "Generic (PLEG): container finished" podID="6ff771e7-314f-493f-b5e8-fe2eb503aa52" containerID="7b822d3d46387f5988631574e095fb563ca5b49111b5fc4e1d9cabec28072a0c" exitCode=0
Jan 09 13:55:25 crc kubenswrapper[4919]: I0109 13:55:25.470825 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qkcmj" event={"ID":"6ff771e7-314f-493f-b5e8-fe2eb503aa52","Type":"ContainerDied","Data":"7b822d3d46387f5988631574e095fb563ca5b49111b5fc4e1d9cabec28072a0c"}
Jan 09 13:55:26 crc kubenswrapper[4919]: I0109 13:55:26.891487 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qkcmj"
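Both rabbitmq readiness probes fail at 13:55:13 with connection refused and both pods report status="ready" at 13:55:23, so the servers spent roughly ten seconds finishing startup after their containers were already running. Pairing each failure with the next ready transition for the same pod yields that recovery time directly. The sketch below splits entries on the "Jan 09 " date prefix used throughout this capture, since several entries can share a physical line, and pins an arbitrary year because klog timestamps carry none; the input path is a placeholder as before.

import re
import sys
from datetime import datetime

STAMP = re.compile(r'[IEW](\d{4} \d{2}:\d{2}:\d{2}\.\d+)')
FAIL = re.compile(r'"Probe failed" probeType="Readiness" pod="([^"]+)"')
READY = re.compile(r'probe="readiness" status="ready" pod="([^"]+)"')

def ts(entry):
    m = STAMP.search(entry)
    # klog stamps carry no year; prepend one so strptime can do arithmetic.
    return datetime.strptime("2000" + m.group(1), "%Y%m%d %H:%M:%S.%f") if m else None

failing = {}
with open(sys.argv[1] if len(sys.argv) > 1 else "kubelet.log") as fh:
    for entry in fh.read().split("Jan 09 "):
        when = ts(entry)
        if when is None:
            continue
        m = FAIL.search(entry)
        if m:
            failing.setdefault(m.group(1), when)
        m = READY.search(entry)
        if m and m.group(1) in failing:
            gone = when - failing.pop(m.group(1))
            print(f"{m.group(1)}: ready again after {gone.total_seconds():.0f}s")

On the two probe failures above this prints about 10s for each rabbitmq pod.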
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qkcmj" Jan 09 13:55:27 crc kubenswrapper[4919]: I0109 13:55:27.057839 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ff771e7-314f-493f-b5e8-fe2eb503aa52-repo-setup-combined-ca-bundle\") pod \"6ff771e7-314f-493f-b5e8-fe2eb503aa52\" (UID: \"6ff771e7-314f-493f-b5e8-fe2eb503aa52\") " Jan 09 13:55:27 crc kubenswrapper[4919]: I0109 13:55:27.058010 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6ff771e7-314f-493f-b5e8-fe2eb503aa52-ssh-key-openstack-edpm-ipam\") pod \"6ff771e7-314f-493f-b5e8-fe2eb503aa52\" (UID: \"6ff771e7-314f-493f-b5e8-fe2eb503aa52\") " Jan 09 13:55:27 crc kubenswrapper[4919]: I0109 13:55:27.058048 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zznt7\" (UniqueName: \"kubernetes.io/projected/6ff771e7-314f-493f-b5e8-fe2eb503aa52-kube-api-access-zznt7\") pod \"6ff771e7-314f-493f-b5e8-fe2eb503aa52\" (UID: \"6ff771e7-314f-493f-b5e8-fe2eb503aa52\") " Jan 09 13:55:27 crc kubenswrapper[4919]: I0109 13:55:27.058118 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ff771e7-314f-493f-b5e8-fe2eb503aa52-inventory\") pod \"6ff771e7-314f-493f-b5e8-fe2eb503aa52\" (UID: \"6ff771e7-314f-493f-b5e8-fe2eb503aa52\") " Jan 09 13:55:27 crc kubenswrapper[4919]: I0109 13:55:27.064512 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ff771e7-314f-493f-b5e8-fe2eb503aa52-kube-api-access-zznt7" (OuterVolumeSpecName: "kube-api-access-zznt7") pod "6ff771e7-314f-493f-b5e8-fe2eb503aa52" (UID: "6ff771e7-314f-493f-b5e8-fe2eb503aa52"). InnerVolumeSpecName "kube-api-access-zznt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:55:27 crc kubenswrapper[4919]: I0109 13:55:27.065513 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ff771e7-314f-493f-b5e8-fe2eb503aa52-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "6ff771e7-314f-493f-b5e8-fe2eb503aa52" (UID: "6ff771e7-314f-493f-b5e8-fe2eb503aa52"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:55:27 crc kubenswrapper[4919]: I0109 13:55:27.090620 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ff771e7-314f-493f-b5e8-fe2eb503aa52-inventory" (OuterVolumeSpecName: "inventory") pod "6ff771e7-314f-493f-b5e8-fe2eb503aa52" (UID: "6ff771e7-314f-493f-b5e8-fe2eb503aa52"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:55:27 crc kubenswrapper[4919]: I0109 13:55:27.091003 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ff771e7-314f-493f-b5e8-fe2eb503aa52-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6ff771e7-314f-493f-b5e8-fe2eb503aa52" (UID: "6ff771e7-314f-493f-b5e8-fe2eb503aa52"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:55:27 crc kubenswrapper[4919]: I0109 13:55:27.160827 4919 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ff771e7-314f-493f-b5e8-fe2eb503aa52-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 13:55:27 crc kubenswrapper[4919]: I0109 13:55:27.160881 4919 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6ff771e7-314f-493f-b5e8-fe2eb503aa52-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 09 13:55:27 crc kubenswrapper[4919]: I0109 13:55:27.160892 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zznt7\" (UniqueName: \"kubernetes.io/projected/6ff771e7-314f-493f-b5e8-fe2eb503aa52-kube-api-access-zznt7\") on node \"crc\" DevicePath \"\"" Jan 09 13:55:27 crc kubenswrapper[4919]: I0109 13:55:27.160902 4919 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ff771e7-314f-493f-b5e8-fe2eb503aa52-inventory\") on node \"crc\" DevicePath \"\"" Jan 09 13:55:27 crc kubenswrapper[4919]: I0109 13:55:27.512168 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qkcmj" event={"ID":"6ff771e7-314f-493f-b5e8-fe2eb503aa52","Type":"ContainerDied","Data":"42c618b61b797903cde4ee7b3f09bdf3a12d7978a1cd2e0c94ffbb279f3ad31d"} Jan 09 13:55:27 crc kubenswrapper[4919]: I0109 13:55:27.512252 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42c618b61b797903cde4ee7b3f09bdf3a12d7978a1cd2e0c94ffbb279f3ad31d" Jan 09 13:55:27 crc kubenswrapper[4919]: I0109 13:55:27.512327 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qkcmj" Jan 09 13:55:27 crc kubenswrapper[4919]: I0109 13:55:27.573377 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-ghk4m"] Jan 09 13:55:27 crc kubenswrapper[4919]: E0109 13:55:27.574000 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ff771e7-314f-493f-b5e8-fe2eb503aa52" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 09 13:55:27 crc kubenswrapper[4919]: I0109 13:55:27.574027 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ff771e7-314f-493f-b5e8-fe2eb503aa52" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 09 13:55:27 crc kubenswrapper[4919]: I0109 13:55:27.574352 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ff771e7-314f-493f-b5e8-fe2eb503aa52" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 09 13:55:27 crc kubenswrapper[4919]: I0109 13:55:27.575274 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ghk4m" Jan 09 13:55:27 crc kubenswrapper[4919]: I0109 13:55:27.578246 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 09 13:55:27 crc kubenswrapper[4919]: I0109 13:55:27.578421 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 13:55:27 crc kubenswrapper[4919]: I0109 13:55:27.578537 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 09 13:55:27 crc kubenswrapper[4919]: I0109 13:55:27.579541 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-69fb8" Jan 09 13:55:27 crc kubenswrapper[4919]: I0109 13:55:27.583505 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-ghk4m"] Jan 09 13:55:27 crc kubenswrapper[4919]: I0109 13:55:27.672981 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djw4r\" (UniqueName: \"kubernetes.io/projected/167890d2-4e03-4537-a339-d4efc3b64c54-kube-api-access-djw4r\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-ghk4m\" (UID: \"167890d2-4e03-4537-a339-d4efc3b64c54\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ghk4m" Jan 09 13:55:27 crc kubenswrapper[4919]: I0109 13:55:27.673237 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/167890d2-4e03-4537-a339-d4efc3b64c54-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-ghk4m\" (UID: \"167890d2-4e03-4537-a339-d4efc3b64c54\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ghk4m" Jan 09 13:55:27 crc kubenswrapper[4919]: I0109 13:55:27.673327 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/167890d2-4e03-4537-a339-d4efc3b64c54-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-ghk4m\" (UID: \"167890d2-4e03-4537-a339-d4efc3b64c54\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ghk4m" Jan 09 13:55:27 crc kubenswrapper[4919]: I0109 13:55:27.775649 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/167890d2-4e03-4537-a339-d4efc3b64c54-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-ghk4m\" (UID: \"167890d2-4e03-4537-a339-d4efc3b64c54\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ghk4m" Jan 09 13:55:27 crc kubenswrapper[4919]: I0109 13:55:27.775759 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/167890d2-4e03-4537-a339-d4efc3b64c54-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-ghk4m\" (UID: \"167890d2-4e03-4537-a339-d4efc3b64c54\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ghk4m" Jan 09 13:55:27 crc kubenswrapper[4919]: I0109 13:55:27.775817 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djw4r\" (UniqueName: \"kubernetes.io/projected/167890d2-4e03-4537-a339-d4efc3b64c54-kube-api-access-djw4r\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-ghk4m\" (UID: \"167890d2-4e03-4537-a339-d4efc3b64c54\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ghk4m" Jan 09 13:55:27 crc kubenswrapper[4919]: I0109 13:55:27.784478 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/167890d2-4e03-4537-a339-d4efc3b64c54-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-ghk4m\" (UID: \"167890d2-4e03-4537-a339-d4efc3b64c54\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ghk4m" Jan 09 13:55:27 crc kubenswrapper[4919]: I0109 13:55:27.787868 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/167890d2-4e03-4537-a339-d4efc3b64c54-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-ghk4m\" (UID: \"167890d2-4e03-4537-a339-d4efc3b64c54\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ghk4m" Jan 09 13:55:27 crc kubenswrapper[4919]: I0109 13:55:27.793498 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djw4r\" (UniqueName: \"kubernetes.io/projected/167890d2-4e03-4537-a339-d4efc3b64c54-kube-api-access-djw4r\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-ghk4m\" (UID: \"167890d2-4e03-4537-a339-d4efc3b64c54\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ghk4m" Jan 09 13:55:27 crc kubenswrapper[4919]: I0109 13:55:27.897920 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ghk4m" Jan 09 13:55:28 crc kubenswrapper[4919]: I0109 13:55:28.397754 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-ghk4m"] Jan 09 13:55:28 crc kubenswrapper[4919]: I0109 13:55:28.521533 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ghk4m" event={"ID":"167890d2-4e03-4537-a339-d4efc3b64c54","Type":"ContainerStarted","Data":"8143ee5cb1877b54b1651003ad4e6dbd592feb2c1c8fb31cb5cf5b0e5cf37fd9"} Jan 09 13:55:29 crc kubenswrapper[4919]: I0109 13:55:29.533384 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ghk4m" event={"ID":"167890d2-4e03-4537-a339-d4efc3b64c54","Type":"ContainerStarted","Data":"e736890063a8e448500cb97c2b779245b511986894ae8ca5d675fdbdd1f1fd83"} Jan 09 13:55:29 crc kubenswrapper[4919]: I0109 13:55:29.563709 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ghk4m" podStartSLOduration=1.8293169740000002 podStartE2EDuration="2.563676128s" podCreationTimestamp="2026-01-09 13:55:27 +0000 UTC" firstStartedPulling="2026-01-09 13:55:28.404764675 +0000 UTC m=+1507.952604135" lastFinishedPulling="2026-01-09 13:55:29.139123839 +0000 UTC m=+1508.686963289" observedRunningTime="2026-01-09 13:55:29.553175768 +0000 UTC m=+1509.101015218" watchObservedRunningTime="2026-01-09 13:55:29.563676128 +0000 UTC m=+1509.111515588" Jan 09 13:55:32 crc kubenswrapper[4919]: I0109 13:55:32.562472 4919 generic.go:334] "Generic (PLEG): container finished" podID="167890d2-4e03-4537-a339-d4efc3b64c54" containerID="e736890063a8e448500cb97c2b779245b511986894ae8ca5d675fdbdd1f1fd83" exitCode=0 Jan 09 13:55:32 crc kubenswrapper[4919]: I0109 13:55:32.562540 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ghk4m" event={"ID":"167890d2-4e03-4537-a339-d4efc3b64c54","Type":"ContainerDied","Data":"e736890063a8e448500cb97c2b779245b511986894ae8ca5d675fdbdd1f1fd83"} Jan 09 13:55:33 crc kubenswrapper[4919]: I0109 13:55:33.999238 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ghk4m" Jan 09 13:55:34 crc kubenswrapper[4919]: I0109 13:55:34.101739 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djw4r\" (UniqueName: \"kubernetes.io/projected/167890d2-4e03-4537-a339-d4efc3b64c54-kube-api-access-djw4r\") pod \"167890d2-4e03-4537-a339-d4efc3b64c54\" (UID: \"167890d2-4e03-4537-a339-d4efc3b64c54\") " Jan 09 13:55:34 crc kubenswrapper[4919]: I0109 13:55:34.111075 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/167890d2-4e03-4537-a339-d4efc3b64c54-kube-api-access-djw4r" (OuterVolumeSpecName: "kube-api-access-djw4r") pod "167890d2-4e03-4537-a339-d4efc3b64c54" (UID: "167890d2-4e03-4537-a339-d4efc3b64c54"). InnerVolumeSpecName "kube-api-access-djw4r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:55:34 crc kubenswrapper[4919]: I0109 13:55:34.204196 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/167890d2-4e03-4537-a339-d4efc3b64c54-ssh-key-openstack-edpm-ipam\") pod \"167890d2-4e03-4537-a339-d4efc3b64c54\" (UID: \"167890d2-4e03-4537-a339-d4efc3b64c54\") " Jan 09 13:55:34 crc kubenswrapper[4919]: I0109 13:55:34.204383 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/167890d2-4e03-4537-a339-d4efc3b64c54-inventory\") pod \"167890d2-4e03-4537-a339-d4efc3b64c54\" (UID: \"167890d2-4e03-4537-a339-d4efc3b64c54\") " Jan 09 13:55:34 crc kubenswrapper[4919]: I0109 13:55:34.204956 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djw4r\" (UniqueName: \"kubernetes.io/projected/167890d2-4e03-4537-a339-d4efc3b64c54-kube-api-access-djw4r\") on node \"crc\" DevicePath \"\"" Jan 09 13:55:34 crc kubenswrapper[4919]: I0109 13:55:34.229495 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/167890d2-4e03-4537-a339-d4efc3b64c54-inventory" (OuterVolumeSpecName: "inventory") pod "167890d2-4e03-4537-a339-d4efc3b64c54" (UID: "167890d2-4e03-4537-a339-d4efc3b64c54"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:55:34 crc kubenswrapper[4919]: I0109 13:55:34.235548 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/167890d2-4e03-4537-a339-d4efc3b64c54-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "167890d2-4e03-4537-a339-d4efc3b64c54" (UID: "167890d2-4e03-4537-a339-d4efc3b64c54"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:55:34 crc kubenswrapper[4919]: I0109 13:55:34.307239 4919 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/167890d2-4e03-4537-a339-d4efc3b64c54-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 09 13:55:34 crc kubenswrapper[4919]: I0109 13:55:34.307289 4919 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/167890d2-4e03-4537-a339-d4efc3b64c54-inventory\") on node \"crc\" DevicePath \"\"" Jan 09 13:55:34 crc kubenswrapper[4919]: I0109 13:55:34.583165 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ghk4m" event={"ID":"167890d2-4e03-4537-a339-d4efc3b64c54","Type":"ContainerDied","Data":"8143ee5cb1877b54b1651003ad4e6dbd592feb2c1c8fb31cb5cf5b0e5cf37fd9"} Jan 09 13:55:34 crc kubenswrapper[4919]: I0109 13:55:34.583250 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ghk4m" Jan 09 13:55:34 crc kubenswrapper[4919]: I0109 13:55:34.583321 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8143ee5cb1877b54b1651003ad4e6dbd592feb2c1c8fb31cb5cf5b0e5cf37fd9" Jan 09 13:55:34 crc kubenswrapper[4919]: I0109 13:55:34.660142 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k54b2"] Jan 09 13:55:34 crc kubenswrapper[4919]: E0109 13:55:34.660678 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="167890d2-4e03-4537-a339-d4efc3b64c54" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 09 13:55:34 crc kubenswrapper[4919]: I0109 13:55:34.660701 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="167890d2-4e03-4537-a339-d4efc3b64c54" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 09 13:55:34 crc kubenswrapper[4919]: I0109 13:55:34.660958 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="167890d2-4e03-4537-a339-d4efc3b64c54" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 09 13:55:34 crc kubenswrapper[4919]: I0109 13:55:34.661807 4919 util.go:30] "No sandbox for pod can be found. 
Jan 09 13:55:34 crc kubenswrapper[4919]: I0109 13:55:34.661807 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k54b2"
Jan 09 13:55:34 crc kubenswrapper[4919]: I0109 13:55:34.666235 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 09 13:55:34 crc kubenswrapper[4919]: I0109 13:55:34.666324 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 09 13:55:34 crc kubenswrapper[4919]: I0109 13:55:34.666425 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-69fb8"
Jan 09 13:55:34 crc kubenswrapper[4919]: I0109 13:55:34.666567 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 09 13:55:34 crc kubenswrapper[4919]: I0109 13:55:34.676401 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k54b2"]
Jan 09 13:55:34 crc kubenswrapper[4919]: I0109 13:55:34.714292 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v42m\" (UniqueName: \"kubernetes.io/projected/2e1540e3-6358-48ae-ac2a-08e90ab54cbb-kube-api-access-9v42m\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k54b2\" (UID: \"2e1540e3-6358-48ae-ac2a-08e90ab54cbb\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k54b2"
Jan 09 13:55:34 crc kubenswrapper[4919]: I0109 13:55:34.714584 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2e1540e3-6358-48ae-ac2a-08e90ab54cbb-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k54b2\" (UID: \"2e1540e3-6358-48ae-ac2a-08e90ab54cbb\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k54b2"
Jan 09 13:55:34 crc kubenswrapper[4919]: I0109 13:55:34.714799 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e1540e3-6358-48ae-ac2a-08e90ab54cbb-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k54b2\" (UID: \"2e1540e3-6358-48ae-ac2a-08e90ab54cbb\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k54b2"
Jan 09 13:55:34 crc kubenswrapper[4919]: I0109 13:55:34.714932 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2e1540e3-6358-48ae-ac2a-08e90ab54cbb-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k54b2\" (UID: \"2e1540e3-6358-48ae-ac2a-08e90ab54cbb\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k54b2"
Jan 09 13:55:34 crc kubenswrapper[4919]: I0109 13:55:34.816952 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e1540e3-6358-48ae-ac2a-08e90ab54cbb-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k54b2\" (UID: \"2e1540e3-6358-48ae-ac2a-08e90ab54cbb\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k54b2"
Jan 09 13:55:34 crc kubenswrapper[4919]: I0109 13:55:34.817036 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2e1540e3-6358-48ae-ac2a-08e90ab54cbb-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k54b2\" (UID: \"2e1540e3-6358-48ae-ac2a-08e90ab54cbb\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k54b2"
Jan 09 13:55:34 crc kubenswrapper[4919]: I0109 13:55:34.817128 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9v42m\" (UniqueName: \"kubernetes.io/projected/2e1540e3-6358-48ae-ac2a-08e90ab54cbb-kube-api-access-9v42m\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k54b2\" (UID: \"2e1540e3-6358-48ae-ac2a-08e90ab54cbb\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k54b2"
Jan 09 13:55:34 crc kubenswrapper[4919]: I0109 13:55:34.817346 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2e1540e3-6358-48ae-ac2a-08e90ab54cbb-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k54b2\" (UID: \"2e1540e3-6358-48ae-ac2a-08e90ab54cbb\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k54b2"
Jan 09 13:55:34 crc kubenswrapper[4919]: I0109 13:55:34.821940 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2e1540e3-6358-48ae-ac2a-08e90ab54cbb-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k54b2\" (UID: \"2e1540e3-6358-48ae-ac2a-08e90ab54cbb\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k54b2"
Jan 09 13:55:34 crc kubenswrapper[4919]: I0109 13:55:34.822372 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2e1540e3-6358-48ae-ac2a-08e90ab54cbb-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k54b2\" (UID: \"2e1540e3-6358-48ae-ac2a-08e90ab54cbb\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k54b2"
Jan 09 13:55:34 crc kubenswrapper[4919]: I0109 13:55:34.822991 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e1540e3-6358-48ae-ac2a-08e90ab54cbb-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k54b2\" (UID: \"2e1540e3-6358-48ae-ac2a-08e90ab54cbb\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k54b2"
Jan 09 13:55:34 crc kubenswrapper[4919]: I0109 13:55:34.835048 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9v42m\" (UniqueName: \"kubernetes.io/projected/2e1540e3-6358-48ae-ac2a-08e90ab54cbb-kube-api-access-9v42m\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k54b2\" (UID: \"2e1540e3-6358-48ae-ac2a-08e90ab54cbb\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k54b2"
Jan 09 13:55:34 crc kubenswrapper[4919]: I0109 13:55:34.980959 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k54b2"
Jan 09 13:55:35 crc kubenswrapper[4919]: I0109 13:55:35.508710 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k54b2"]
Jan 09 13:55:35 crc kubenswrapper[4919]: I0109 13:55:35.594887 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k54b2" event={"ID":"2e1540e3-6358-48ae-ac2a-08e90ab54cbb","Type":"ContainerStarted","Data":"dcced499fcee0b8b850ee452f8c82d62ebb2272084b3b2f4a558ace01ac9e16c"}
Jan 09 13:55:36 crc kubenswrapper[4919]: I0109 13:55:36.604894 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k54b2" event={"ID":"2e1540e3-6358-48ae-ac2a-08e90ab54cbb","Type":"ContainerStarted","Data":"045d503fca7670ba567a9b9a8ed7906ed35c99e674e5380e13ffc8eaff2f24b0"}
Jan 09 13:55:40 crc kubenswrapper[4919]: I0109 13:55:40.962472 4919 scope.go:117] "RemoveContainer" containerID="aeb591faed3e8b6661b2747d68f4d7f79c02dfcd1a7759b2fe97932028f3c862"
Jan 09 13:55:40 crc kubenswrapper[4919]: I0109 13:55:40.991433 4919 scope.go:117] "RemoveContainer" containerID="589d5a36f7cf41ba69a03c03f167fb5b087bd8d2e6a305c6bf38d6413aeba7b7"
Jan 09 13:55:41 crc kubenswrapper[4919]: I0109 13:55:41.040561 4919 scope.go:117] "RemoveContainer" containerID="222f92d12f874e3171295a1be715ff54bd117d9c257390ea33e6a0a69878ed79"
Jan 09 13:55:41 crc kubenswrapper[4919]: I0109 13:55:41.080850 4919 scope.go:117] "RemoveContainer" containerID="ebc3eab3b4b440ac2f45579817c76128a2c10aaf7855792351571edea4bc8f19"
Jan 09 13:55:41 crc kubenswrapper[4919]: I0109 13:55:41.115443 4919 scope.go:117] "RemoveContainer" containerID="8e62e07d8838b74d93dec6dfc9405411a380785641ee444eba17e70aa209a104"
Jan 09 13:55:51 crc kubenswrapper[4919]: I0109 13:55:51.247545 4919 patch_prober.go:28] interesting pod/machine-config-daemon-9m5lv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 09 13:55:51 crc kubenswrapper[4919]: I0109 13:55:51.248110 4919 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 09 13:56:21 crc kubenswrapper[4919]: I0109 13:56:21.247193 4919 patch_prober.go:28] interesting pod/machine-config-daemon-9m5lv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 09 13:56:21 crc kubenswrapper[4919]: I0109 13:56:21.247817 4919 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 09 13:56:41 crc kubenswrapper[4919]: I0109 13:56:41.271794 4919 scope.go:117] "RemoveContainer" containerID="a974e1b0b8b35d347a49d6613ab474f56e61bc73e57bd8b5aa30ed40ee5a2991"
Jan 09 13:56:51 crc kubenswrapper[4919]: I0109 13:56:51.247194 4919 patch_prober.go:28] interesting pod/machine-config-daemon-9m5lv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 09 13:56:51 crc kubenswrapper[4919]: I0109 13:56:51.247740 4919 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 09 13:56:51 crc kubenswrapper[4919]: I0109 13:56:51.247791 4919 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv"
Jan 09 13:56:51 crc kubenswrapper[4919]: I0109 13:56:51.248596 4919 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"97d6f4470740f89d0ab801450db4dd244e16b55cda5a62a0b5edd2cd276ca373"} pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 09 13:56:51 crc kubenswrapper[4919]: I0109 13:56:51.248665 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" containerID="cri-o://97d6f4470740f89d0ab801450db4dd244e16b55cda5a62a0b5edd2cd276ca373" gracePeriod=600
Jan 09 13:56:51 crc kubenswrapper[4919]: E0109 13:56:51.370796 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c"
Jan 09 13:56:52 crc kubenswrapper[4919]: I0109 13:56:52.311144 4919 generic.go:334] "Generic (PLEG): container finished" podID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerID="97d6f4470740f89d0ab801450db4dd244e16b55cda5a62a0b5edd2cd276ca373" exitCode=0
Jan 09 13:56:52 crc kubenswrapper[4919]: I0109 13:56:52.311292 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" event={"ID":"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c","Type":"ContainerDied","Data":"97d6f4470740f89d0ab801450db4dd244e16b55cda5a62a0b5edd2cd276ca373"}
Jan 09 13:56:52 crc kubenswrapper[4919]: I0109 13:56:52.311608 4919 scope.go:117] "RemoveContainer" containerID="af3cae1993f8443bd098aec195067f6b6771b2ac3e2a3073412d7f8ae6da618e"
Jan 09 13:56:52 crc kubenswrapper[4919]: I0109 13:56:52.312558 4919 scope.go:117] "RemoveContainer" containerID="97d6f4470740f89d0ab801450db4dd244e16b55cda5a62a0b5edd2cd276ca373"
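For context on the repeated "back-off 5m0s" above: kubelet restarts crash-looping containers on an exponential back-off that, by default, starts at 10s, doubles per restart, caps at 5m, and resets after the container runs cleanly for 10 minutes. This log only shows the cap, so treat the constants below as assumed defaults, not something the log proves. A sketch of that schedule:

    # Assumed kubelet defaults: 10s initial back-off, doubling, 5m cap.
    # The log above only ever shows the cap ("back-off 5m0s").
    def crashloop_backoff(restart_count, base=10.0, cap=300.0):
        """Seconds kubelet waits before the Nth restart (0-indexed)."""
        return min(base * (2 ** restart_count), cap)

    print([crashloop_backoff(n) for n in range(7)])
    # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0]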
Jan 09 13:56:52 crc kubenswrapper[4919]: E0109 13:56:52.312985 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c"
Jan 09 13:56:52 crc kubenswrapper[4919]: I0109 13:56:52.331327 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k54b2" podStartSLOduration=77.821365632 podStartE2EDuration="1m18.331305415s" podCreationTimestamp="2026-01-09 13:55:34 +0000 UTC" firstStartedPulling="2026-01-09 13:55:35.527629922 +0000 UTC m=+1515.075469372" lastFinishedPulling="2026-01-09 13:55:36.037569705 +0000 UTC m=+1515.585409155" observedRunningTime="2026-01-09 13:55:36.629122512 +0000 UTC m=+1516.176961972" watchObservedRunningTime="2026-01-09 13:56:52.331305415 +0000 UTC m=+1591.879144885"
Jan 09 13:56:52 crc kubenswrapper[4919]: I0109 13:56:52.853044 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-q5drm"]
Jan 09 13:56:52 crc kubenswrapper[4919]: I0109 13:56:52.855062 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q5drm"
Jan 09 13:56:52 crc kubenswrapper[4919]: I0109 13:56:52.864253 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q5drm"]
Jan 09 13:56:53 crc kubenswrapper[4919]: I0109 13:56:53.020968 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cae1e993-adc2-4d9c-abad-84ab5f41f98b-catalog-content\") pod \"certified-operators-q5drm\" (UID: \"cae1e993-adc2-4d9c-abad-84ab5f41f98b\") " pod="openshift-marketplace/certified-operators-q5drm"
Jan 09 13:56:53 crc kubenswrapper[4919]: I0109 13:56:53.021198 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm6zz\" (UniqueName: \"kubernetes.io/projected/cae1e993-adc2-4d9c-abad-84ab5f41f98b-kube-api-access-pm6zz\") pod \"certified-operators-q5drm\" (UID: \"cae1e993-adc2-4d9c-abad-84ab5f41f98b\") " pod="openshift-marketplace/certified-operators-q5drm"
Jan 09 13:56:53 crc kubenswrapper[4919]: I0109 13:56:53.021261 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cae1e993-adc2-4d9c-abad-84ab5f41f98b-utilities\") pod \"certified-operators-q5drm\" (UID: \"cae1e993-adc2-4d9c-abad-84ab5f41f98b\") " pod="openshift-marketplace/certified-operators-q5drm"
Jan 09 13:56:53 crc kubenswrapper[4919]: I0109 13:56:53.123389 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cae1e993-adc2-4d9c-abad-84ab5f41f98b-catalog-content\") pod \"certified-operators-q5drm\" (UID: \"cae1e993-adc2-4d9c-abad-84ab5f41f98b\") " pod="openshift-marketplace/certified-operators-q5drm"
Jan 09 13:56:53 crc kubenswrapper[4919]: I0109 13:56:53.123547 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm6zz\" (UniqueName: \"kubernetes.io/projected/cae1e993-adc2-4d9c-abad-84ab5f41f98b-kube-api-access-pm6zz\") pod \"certified-operators-q5drm\" (UID: \"cae1e993-adc2-4d9c-abad-84ab5f41f98b\") " pod="openshift-marketplace/certified-operators-q5drm"
Jan 09 13:56:53 crc kubenswrapper[4919]: I0109 13:56:53.123582 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cae1e993-adc2-4d9c-abad-84ab5f41f98b-utilities\") pod \"certified-operators-q5drm\" (UID: \"cae1e993-adc2-4d9c-abad-84ab5f41f98b\") " pod="openshift-marketplace/certified-operators-q5drm"
Jan 09 13:56:53 crc kubenswrapper[4919]: I0109 13:56:53.124049 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cae1e993-adc2-4d9c-abad-84ab5f41f98b-catalog-content\") pod \"certified-operators-q5drm\" (UID: \"cae1e993-adc2-4d9c-abad-84ab5f41f98b\") " pod="openshift-marketplace/certified-operators-q5drm"
Jan 09 13:56:53 crc kubenswrapper[4919]: I0109 13:56:53.124101 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cae1e993-adc2-4d9c-abad-84ab5f41f98b-utilities\") pod \"certified-operators-q5drm\" (UID: \"cae1e993-adc2-4d9c-abad-84ab5f41f98b\") " pod="openshift-marketplace/certified-operators-q5drm"
Jan 09 13:56:53 crc kubenswrapper[4919]: I0109 13:56:53.146745 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm6zz\" (UniqueName: \"kubernetes.io/projected/cae1e993-adc2-4d9c-abad-84ab5f41f98b-kube-api-access-pm6zz\") pod \"certified-operators-q5drm\" (UID: \"cae1e993-adc2-4d9c-abad-84ab5f41f98b\") " pod="openshift-marketplace/certified-operators-q5drm"
Jan 09 13:56:53 crc kubenswrapper[4919]: I0109 13:56:53.175060 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q5drm"
Jan 09 13:56:53 crc kubenswrapper[4919]: I0109 13:56:53.677157 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q5drm"]
Jan 09 13:56:54 crc kubenswrapper[4919]: I0109 13:56:54.364976 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q5drm" event={"ID":"cae1e993-adc2-4d9c-abad-84ab5f41f98b","Type":"ContainerDied","Data":"627a8fb06a34215f1dadde4f0886acbc69cbee6c4fdaeb8ec912e9fea22582c4"}
Jan 09 13:56:54 crc kubenswrapper[4919]: I0109 13:56:54.364764 4919 generic.go:334] "Generic (PLEG): container finished" podID="cae1e993-adc2-4d9c-abad-84ab5f41f98b" containerID="627a8fb06a34215f1dadde4f0886acbc69cbee6c4fdaeb8ec912e9fea22582c4" exitCode=0
Jan 09 13:56:54 crc kubenswrapper[4919]: I0109 13:56:54.366511 4919 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 09 13:56:54 crc kubenswrapper[4919]: I0109 13:56:54.366236 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q5drm" event={"ID":"cae1e993-adc2-4d9c-abad-84ab5f41f98b","Type":"ContainerStarted","Data":"2f3eb877495ea643d3e462c09fa5e4a3c0d2b5d718b43c491532e73b1988755a"}
Jan 09 13:56:56 crc kubenswrapper[4919]: I0109 13:56:56.393277 4919 generic.go:334] "Generic (PLEG): container finished" podID="cae1e993-adc2-4d9c-abad-84ab5f41f98b" containerID="6e0f9279898decc685ab38244f7be1df02fb16f6efa33382ef317559297c720c" exitCode=0
Jan 09 13:56:56 crc kubenswrapper[4919]: I0109 13:56:56.393376 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q5drm" event={"ID":"cae1e993-adc2-4d9c-abad-84ab5f41f98b","Type":"ContainerDied","Data":"6e0f9279898decc685ab38244f7be1df02fb16f6efa33382ef317559297c720c"}
Jan 09 13:56:59 crc kubenswrapper[4919]: I0109 13:56:59.422821 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q5drm" event={"ID":"cae1e993-adc2-4d9c-abad-84ab5f41f98b","Type":"ContainerStarted","Data":"005d4ea544517843af0aab73e4af2c40eabbe0253e0c5814a62c2f1dff417817"}
Jan 09 13:56:59 crc kubenswrapper[4919]: I0109 13:56:59.441007 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-q5drm" podStartSLOduration=3.601999401 podStartE2EDuration="7.440985155s" podCreationTimestamp="2026-01-09 13:56:52 +0000 UTC" firstStartedPulling="2026-01-09 13:56:54.366257094 +0000 UTC m=+1593.914096544" lastFinishedPulling="2026-01-09 13:56:58.205242848 +0000 UTC m=+1597.753082298" observedRunningTime="2026-01-09 13:56:59.440022841 +0000 UTC m=+1598.987862301" watchObservedRunningTime="2026-01-09 13:56:59.440985155 +0000 UTC m=+1598.988824605"
Jan 09 13:57:03 crc kubenswrapper[4919]: I0109 13:57:03.175332 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-q5drm"
Jan 09 13:57:03 crc kubenswrapper[4919]: I0109 13:57:03.175700 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-q5drm"
Jan 09 13:57:03 crc kubenswrapper[4919]: I0109 13:57:03.220471 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-q5drm"
Jan 09 13:57:03 crc kubenswrapper[4919]: I0109 13:57:03.501202 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-q5drm"
Jan 09 13:57:03 crc kubenswrapper[4919]: I0109 13:57:03.565189 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q5drm"]
Jan 09 13:57:05 crc kubenswrapper[4919]: I0109 13:57:05.474395 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-q5drm" podUID="cae1e993-adc2-4d9c-abad-84ab5f41f98b" containerName="registry-server" containerID="cri-o://005d4ea544517843af0aab73e4af2c40eabbe0253e0c5814a62c2f1dff417817" gracePeriod=2
Jan 09 13:57:05 crc kubenswrapper[4919]: I0109 13:57:05.751938 4919 scope.go:117] "RemoveContainer" containerID="97d6f4470740f89d0ab801450db4dd244e16b55cda5a62a0b5edd2cd276ca373"
Jan 09 13:57:05 crc kubenswrapper[4919]: E0109 13:57:05.752506 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c"
Jan 09 13:57:06 crc kubenswrapper[4919]: I0109 13:57:06.485478 4919 generic.go:334] "Generic (PLEG): container finished" podID="cae1e993-adc2-4d9c-abad-84ab5f41f98b" containerID="005d4ea544517843af0aab73e4af2c40eabbe0253e0c5814a62c2f1dff417817" exitCode=0
Jan 09 13:57:06 crc kubenswrapper[4919]: I0109 13:57:06.485566 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q5drm" event={"ID":"cae1e993-adc2-4d9c-abad-84ab5f41f98b","Type":"ContainerDied","Data":"005d4ea544517843af0aab73e4af2c40eabbe0253e0c5814a62c2f1dff417817"}
Jan 09 13:57:06 crc kubenswrapper[4919]: I0109 13:57:06.485833 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q5drm" event={"ID":"cae1e993-adc2-4d9c-abad-84ab5f41f98b","Type":"ContainerDied","Data":"2f3eb877495ea643d3e462c09fa5e4a3c0d2b5d718b43c491532e73b1988755a"}
Jan 09 13:57:06 crc kubenswrapper[4919]: I0109 13:57:06.485851 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f3eb877495ea643d3e462c09fa5e4a3c0d2b5d718b43c491532e73b1988755a"
Jan 09 13:57:06 crc kubenswrapper[4919]: I0109 13:57:06.540767 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q5drm"
Jan 09 13:57:06 crc kubenswrapper[4919]: I0109 13:57:06.698328 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cae1e993-adc2-4d9c-abad-84ab5f41f98b-utilities\") pod \"cae1e993-adc2-4d9c-abad-84ab5f41f98b\" (UID: \"cae1e993-adc2-4d9c-abad-84ab5f41f98b\") "
Jan 09 13:57:06 crc kubenswrapper[4919]: I0109 13:57:06.698389 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cae1e993-adc2-4d9c-abad-84ab5f41f98b-catalog-content\") pod \"cae1e993-adc2-4d9c-abad-84ab5f41f98b\" (UID: \"cae1e993-adc2-4d9c-abad-84ab5f41f98b\") "
Jan 09 13:57:06 crc kubenswrapper[4919]: I0109 13:57:06.698422 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pm6zz\" (UniqueName: \"kubernetes.io/projected/cae1e993-adc2-4d9c-abad-84ab5f41f98b-kube-api-access-pm6zz\") pod \"cae1e993-adc2-4d9c-abad-84ab5f41f98b\" (UID: \"cae1e993-adc2-4d9c-abad-84ab5f41f98b\") "
Jan 09 13:57:06 crc kubenswrapper[4919]: I0109 13:57:06.699186 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cae1e993-adc2-4d9c-abad-84ab5f41f98b-utilities" (OuterVolumeSpecName: "utilities") pod "cae1e993-adc2-4d9c-abad-84ab5f41f98b" (UID: "cae1e993-adc2-4d9c-abad-84ab5f41f98b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 13:57:06 crc kubenswrapper[4919]: I0109 13:57:06.705255 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cae1e993-adc2-4d9c-abad-84ab5f41f98b-kube-api-access-pm6zz" (OuterVolumeSpecName: "kube-api-access-pm6zz") pod "cae1e993-adc2-4d9c-abad-84ab5f41f98b" (UID: "cae1e993-adc2-4d9c-abad-84ab5f41f98b"). InnerVolumeSpecName "kube-api-access-pm6zz". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:57:06 crc kubenswrapper[4919]: I0109 13:57:06.801789 4919 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cae1e993-adc2-4d9c-abad-84ab5f41f98b-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 13:57:06 crc kubenswrapper[4919]: I0109 13:57:06.801835 4919 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cae1e993-adc2-4d9c-abad-84ab5f41f98b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 13:57:06 crc kubenswrapper[4919]: I0109 13:57:06.801856 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pm6zz\" (UniqueName: \"kubernetes.io/projected/cae1e993-adc2-4d9c-abad-84ab5f41f98b-kube-api-access-pm6zz\") on node \"crc\" DevicePath \"\"" Jan 09 13:57:07 crc kubenswrapper[4919]: I0109 13:57:07.497103 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q5drm" Jan 09 13:57:07 crc kubenswrapper[4919]: I0109 13:57:07.527038 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q5drm"] Jan 09 13:57:07 crc kubenswrapper[4919]: I0109 13:57:07.535996 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-q5drm"] Jan 09 13:57:08 crc kubenswrapper[4919]: I0109 13:57:08.765090 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cae1e993-adc2-4d9c-abad-84ab5f41f98b" path="/var/lib/kubelet/pods/cae1e993-adc2-4d9c-abad-84ab5f41f98b/volumes" Jan 09 13:57:20 crc kubenswrapper[4919]: I0109 13:57:20.758704 4919 scope.go:117] "RemoveContainer" containerID="97d6f4470740f89d0ab801450db4dd244e16b55cda5a62a0b5edd2cd276ca373" Jan 09 13:57:20 crc kubenswrapper[4919]: E0109 13:57:20.760743 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 13:57:30 crc kubenswrapper[4919]: I0109 13:57:30.381790 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gc8v5"] Jan 09 13:57:30 crc kubenswrapper[4919]: E0109 13:57:30.382817 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cae1e993-adc2-4d9c-abad-84ab5f41f98b" containerName="extract-content" Jan 09 13:57:30 crc kubenswrapper[4919]: I0109 13:57:30.382834 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="cae1e993-adc2-4d9c-abad-84ab5f41f98b" containerName="extract-content" Jan 09 13:57:30 crc kubenswrapper[4919]: E0109 13:57:30.382846 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cae1e993-adc2-4d9c-abad-84ab5f41f98b" containerName="extract-utilities" Jan 09 13:57:30 crc kubenswrapper[4919]: I0109 13:57:30.382854 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="cae1e993-adc2-4d9c-abad-84ab5f41f98b" containerName="extract-utilities" Jan 09 13:57:30 crc kubenswrapper[4919]: E0109 13:57:30.382864 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cae1e993-adc2-4d9c-abad-84ab5f41f98b" containerName="registry-server" Jan 09 13:57:30 crc kubenswrapper[4919]: I0109 
13:57:30.382871 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="cae1e993-adc2-4d9c-abad-84ab5f41f98b" containerName="registry-server" Jan 09 13:57:30 crc kubenswrapper[4919]: I0109 13:57:30.383073 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="cae1e993-adc2-4d9c-abad-84ab5f41f98b" containerName="registry-server" Jan 09 13:57:30 crc kubenswrapper[4919]: I0109 13:57:30.384879 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gc8v5" Jan 09 13:57:30 crc kubenswrapper[4919]: I0109 13:57:30.398578 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gc8v5"] Jan 09 13:57:30 crc kubenswrapper[4919]: I0109 13:57:30.490343 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqfwp\" (UniqueName: \"kubernetes.io/projected/3276fb70-feef-42ad-9e54-61fc34aa42b9-kube-api-access-vqfwp\") pod \"redhat-marketplace-gc8v5\" (UID: \"3276fb70-feef-42ad-9e54-61fc34aa42b9\") " pod="openshift-marketplace/redhat-marketplace-gc8v5" Jan 09 13:57:30 crc kubenswrapper[4919]: I0109 13:57:30.490447 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3276fb70-feef-42ad-9e54-61fc34aa42b9-catalog-content\") pod \"redhat-marketplace-gc8v5\" (UID: \"3276fb70-feef-42ad-9e54-61fc34aa42b9\") " pod="openshift-marketplace/redhat-marketplace-gc8v5" Jan 09 13:57:30 crc kubenswrapper[4919]: I0109 13:57:30.490525 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3276fb70-feef-42ad-9e54-61fc34aa42b9-utilities\") pod \"redhat-marketplace-gc8v5\" (UID: \"3276fb70-feef-42ad-9e54-61fc34aa42b9\") " pod="openshift-marketplace/redhat-marketplace-gc8v5" Jan 09 13:57:30 crc kubenswrapper[4919]: I0109 13:57:30.592500 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqfwp\" (UniqueName: \"kubernetes.io/projected/3276fb70-feef-42ad-9e54-61fc34aa42b9-kube-api-access-vqfwp\") pod \"redhat-marketplace-gc8v5\" (UID: \"3276fb70-feef-42ad-9e54-61fc34aa42b9\") " pod="openshift-marketplace/redhat-marketplace-gc8v5" Jan 09 13:57:30 crc kubenswrapper[4919]: I0109 13:57:30.592595 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3276fb70-feef-42ad-9e54-61fc34aa42b9-catalog-content\") pod \"redhat-marketplace-gc8v5\" (UID: \"3276fb70-feef-42ad-9e54-61fc34aa42b9\") " pod="openshift-marketplace/redhat-marketplace-gc8v5" Jan 09 13:57:30 crc kubenswrapper[4919]: I0109 13:57:30.592659 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3276fb70-feef-42ad-9e54-61fc34aa42b9-utilities\") pod \"redhat-marketplace-gc8v5\" (UID: \"3276fb70-feef-42ad-9e54-61fc34aa42b9\") " pod="openshift-marketplace/redhat-marketplace-gc8v5" Jan 09 13:57:30 crc kubenswrapper[4919]: I0109 13:57:30.593093 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3276fb70-feef-42ad-9e54-61fc34aa42b9-utilities\") pod \"redhat-marketplace-gc8v5\" (UID: \"3276fb70-feef-42ad-9e54-61fc34aa42b9\") " pod="openshift-marketplace/redhat-marketplace-gc8v5" Jan 09 13:57:30 crc kubenswrapper[4919]: 
Jan 09 13:57:30 crc kubenswrapper[4919]: I0109 13:57:30.593235 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3276fb70-feef-42ad-9e54-61fc34aa42b9-catalog-content\") pod \"redhat-marketplace-gc8v5\" (UID: \"3276fb70-feef-42ad-9e54-61fc34aa42b9\") " pod="openshift-marketplace/redhat-marketplace-gc8v5"
Jan 09 13:57:30 crc kubenswrapper[4919]: I0109 13:57:30.614188 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqfwp\" (UniqueName: \"kubernetes.io/projected/3276fb70-feef-42ad-9e54-61fc34aa42b9-kube-api-access-vqfwp\") pod \"redhat-marketplace-gc8v5\" (UID: \"3276fb70-feef-42ad-9e54-61fc34aa42b9\") " pod="openshift-marketplace/redhat-marketplace-gc8v5"
Jan 09 13:57:30 crc kubenswrapper[4919]: I0109 13:57:30.751546 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gc8v5"
Jan 09 13:57:31 crc kubenswrapper[4919]: I0109 13:57:31.260319 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gc8v5"]
Jan 09 13:57:31 crc kubenswrapper[4919]: W0109 13:57:31.268059 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3276fb70_feef_42ad_9e54_61fc34aa42b9.slice/crio-956cef78d1a2fe0e472e7dd84a18d725026a56fe986261f760d16ff2a0211e52 WatchSource:0}: Error finding container 956cef78d1a2fe0e472e7dd84a18d725026a56fe986261f760d16ff2a0211e52: Status 404 returned error can't find the container with id 956cef78d1a2fe0e472e7dd84a18d725026a56fe986261f760d16ff2a0211e52
Jan 09 13:57:31 crc kubenswrapper[4919]: I0109 13:57:31.720317 4919 generic.go:334] "Generic (PLEG): container finished" podID="3276fb70-feef-42ad-9e54-61fc34aa42b9" containerID="7a983fd89ea21f442e0373fb991b3d5e6bed0fcdbf0bfd83efbb386cc3e355f3" exitCode=0
Jan 09 13:57:31 crc kubenswrapper[4919]: I0109 13:57:31.720385 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gc8v5" event={"ID":"3276fb70-feef-42ad-9e54-61fc34aa42b9","Type":"ContainerDied","Data":"7a983fd89ea21f442e0373fb991b3d5e6bed0fcdbf0bfd83efbb386cc3e355f3"}
Jan 09 13:57:31 crc kubenswrapper[4919]: I0109 13:57:31.720629 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gc8v5" event={"ID":"3276fb70-feef-42ad-9e54-61fc34aa42b9","Type":"ContainerStarted","Data":"956cef78d1a2fe0e472e7dd84a18d725026a56fe986261f760d16ff2a0211e52"}
Jan 09 13:57:32 crc kubenswrapper[4919]: I0109 13:57:32.752154 4919 scope.go:117] "RemoveContainer" containerID="97d6f4470740f89d0ab801450db4dd244e16b55cda5a62a0b5edd2cd276ca373"
Jan 09 13:57:32 crc kubenswrapper[4919]: E0109 13:57:32.752621 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c"
Jan 09 13:57:33 crc kubenswrapper[4919]: I0109 13:57:33.739340 4919 generic.go:334] "Generic (PLEG): container finished" podID="3276fb70-feef-42ad-9e54-61fc34aa42b9" containerID="ba64dbe408b7abafd8d944b4e165bacf9ca7fc2d1f786fb45913a3bfc59215e3" exitCode=0
Jan 09 13:57:33 crc kubenswrapper[4919]: I0109 13:57:33.739630 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gc8v5" event={"ID":"3276fb70-feef-42ad-9e54-61fc34aa42b9","Type":"ContainerDied","Data":"ba64dbe408b7abafd8d944b4e165bacf9ca7fc2d1f786fb45913a3bfc59215e3"}
Jan 09 13:57:34 crc kubenswrapper[4919]: I0109 13:57:34.762340 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gc8v5" event={"ID":"3276fb70-feef-42ad-9e54-61fc34aa42b9","Type":"ContainerStarted","Data":"b44017ec1f7d209d221cb54616506f05fad654413c82d959a563e8b686565a13"}
Jan 09 13:57:34 crc kubenswrapper[4919]: I0109 13:57:34.781579 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gc8v5" podStartSLOduration=2.174749139 podStartE2EDuration="4.781535343s" podCreationTimestamp="2026-01-09 13:57:30 +0000 UTC" firstStartedPulling="2026-01-09 13:57:31.722788676 +0000 UTC m=+1631.270628126" lastFinishedPulling="2026-01-09 13:57:34.32957488 +0000 UTC m=+1633.877414330" observedRunningTime="2026-01-09 13:57:34.774263352 +0000 UTC m=+1634.322102812" watchObservedRunningTime="2026-01-09 13:57:34.781535343 +0000 UTC m=+1634.329374793"
Jan 09 13:57:40 crc kubenswrapper[4919]: I0109 13:57:40.764516 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gc8v5"
Jan 09 13:57:40 crc kubenswrapper[4919]: I0109 13:57:40.765165 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gc8v5"
Jan 09 13:57:40 crc kubenswrapper[4919]: I0109 13:57:40.806659 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gc8v5"
Jan 09 13:57:40 crc kubenswrapper[4919]: I0109 13:57:40.858123 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gc8v5"
Jan 09 13:57:41 crc kubenswrapper[4919]: I0109 13:57:41.043765 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gc8v5"]
Jan 09 13:57:42 crc kubenswrapper[4919]: I0109 13:57:42.827855 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gc8v5" podUID="3276fb70-feef-42ad-9e54-61fc34aa42b9" containerName="registry-server" containerID="cri-o://b44017ec1f7d209d221cb54616506f05fad654413c82d959a563e8b686565a13" gracePeriod=2
Jan 09 13:57:43 crc kubenswrapper[4919]: I0109 13:57:43.397557 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gc8v5"
Jan 09 13:57:43 crc kubenswrapper[4919]: I0109 13:57:43.454582 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3276fb70-feef-42ad-9e54-61fc34aa42b9-catalog-content\") pod \"3276fb70-feef-42ad-9e54-61fc34aa42b9\" (UID: \"3276fb70-feef-42ad-9e54-61fc34aa42b9\") "
Jan 09 13:57:43 crc kubenswrapper[4919]: I0109 13:57:43.454785 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqfwp\" (UniqueName: \"kubernetes.io/projected/3276fb70-feef-42ad-9e54-61fc34aa42b9-kube-api-access-vqfwp\") pod \"3276fb70-feef-42ad-9e54-61fc34aa42b9\" (UID: \"3276fb70-feef-42ad-9e54-61fc34aa42b9\") "
Jan 09 13:57:43 crc kubenswrapper[4919]: I0109 13:57:43.455034 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3276fb70-feef-42ad-9e54-61fc34aa42b9-utilities\") pod \"3276fb70-feef-42ad-9e54-61fc34aa42b9\" (UID: \"3276fb70-feef-42ad-9e54-61fc34aa42b9\") "
Jan 09 13:57:43 crc kubenswrapper[4919]: I0109 13:57:43.456647 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3276fb70-feef-42ad-9e54-61fc34aa42b9-utilities" (OuterVolumeSpecName: "utilities") pod "3276fb70-feef-42ad-9e54-61fc34aa42b9" (UID: "3276fb70-feef-42ad-9e54-61fc34aa42b9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 13:57:43 crc kubenswrapper[4919]: I0109 13:57:43.463391 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3276fb70-feef-42ad-9e54-61fc34aa42b9-kube-api-access-vqfwp" (OuterVolumeSpecName: "kube-api-access-vqfwp") pod "3276fb70-feef-42ad-9e54-61fc34aa42b9" (UID: "3276fb70-feef-42ad-9e54-61fc34aa42b9"). InnerVolumeSpecName "kube-api-access-vqfwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 13:57:43 crc kubenswrapper[4919]: I0109 13:57:43.481232 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3276fb70-feef-42ad-9e54-61fc34aa42b9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3276fb70-feef-42ad-9e54-61fc34aa42b9" (UID: "3276fb70-feef-42ad-9e54-61fc34aa42b9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 13:57:43 crc kubenswrapper[4919]: I0109 13:57:43.557247 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqfwp\" (UniqueName: \"kubernetes.io/projected/3276fb70-feef-42ad-9e54-61fc34aa42b9-kube-api-access-vqfwp\") on node \"crc\" DevicePath \"\""
Jan 09 13:57:43 crc kubenswrapper[4919]: I0109 13:57:43.557300 4919 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3276fb70-feef-42ad-9e54-61fc34aa42b9-utilities\") on node \"crc\" DevicePath \"\""
Jan 09 13:57:43 crc kubenswrapper[4919]: I0109 13:57:43.557315 4919 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3276fb70-feef-42ad-9e54-61fc34aa42b9-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 09 13:57:43 crc kubenswrapper[4919]: I0109 13:57:43.841816 4919 generic.go:334] "Generic (PLEG): container finished" podID="3276fb70-feef-42ad-9e54-61fc34aa42b9" containerID="b44017ec1f7d209d221cb54616506f05fad654413c82d959a563e8b686565a13" exitCode=0
Jan 09 13:57:43 crc kubenswrapper[4919]: I0109 13:57:43.841862 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gc8v5" event={"ID":"3276fb70-feef-42ad-9e54-61fc34aa42b9","Type":"ContainerDied","Data":"b44017ec1f7d209d221cb54616506f05fad654413c82d959a563e8b686565a13"}
Jan 09 13:57:43 crc kubenswrapper[4919]: I0109 13:57:43.841892 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gc8v5" event={"ID":"3276fb70-feef-42ad-9e54-61fc34aa42b9","Type":"ContainerDied","Data":"956cef78d1a2fe0e472e7dd84a18d725026a56fe986261f760d16ff2a0211e52"}
Jan 09 13:57:43 crc kubenswrapper[4919]: I0109 13:57:43.841909 4919 scope.go:117] "RemoveContainer" containerID="b44017ec1f7d209d221cb54616506f05fad654413c82d959a563e8b686565a13"
Jan 09 13:57:43 crc kubenswrapper[4919]: I0109 13:57:43.841916 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gc8v5"
Jan 09 13:57:43 crc kubenswrapper[4919]: I0109 13:57:43.869106 4919 scope.go:117] "RemoveContainer" containerID="ba64dbe408b7abafd8d944b4e165bacf9ca7fc2d1f786fb45913a3bfc59215e3"
Jan 09 13:57:43 crc kubenswrapper[4919]: I0109 13:57:43.882032 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gc8v5"]
Jan 09 13:57:43 crc kubenswrapper[4919]: I0109 13:57:43.891874 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gc8v5"]
Jan 09 13:57:43 crc kubenswrapper[4919]: I0109 13:57:43.902503 4919 scope.go:117] "RemoveContainer" containerID="7a983fd89ea21f442e0373fb991b3d5e6bed0fcdbf0bfd83efbb386cc3e355f3"
Jan 09 13:57:43 crc kubenswrapper[4919]: I0109 13:57:43.948813 4919 scope.go:117] "RemoveContainer" containerID="b44017ec1f7d209d221cb54616506f05fad654413c82d959a563e8b686565a13"
Jan 09 13:57:43 crc kubenswrapper[4919]: E0109 13:57:43.949287 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b44017ec1f7d209d221cb54616506f05fad654413c82d959a563e8b686565a13\": container with ID starting with b44017ec1f7d209d221cb54616506f05fad654413c82d959a563e8b686565a13 not found: ID does not exist" containerID="b44017ec1f7d209d221cb54616506f05fad654413c82d959a563e8b686565a13"
Jan 09 13:57:43 crc kubenswrapper[4919]: I0109 13:57:43.949332 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b44017ec1f7d209d221cb54616506f05fad654413c82d959a563e8b686565a13"} err="failed to get container status \"b44017ec1f7d209d221cb54616506f05fad654413c82d959a563e8b686565a13\": rpc error: code = NotFound desc = could not find container \"b44017ec1f7d209d221cb54616506f05fad654413c82d959a563e8b686565a13\": container with ID starting with b44017ec1f7d209d221cb54616506f05fad654413c82d959a563e8b686565a13 not found: ID does not exist"
Jan 09 13:57:43 crc kubenswrapper[4919]: I0109 13:57:43.949360 4919 scope.go:117] "RemoveContainer" containerID="ba64dbe408b7abafd8d944b4e165bacf9ca7fc2d1f786fb45913a3bfc59215e3"
Jan 09 13:57:43 crc kubenswrapper[4919]: E0109 13:57:43.949700 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba64dbe408b7abafd8d944b4e165bacf9ca7fc2d1f786fb45913a3bfc59215e3\": container with ID starting with ba64dbe408b7abafd8d944b4e165bacf9ca7fc2d1f786fb45913a3bfc59215e3 not found: ID does not exist" containerID="ba64dbe408b7abafd8d944b4e165bacf9ca7fc2d1f786fb45913a3bfc59215e3"
Jan 09 13:57:43 crc kubenswrapper[4919]: I0109 13:57:43.949747 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba64dbe408b7abafd8d944b4e165bacf9ca7fc2d1f786fb45913a3bfc59215e3"} err="failed to get container status \"ba64dbe408b7abafd8d944b4e165bacf9ca7fc2d1f786fb45913a3bfc59215e3\": rpc error: code = NotFound desc = could not find container \"ba64dbe408b7abafd8d944b4e165bacf9ca7fc2d1f786fb45913a3bfc59215e3\": container with ID starting with ba64dbe408b7abafd8d944b4e165bacf9ca7fc2d1f786fb45913a3bfc59215e3 not found: ID does not exist"
Jan 09 13:57:43 crc kubenswrapper[4919]: I0109 13:57:43.949775 4919 scope.go:117] "RemoveContainer" containerID="7a983fd89ea21f442e0373fb991b3d5e6bed0fcdbf0bfd83efbb386cc3e355f3"
Jan 09 13:57:43 crc kubenswrapper[4919]: E0109 13:57:43.950052 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a983fd89ea21f442e0373fb991b3d5e6bed0fcdbf0bfd83efbb386cc3e355f3\": container with ID starting with 7a983fd89ea21f442e0373fb991b3d5e6bed0fcdbf0bfd83efbb386cc3e355f3 not found: ID does not exist" containerID="7a983fd89ea21f442e0373fb991b3d5e6bed0fcdbf0bfd83efbb386cc3e355f3"
Jan 09 13:57:43 crc kubenswrapper[4919]: I0109 13:57:43.950091 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a983fd89ea21f442e0373fb991b3d5e6bed0fcdbf0bfd83efbb386cc3e355f3"} err="failed to get container status \"7a983fd89ea21f442e0373fb991b3d5e6bed0fcdbf0bfd83efbb386cc3e355f3\": rpc error: code = NotFound desc = could not find container \"7a983fd89ea21f442e0373fb991b3d5e6bed0fcdbf0bfd83efbb386cc3e355f3\": container with ID starting with 7a983fd89ea21f442e0373fb991b3d5e6bed0fcdbf0bfd83efbb386cc3e355f3 not found: ID does not exist"
Jan 09 13:57:44 crc kubenswrapper[4919]: I0109 13:57:44.771091 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3276fb70-feef-42ad-9e54-61fc34aa42b9" path="/var/lib/kubelet/pods/3276fb70-feef-42ad-9e54-61fc34aa42b9/volumes"
Jan 09 13:57:46 crc kubenswrapper[4919]: I0109 13:57:46.752426 4919 scope.go:117] "RemoveContainer" containerID="97d6f4470740f89d0ab801450db4dd244e16b55cda5a62a0b5edd2cd276ca373"
Jan 09 13:57:46 crc kubenswrapper[4919]: E0109 13:57:46.753052 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c"
Jan 09 13:58:00 crc kubenswrapper[4919]: I0109 13:58:00.760296 4919 scope.go:117] "RemoveContainer" containerID="97d6f4470740f89d0ab801450db4dd244e16b55cda5a62a0b5edd2cd276ca373"
Jan 09 13:58:00 crc kubenswrapper[4919]: E0109 13:58:00.761290 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c"
Jan 09 13:58:13 crc kubenswrapper[4919]: I0109 13:58:13.751740 4919 scope.go:117] "RemoveContainer" containerID="97d6f4470740f89d0ab801450db4dd244e16b55cda5a62a0b5edd2cd276ca373"
Jan 09 13:58:13 crc kubenswrapper[4919]: E0109 13:58:13.752447 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c"
Jan 09 13:58:24 crc kubenswrapper[4919]: I0109 13:58:24.683693 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nv246"]
containerName="registry-server" Jan 09 13:58:24 crc kubenswrapper[4919]: I0109 13:58:24.684580 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="3276fb70-feef-42ad-9e54-61fc34aa42b9" containerName="registry-server" Jan 09 13:58:24 crc kubenswrapper[4919]: E0109 13:58:24.684632 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3276fb70-feef-42ad-9e54-61fc34aa42b9" containerName="extract-utilities" Jan 09 13:58:24 crc kubenswrapper[4919]: I0109 13:58:24.684638 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="3276fb70-feef-42ad-9e54-61fc34aa42b9" containerName="extract-utilities" Jan 09 13:58:24 crc kubenswrapper[4919]: E0109 13:58:24.684648 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3276fb70-feef-42ad-9e54-61fc34aa42b9" containerName="extract-content" Jan 09 13:58:24 crc kubenswrapper[4919]: I0109 13:58:24.684655 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="3276fb70-feef-42ad-9e54-61fc34aa42b9" containerName="extract-content" Jan 09 13:58:24 crc kubenswrapper[4919]: I0109 13:58:24.684867 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="3276fb70-feef-42ad-9e54-61fc34aa42b9" containerName="registry-server" Jan 09 13:58:24 crc kubenswrapper[4919]: I0109 13:58:24.686645 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nv246" Jan 09 13:58:24 crc kubenswrapper[4919]: I0109 13:58:24.700655 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nv246"] Jan 09 13:58:24 crc kubenswrapper[4919]: I0109 13:58:24.806558 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s2kg\" (UniqueName: \"kubernetes.io/projected/678068f7-bf03-493b-85f3-b52db3ea6770-kube-api-access-8s2kg\") pod \"community-operators-nv246\" (UID: \"678068f7-bf03-493b-85f3-b52db3ea6770\") " pod="openshift-marketplace/community-operators-nv246" Jan 09 13:58:24 crc kubenswrapper[4919]: I0109 13:58:24.806634 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/678068f7-bf03-493b-85f3-b52db3ea6770-utilities\") pod \"community-operators-nv246\" (UID: \"678068f7-bf03-493b-85f3-b52db3ea6770\") " pod="openshift-marketplace/community-operators-nv246" Jan 09 13:58:24 crc kubenswrapper[4919]: I0109 13:58:24.806801 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/678068f7-bf03-493b-85f3-b52db3ea6770-catalog-content\") pod \"community-operators-nv246\" (UID: \"678068f7-bf03-493b-85f3-b52db3ea6770\") " pod="openshift-marketplace/community-operators-nv246" Jan 09 13:58:24 crc kubenswrapper[4919]: I0109 13:58:24.909393 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8s2kg\" (UniqueName: \"kubernetes.io/projected/678068f7-bf03-493b-85f3-b52db3ea6770-kube-api-access-8s2kg\") pod \"community-operators-nv246\" (UID: \"678068f7-bf03-493b-85f3-b52db3ea6770\") " pod="openshift-marketplace/community-operators-nv246" Jan 09 13:58:24 crc kubenswrapper[4919]: I0109 13:58:24.909470 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/678068f7-bf03-493b-85f3-b52db3ea6770-utilities\") pod \"community-operators-nv246\" (UID: 
\"678068f7-bf03-493b-85f3-b52db3ea6770\") " pod="openshift-marketplace/community-operators-nv246" Jan 09 13:58:24 crc kubenswrapper[4919]: I0109 13:58:24.909676 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/678068f7-bf03-493b-85f3-b52db3ea6770-catalog-content\") pod \"community-operators-nv246\" (UID: \"678068f7-bf03-493b-85f3-b52db3ea6770\") " pod="openshift-marketplace/community-operators-nv246" Jan 09 13:58:24 crc kubenswrapper[4919]: I0109 13:58:24.910731 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/678068f7-bf03-493b-85f3-b52db3ea6770-catalog-content\") pod \"community-operators-nv246\" (UID: \"678068f7-bf03-493b-85f3-b52db3ea6770\") " pod="openshift-marketplace/community-operators-nv246" Jan 09 13:58:24 crc kubenswrapper[4919]: I0109 13:58:24.910871 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/678068f7-bf03-493b-85f3-b52db3ea6770-utilities\") pod \"community-operators-nv246\" (UID: \"678068f7-bf03-493b-85f3-b52db3ea6770\") " pod="openshift-marketplace/community-operators-nv246" Jan 09 13:58:24 crc kubenswrapper[4919]: I0109 13:58:24.944734 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s2kg\" (UniqueName: \"kubernetes.io/projected/678068f7-bf03-493b-85f3-b52db3ea6770-kube-api-access-8s2kg\") pod \"community-operators-nv246\" (UID: \"678068f7-bf03-493b-85f3-b52db3ea6770\") " pod="openshift-marketplace/community-operators-nv246" Jan 09 13:58:25 crc kubenswrapper[4919]: I0109 13:58:25.008819 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nv246" Jan 09 13:58:25 crc kubenswrapper[4919]: I0109 13:58:25.595349 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nv246"] Jan 09 13:58:26 crc kubenswrapper[4919]: I0109 13:58:26.251947 4919 generic.go:334] "Generic (PLEG): container finished" podID="678068f7-bf03-493b-85f3-b52db3ea6770" containerID="1149aced85e2dff300394c7c109238111bf436ecaafc2fc3bf169ac078e815d9" exitCode=0 Jan 09 13:58:26 crc kubenswrapper[4919]: I0109 13:58:26.252010 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nv246" event={"ID":"678068f7-bf03-493b-85f3-b52db3ea6770","Type":"ContainerDied","Data":"1149aced85e2dff300394c7c109238111bf436ecaafc2fc3bf169ac078e815d9"} Jan 09 13:58:26 crc kubenswrapper[4919]: I0109 13:58:26.252070 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nv246" event={"ID":"678068f7-bf03-493b-85f3-b52db3ea6770","Type":"ContainerStarted","Data":"eea24490434724cd306dbab49deeeee60c858fdae14348579f68c4df5ad32812"} Jan 09 13:58:28 crc kubenswrapper[4919]: I0109 13:58:28.752610 4919 scope.go:117] "RemoveContainer" containerID="97d6f4470740f89d0ab801450db4dd244e16b55cda5a62a0b5edd2cd276ca373" Jan 09 13:58:28 crc kubenswrapper[4919]: E0109 13:58:28.753386 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" 
podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 13:58:35 crc kubenswrapper[4919]: I0109 13:58:35.392125 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nv246" event={"ID":"678068f7-bf03-493b-85f3-b52db3ea6770","Type":"ContainerStarted","Data":"d16e55e2a5d1e6d71dad8bf2785f8a269c0574910caef422e2a28f69520664c4"} Jan 09 13:58:36 crc kubenswrapper[4919]: I0109 13:58:36.402736 4919 generic.go:334] "Generic (PLEG): container finished" podID="678068f7-bf03-493b-85f3-b52db3ea6770" containerID="d16e55e2a5d1e6d71dad8bf2785f8a269c0574910caef422e2a28f69520664c4" exitCode=0 Jan 09 13:58:36 crc kubenswrapper[4919]: I0109 13:58:36.402831 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nv246" event={"ID":"678068f7-bf03-493b-85f3-b52db3ea6770","Type":"ContainerDied","Data":"d16e55e2a5d1e6d71dad8bf2785f8a269c0574910caef422e2a28f69520664c4"} Jan 09 13:58:39 crc kubenswrapper[4919]: I0109 13:58:39.752515 4919 scope.go:117] "RemoveContainer" containerID="97d6f4470740f89d0ab801450db4dd244e16b55cda5a62a0b5edd2cd276ca373" Jan 09 13:58:39 crc kubenswrapper[4919]: E0109 13:58:39.753686 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 13:58:43 crc kubenswrapper[4919]: I0109 13:58:43.497198 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nv246" event={"ID":"678068f7-bf03-493b-85f3-b52db3ea6770","Type":"ContainerStarted","Data":"12e0663ca343d7b5d9ea1730c12b417a3d13682979ada86720551bf7191ee6a7"} Jan 09 13:58:43 crc kubenswrapper[4919]: I0109 13:58:43.523094 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nv246" podStartSLOduration=3.795070264 podStartE2EDuration="19.523071788s" podCreationTimestamp="2026-01-09 13:58:24 +0000 UTC" firstStartedPulling="2026-01-09 13:58:26.25438461 +0000 UTC m=+1685.802224060" lastFinishedPulling="2026-01-09 13:58:41.982386134 +0000 UTC m=+1701.530225584" observedRunningTime="2026-01-09 13:58:43.514913875 +0000 UTC m=+1703.062753345" watchObservedRunningTime="2026-01-09 13:58:43.523071788 +0000 UTC m=+1703.070911238" Jan 09 13:58:45 crc kubenswrapper[4919]: I0109 13:58:45.010452 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nv246" Jan 09 13:58:45 crc kubenswrapper[4919]: I0109 13:58:45.010812 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nv246" Jan 09 13:58:45 crc kubenswrapper[4919]: I0109 13:58:45.057573 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nv246" Jan 09 13:58:54 crc kubenswrapper[4919]: I0109 13:58:54.751567 4919 scope.go:117] "RemoveContainer" containerID="97d6f4470740f89d0ab801450db4dd244e16b55cda5a62a0b5edd2cd276ca373" Jan 09 13:58:54 crc kubenswrapper[4919]: E0109 13:58:54.752305 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 13:58:55 crc kubenswrapper[4919]: I0109 13:58:55.074627 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nv246" Jan 09 13:58:55 crc kubenswrapper[4919]: I0109 13:58:55.164368 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nv246"] Jan 09 13:58:55 crc kubenswrapper[4919]: I0109 13:58:55.223732 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-crxrx"] Jan 09 13:58:55 crc kubenswrapper[4919]: I0109 13:58:55.225001 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-crxrx" podUID="1969176c-2e40-4b30-9364-994e7f6d99e2" containerName="registry-server" containerID="cri-o://931ac0034b628e0797fa1e0345aafea2cb48f06495f87c6475c0bf36f242ad42" gracePeriod=2 Jan 09 13:58:56 crc kubenswrapper[4919]: I0109 13:58:56.605751 4919 generic.go:334] "Generic (PLEG): container finished" podID="1969176c-2e40-4b30-9364-994e7f6d99e2" containerID="931ac0034b628e0797fa1e0345aafea2cb48f06495f87c6475c0bf36f242ad42" exitCode=0 Jan 09 13:58:56 crc kubenswrapper[4919]: I0109 13:58:56.605834 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-crxrx" event={"ID":"1969176c-2e40-4b30-9364-994e7f6d99e2","Type":"ContainerDied","Data":"931ac0034b628e0797fa1e0345aafea2cb48f06495f87c6475c0bf36f242ad42"} Jan 09 13:58:57 crc kubenswrapper[4919]: I0109 13:58:57.733799 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-crxrx" Jan 09 13:58:57 crc kubenswrapper[4919]: I0109 13:58:57.830444 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1969176c-2e40-4b30-9364-994e7f6d99e2-catalog-content\") pod \"1969176c-2e40-4b30-9364-994e7f6d99e2\" (UID: \"1969176c-2e40-4b30-9364-994e7f6d99e2\") " Jan 09 13:58:57 crc kubenswrapper[4919]: I0109 13:58:57.830562 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slhdc\" (UniqueName: \"kubernetes.io/projected/1969176c-2e40-4b30-9364-994e7f6d99e2-kube-api-access-slhdc\") pod \"1969176c-2e40-4b30-9364-994e7f6d99e2\" (UID: \"1969176c-2e40-4b30-9364-994e7f6d99e2\") " Jan 09 13:58:57 crc kubenswrapper[4919]: I0109 13:58:57.830762 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1969176c-2e40-4b30-9364-994e7f6d99e2-utilities\") pod \"1969176c-2e40-4b30-9364-994e7f6d99e2\" (UID: \"1969176c-2e40-4b30-9364-994e7f6d99e2\") " Jan 09 13:58:57 crc kubenswrapper[4919]: I0109 13:58:57.831091 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1969176c-2e40-4b30-9364-994e7f6d99e2-utilities" (OuterVolumeSpecName: "utilities") pod "1969176c-2e40-4b30-9364-994e7f6d99e2" (UID: "1969176c-2e40-4b30-9364-994e7f6d99e2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:58:57 crc kubenswrapper[4919]: I0109 13:58:57.831618 4919 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1969176c-2e40-4b30-9364-994e7f6d99e2-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 13:58:57 crc kubenswrapper[4919]: I0109 13:58:57.836190 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1969176c-2e40-4b30-9364-994e7f6d99e2-kube-api-access-slhdc" (OuterVolumeSpecName: "kube-api-access-slhdc") pod "1969176c-2e40-4b30-9364-994e7f6d99e2" (UID: "1969176c-2e40-4b30-9364-994e7f6d99e2"). InnerVolumeSpecName "kube-api-access-slhdc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:58:57 crc kubenswrapper[4919]: I0109 13:58:57.883303 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1969176c-2e40-4b30-9364-994e7f6d99e2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1969176c-2e40-4b30-9364-994e7f6d99e2" (UID: "1969176c-2e40-4b30-9364-994e7f6d99e2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 13:58:57 crc kubenswrapper[4919]: I0109 13:58:57.933752 4919 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1969176c-2e40-4b30-9364-994e7f6d99e2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 13:58:57 crc kubenswrapper[4919]: I0109 13:58:57.933785 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slhdc\" (UniqueName: \"kubernetes.io/projected/1969176c-2e40-4b30-9364-994e7f6d99e2-kube-api-access-slhdc\") on node \"crc\" DevicePath \"\"" Jan 09 13:58:58 crc kubenswrapper[4919]: I0109 13:58:58.624946 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-crxrx" event={"ID":"1969176c-2e40-4b30-9364-994e7f6d99e2","Type":"ContainerDied","Data":"642ab87587b342f2f9e963ccc600685e73d2a546ce5cef82fbfbe6a4ebadc3da"} Jan 09 13:58:58 crc kubenswrapper[4919]: I0109 13:58:58.624992 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-crxrx" Jan 09 13:58:58 crc kubenswrapper[4919]: I0109 13:58:58.625316 4919 scope.go:117] "RemoveContainer" containerID="931ac0034b628e0797fa1e0345aafea2cb48f06495f87c6475c0bf36f242ad42" Jan 09 13:58:58 crc kubenswrapper[4919]: I0109 13:58:58.648894 4919 scope.go:117] "RemoveContainer" containerID="e6116e44a7eb8b6e2814f3b7ef6e29ffc3d24213552c3e5eb3666df0ccaea9ec" Jan 09 13:58:58 crc kubenswrapper[4919]: I0109 13:58:58.664163 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-crxrx"] Jan 09 13:58:58 crc kubenswrapper[4919]: I0109 13:58:58.674615 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-crxrx"] Jan 09 13:58:58 crc kubenswrapper[4919]: I0109 13:58:58.682462 4919 scope.go:117] "RemoveContainer" containerID="1bc90cb7f258bca0e94560b385fd292dd4ee92b24a3ddf4d03e9eed58e62c7a2" Jan 09 13:58:58 crc kubenswrapper[4919]: I0109 13:58:58.764016 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1969176c-2e40-4b30-9364-994e7f6d99e2" path="/var/lib/kubelet/pods/1969176c-2e40-4b30-9364-994e7f6d99e2/volumes" Jan 09 13:59:00 crc kubenswrapper[4919]: I0109 13:59:00.041769 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-69v6c"] Jan 09 13:59:00 crc kubenswrapper[4919]: I0109 13:59:00.054090 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-69v6c"] Jan 09 13:59:00 crc kubenswrapper[4919]: I0109 13:59:00.763940 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb24524c-29a1-45e7-bbea-76c32b236d1d" path="/var/lib/kubelet/pods/bb24524c-29a1-45e7-bbea-76c32b236d1d/volumes" Jan 09 13:59:01 crc kubenswrapper[4919]: I0109 13:59:01.029938 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-75d2-account-create-update-rpcwm"] Jan 09 13:59:01 crc kubenswrapper[4919]: I0109 13:59:01.039710 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-75d2-account-create-update-rpcwm"] Jan 09 13:59:02 crc kubenswrapper[4919]: I0109 13:59:02.764366 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c5a8c9c-3770-4ed3-ac8a-919fc7bf82e0" path="/var/lib/kubelet/pods/4c5a8c9c-3770-4ed3-ac8a-919fc7bf82e0/volumes" Jan 09 13:59:03 crc kubenswrapper[4919]: I0109 13:59:03.028425 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-69wcc"] Jan 09 13:59:03 crc kubenswrapper[4919]: I0109 13:59:03.040037 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-69wcc"] Jan 09 13:59:04 crc kubenswrapper[4919]: I0109 13:59:04.035747 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-wdlhd"] Jan 09 13:59:04 crc kubenswrapper[4919]: I0109 13:59:04.047309 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-dbd0-account-create-update-q42z2"] Jan 09 13:59:04 crc kubenswrapper[4919]: I0109 13:59:04.058307 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-wdlhd"] Jan 09 13:59:04 crc kubenswrapper[4919]: I0109 13:59:04.067597 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-dbd0-account-create-update-q42z2"] Jan 09 13:59:04 crc kubenswrapper[4919]: I0109 13:59:04.763154 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d94fb93-5c21-4357-8efd-48b8285d4ad9" 
path="/var/lib/kubelet/pods/0d94fb93-5c21-4357-8efd-48b8285d4ad9/volumes" Jan 09 13:59:04 crc kubenswrapper[4919]: I0109 13:59:04.764169 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73a3d3cb-e4d2-4d33-8c46-27b6afa433fa" path="/var/lib/kubelet/pods/73a3d3cb-e4d2-4d33-8c46-27b6afa433fa/volumes" Jan 09 13:59:04 crc kubenswrapper[4919]: I0109 13:59:04.765027 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f403837b-7672-496e-bdf1-9334074246bd" path="/var/lib/kubelet/pods/f403837b-7672-496e-bdf1-9334074246bd/volumes" Jan 09 13:59:05 crc kubenswrapper[4919]: I0109 13:59:05.032666 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-f265-account-create-update-zz4xh"] Jan 09 13:59:05 crc kubenswrapper[4919]: I0109 13:59:05.042556 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-f265-account-create-update-zz4xh"] Jan 09 13:59:06 crc kubenswrapper[4919]: I0109 13:59:06.763028 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbc589b0-3f7a-45c0-9fcd-1f69573d79c9" path="/var/lib/kubelet/pods/fbc589b0-3f7a-45c0-9fcd-1f69573d79c9/volumes" Jan 09 13:59:07 crc kubenswrapper[4919]: I0109 13:59:07.752056 4919 scope.go:117] "RemoveContainer" containerID="97d6f4470740f89d0ab801450db4dd244e16b55cda5a62a0b5edd2cd276ca373" Jan 09 13:59:07 crc kubenswrapper[4919]: E0109 13:59:07.752326 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 13:59:17 crc kubenswrapper[4919]: I0109 13:59:17.844507 4919 generic.go:334] "Generic (PLEG): container finished" podID="2e1540e3-6358-48ae-ac2a-08e90ab54cbb" containerID="045d503fca7670ba567a9b9a8ed7906ed35c99e674e5380e13ffc8eaff2f24b0" exitCode=0 Jan 09 13:59:17 crc kubenswrapper[4919]: I0109 13:59:17.844993 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k54b2" event={"ID":"2e1540e3-6358-48ae-ac2a-08e90ab54cbb","Type":"ContainerDied","Data":"045d503fca7670ba567a9b9a8ed7906ed35c99e674e5380e13ffc8eaff2f24b0"} Jan 09 13:59:19 crc kubenswrapper[4919]: I0109 13:59:19.289567 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k54b2" Jan 09 13:59:19 crc kubenswrapper[4919]: I0109 13:59:19.371910 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2e1540e3-6358-48ae-ac2a-08e90ab54cbb-ssh-key-openstack-edpm-ipam\") pod \"2e1540e3-6358-48ae-ac2a-08e90ab54cbb\" (UID: \"2e1540e3-6358-48ae-ac2a-08e90ab54cbb\") " Jan 09 13:59:19 crc kubenswrapper[4919]: I0109 13:59:19.371975 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2e1540e3-6358-48ae-ac2a-08e90ab54cbb-inventory\") pod \"2e1540e3-6358-48ae-ac2a-08e90ab54cbb\" (UID: \"2e1540e3-6358-48ae-ac2a-08e90ab54cbb\") " Jan 09 13:59:19 crc kubenswrapper[4919]: I0109 13:59:19.372098 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e1540e3-6358-48ae-ac2a-08e90ab54cbb-bootstrap-combined-ca-bundle\") pod \"2e1540e3-6358-48ae-ac2a-08e90ab54cbb\" (UID: \"2e1540e3-6358-48ae-ac2a-08e90ab54cbb\") " Jan 09 13:59:19 crc kubenswrapper[4919]: I0109 13:59:19.372264 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9v42m\" (UniqueName: \"kubernetes.io/projected/2e1540e3-6358-48ae-ac2a-08e90ab54cbb-kube-api-access-9v42m\") pod \"2e1540e3-6358-48ae-ac2a-08e90ab54cbb\" (UID: \"2e1540e3-6358-48ae-ac2a-08e90ab54cbb\") " Jan 09 13:59:19 crc kubenswrapper[4919]: I0109 13:59:19.380592 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e1540e3-6358-48ae-ac2a-08e90ab54cbb-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "2e1540e3-6358-48ae-ac2a-08e90ab54cbb" (UID: "2e1540e3-6358-48ae-ac2a-08e90ab54cbb"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:59:19 crc kubenswrapper[4919]: I0109 13:59:19.386915 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e1540e3-6358-48ae-ac2a-08e90ab54cbb-kube-api-access-9v42m" (OuterVolumeSpecName: "kube-api-access-9v42m") pod "2e1540e3-6358-48ae-ac2a-08e90ab54cbb" (UID: "2e1540e3-6358-48ae-ac2a-08e90ab54cbb"). InnerVolumeSpecName "kube-api-access-9v42m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 13:59:19 crc kubenswrapper[4919]: I0109 13:59:19.406613 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e1540e3-6358-48ae-ac2a-08e90ab54cbb-inventory" (OuterVolumeSpecName: "inventory") pod "2e1540e3-6358-48ae-ac2a-08e90ab54cbb" (UID: "2e1540e3-6358-48ae-ac2a-08e90ab54cbb"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:59:19 crc kubenswrapper[4919]: I0109 13:59:19.409751 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e1540e3-6358-48ae-ac2a-08e90ab54cbb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2e1540e3-6358-48ae-ac2a-08e90ab54cbb" (UID: "2e1540e3-6358-48ae-ac2a-08e90ab54cbb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 13:59:19 crc kubenswrapper[4919]: I0109 13:59:19.475342 4919 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e1540e3-6358-48ae-ac2a-08e90ab54cbb-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 13:59:19 crc kubenswrapper[4919]: I0109 13:59:19.475373 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9v42m\" (UniqueName: \"kubernetes.io/projected/2e1540e3-6358-48ae-ac2a-08e90ab54cbb-kube-api-access-9v42m\") on node \"crc\" DevicePath \"\"" Jan 09 13:59:19 crc kubenswrapper[4919]: I0109 13:59:19.475383 4919 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2e1540e3-6358-48ae-ac2a-08e90ab54cbb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 09 13:59:19 crc kubenswrapper[4919]: I0109 13:59:19.475392 4919 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2e1540e3-6358-48ae-ac2a-08e90ab54cbb-inventory\") on node \"crc\" DevicePath \"\"" Jan 09 13:59:19 crc kubenswrapper[4919]: I0109 13:59:19.864883 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k54b2" event={"ID":"2e1540e3-6358-48ae-ac2a-08e90ab54cbb","Type":"ContainerDied","Data":"dcced499fcee0b8b850ee452f8c82d62ebb2272084b3b2f4a558ace01ac9e16c"} Jan 09 13:59:19 crc kubenswrapper[4919]: I0109 13:59:19.864922 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcced499fcee0b8b850ee452f8c82d62ebb2272084b3b2f4a558ace01ac9e16c" Jan 09 13:59:19 crc kubenswrapper[4919]: I0109 13:59:19.864978 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k54b2" Jan 09 13:59:19 crc kubenswrapper[4919]: I0109 13:59:19.962035 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gdkgc"] Jan 09 13:59:19 crc kubenswrapper[4919]: E0109 13:59:19.962486 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1969176c-2e40-4b30-9364-994e7f6d99e2" containerName="registry-server" Jan 09 13:59:19 crc kubenswrapper[4919]: I0109 13:59:19.962510 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="1969176c-2e40-4b30-9364-994e7f6d99e2" containerName="registry-server" Jan 09 13:59:19 crc kubenswrapper[4919]: E0109 13:59:19.962524 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1969176c-2e40-4b30-9364-994e7f6d99e2" containerName="extract-content" Jan 09 13:59:19 crc kubenswrapper[4919]: I0109 13:59:19.962530 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="1969176c-2e40-4b30-9364-994e7f6d99e2" containerName="extract-content" Jan 09 13:59:19 crc kubenswrapper[4919]: E0109 13:59:19.962545 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1969176c-2e40-4b30-9364-994e7f6d99e2" containerName="extract-utilities" Jan 09 13:59:19 crc kubenswrapper[4919]: I0109 13:59:19.962551 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="1969176c-2e40-4b30-9364-994e7f6d99e2" containerName="extract-utilities" Jan 09 13:59:19 crc kubenswrapper[4919]: E0109 13:59:19.962581 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e1540e3-6358-48ae-ac2a-08e90ab54cbb" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 09 13:59:19 crc kubenswrapper[4919]: I0109 13:59:19.962589 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e1540e3-6358-48ae-ac2a-08e90ab54cbb" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 09 13:59:19 crc kubenswrapper[4919]: I0109 13:59:19.962776 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="1969176c-2e40-4b30-9364-994e7f6d99e2" containerName="registry-server" Jan 09 13:59:19 crc kubenswrapper[4919]: I0109 13:59:19.962807 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e1540e3-6358-48ae-ac2a-08e90ab54cbb" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 09 13:59:19 crc kubenswrapper[4919]: I0109 13:59:19.963486 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gdkgc" Jan 09 13:59:19 crc kubenswrapper[4919]: I0109 13:59:19.966351 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-69fb8" Jan 09 13:59:19 crc kubenswrapper[4919]: I0109 13:59:19.966615 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 09 13:59:19 crc kubenswrapper[4919]: I0109 13:59:19.966914 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 13:59:19 crc kubenswrapper[4919]: I0109 13:59:19.967451 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 09 13:59:19 crc kubenswrapper[4919]: I0109 13:59:19.976751 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gdkgc"] Jan 09 13:59:20 crc kubenswrapper[4919]: I0109 13:59:20.092565 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3004c02a-530a-44c4-98b4-825dbb64296f-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-gdkgc\" (UID: \"3004c02a-530a-44c4-98b4-825dbb64296f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gdkgc" Jan 09 13:59:20 crc kubenswrapper[4919]: I0109 13:59:20.092669 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29g7z\" (UniqueName: \"kubernetes.io/projected/3004c02a-530a-44c4-98b4-825dbb64296f-kube-api-access-29g7z\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-gdkgc\" (UID: \"3004c02a-530a-44c4-98b4-825dbb64296f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gdkgc" Jan 09 13:59:20 crc kubenswrapper[4919]: I0109 13:59:20.092698 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3004c02a-530a-44c4-98b4-825dbb64296f-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-gdkgc\" (UID: \"3004c02a-530a-44c4-98b4-825dbb64296f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gdkgc" Jan 09 13:59:20 crc kubenswrapper[4919]: I0109 13:59:20.194863 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29g7z\" (UniqueName: \"kubernetes.io/projected/3004c02a-530a-44c4-98b4-825dbb64296f-kube-api-access-29g7z\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-gdkgc\" (UID: \"3004c02a-530a-44c4-98b4-825dbb64296f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gdkgc" Jan 09 13:59:20 crc kubenswrapper[4919]: I0109 13:59:20.194917 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3004c02a-530a-44c4-98b4-825dbb64296f-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-gdkgc\" (UID: \"3004c02a-530a-44c4-98b4-825dbb64296f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gdkgc" Jan 09 13:59:20 crc kubenswrapper[4919]: I0109 13:59:20.195081 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/3004c02a-530a-44c4-98b4-825dbb64296f-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-gdkgc\" (UID: \"3004c02a-530a-44c4-98b4-825dbb64296f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gdkgc" Jan 09 13:59:20 crc kubenswrapper[4919]: I0109 13:59:20.199983 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3004c02a-530a-44c4-98b4-825dbb64296f-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-gdkgc\" (UID: \"3004c02a-530a-44c4-98b4-825dbb64296f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gdkgc" Jan 09 13:59:20 crc kubenswrapper[4919]: I0109 13:59:20.200383 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3004c02a-530a-44c4-98b4-825dbb64296f-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-gdkgc\" (UID: \"3004c02a-530a-44c4-98b4-825dbb64296f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gdkgc" Jan 09 13:59:20 crc kubenswrapper[4919]: I0109 13:59:20.211121 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29g7z\" (UniqueName: \"kubernetes.io/projected/3004c02a-530a-44c4-98b4-825dbb64296f-kube-api-access-29g7z\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-gdkgc\" (UID: \"3004c02a-530a-44c4-98b4-825dbb64296f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gdkgc" Jan 09 13:59:20 crc kubenswrapper[4919]: I0109 13:59:20.291694 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gdkgc" Jan 09 13:59:20 crc kubenswrapper[4919]: I0109 13:59:20.840115 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gdkgc"] Jan 09 13:59:20 crc kubenswrapper[4919]: I0109 13:59:20.876556 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gdkgc" event={"ID":"3004c02a-530a-44c4-98b4-825dbb64296f","Type":"ContainerStarted","Data":"744f512c1f7990e6dd859ef430c641248049dea2c97cb82d510a3e0c07611f7d"} Jan 09 13:59:21 crc kubenswrapper[4919]: I0109 13:59:21.403067 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 13:59:21 crc kubenswrapper[4919]: I0109 13:59:21.888521 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gdkgc" event={"ID":"3004c02a-530a-44c4-98b4-825dbb64296f","Type":"ContainerStarted","Data":"7a52f684c0429429a238e616b93240f35a5cc9010c93d06748f6082bc7fe7486"} Jan 09 13:59:21 crc kubenswrapper[4919]: I0109 13:59:21.910478 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gdkgc" podStartSLOduration=2.355378732 podStartE2EDuration="2.910455029s" podCreationTimestamp="2026-01-09 13:59:19 +0000 UTC" firstStartedPulling="2026-01-09 13:59:20.845238522 +0000 UTC m=+1740.393077972" lastFinishedPulling="2026-01-09 13:59:21.400314799 +0000 UTC m=+1740.948154269" observedRunningTime="2026-01-09 13:59:21.904488901 +0000 UTC m=+1741.452328351" watchObservedRunningTime="2026-01-09 13:59:21.910455029 +0000 UTC m=+1741.458294479" Jan 09 13:59:22 crc kubenswrapper[4919]: 
I0109 13:59:22.752267 4919 scope.go:117] "RemoveContainer" containerID="97d6f4470740f89d0ab801450db4dd244e16b55cda5a62a0b5edd2cd276ca373" Jan 09 13:59:22 crc kubenswrapper[4919]: E0109 13:59:22.752777 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 13:59:28 crc kubenswrapper[4919]: I0109 13:59:28.043286 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-fvgzp"] Jan 09 13:59:28 crc kubenswrapper[4919]: I0109 13:59:28.053267 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-fvgzp"] Jan 09 13:59:28 crc kubenswrapper[4919]: I0109 13:59:28.762490 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb00094e-7c2c-45a0-b671-5d62017a9949" path="/var/lib/kubelet/pods/cb00094e-7c2c-45a0-b671-5d62017a9949/volumes" Jan 09 13:59:35 crc kubenswrapper[4919]: I0109 13:59:35.752356 4919 scope.go:117] "RemoveContainer" containerID="97d6f4470740f89d0ab801450db4dd244e16b55cda5a62a0b5edd2cd276ca373" Jan 09 13:59:35 crc kubenswrapper[4919]: E0109 13:59:35.753068 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 13:59:41 crc kubenswrapper[4919]: I0109 13:59:41.435417 4919 scope.go:117] "RemoveContainer" containerID="3f57e4304f8a937d060148f586b1ee9049ce09f06925a79ce28be71fe674db31" Jan 09 13:59:41 crc kubenswrapper[4919]: I0109 13:59:41.465245 4919 scope.go:117] "RemoveContainer" containerID="0d1162bd1f721137f911208bd44b21b01d8830967bcd1a09377bd62b195657d7" Jan 09 13:59:41 crc kubenswrapper[4919]: I0109 13:59:41.518774 4919 scope.go:117] "RemoveContainer" containerID="8f151431ca255d2c6528d9ba76af0fc54cfcf4ab5f8a1e66bff6682d28fb8fe1" Jan 09 13:59:41 crc kubenswrapper[4919]: I0109 13:59:41.565033 4919 scope.go:117] "RemoveContainer" containerID="69dd38811a1173c5c197d07abe7e1bbff59c4c62832138201846f5d4382975c0" Jan 09 13:59:41 crc kubenswrapper[4919]: I0109 13:59:41.612167 4919 scope.go:117] "RemoveContainer" containerID="785535cb288350778e32b34086653bb3a8c95948864a1d79155aae0821a687fd" Jan 09 13:59:41 crc kubenswrapper[4919]: I0109 13:59:41.664176 4919 scope.go:117] "RemoveContainer" containerID="f172beb8557aba10f75eab2d763cd23e6975146759bb22c52abbcd7b15cc1f89" Jan 09 13:59:41 crc kubenswrapper[4919]: I0109 13:59:41.711537 4919 scope.go:117] "RemoveContainer" containerID="5d0022ac857bc93d078cd1306472d3c440f2053e8cbaee0bf8a8f6f0da1eee88" Jan 09 13:59:46 crc kubenswrapper[4919]: I0109 13:59:46.751845 4919 scope.go:117] "RemoveContainer" containerID="97d6f4470740f89d0ab801450db4dd244e16b55cda5a62a0b5edd2cd276ca373" Jan 09 13:59:46 crc kubenswrapper[4919]: E0109 13:59:46.752557 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 13:59:51 crc kubenswrapper[4919]: I0109 13:59:51.044829 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-wsjrx"] Jan 09 13:59:51 crc kubenswrapper[4919]: I0109 13:59:51.055288 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-wsjrx"] Jan 09 13:59:52 crc kubenswrapper[4919]: I0109 13:59:52.763659 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15490209-86af-4f77-8103-27d097279b7d" path="/var/lib/kubelet/pods/15490209-86af-4f77-8103-27d097279b7d/volumes" Jan 09 13:59:53 crc kubenswrapper[4919]: I0109 13:59:53.034052 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-3d0f-account-create-update-bzj6x"] Jan 09 13:59:53 crc kubenswrapper[4919]: I0109 13:59:53.046159 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-0d0f-account-create-update-4tkms"] Jan 09 13:59:53 crc kubenswrapper[4919]: I0109 13:59:53.060087 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-22wxq"] Jan 09 13:59:53 crc kubenswrapper[4919]: I0109 13:59:53.072559 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-sl6nb"] Jan 09 13:59:53 crc kubenswrapper[4919]: I0109 13:59:53.080724 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-22wxq"] Jan 09 13:59:53 crc kubenswrapper[4919]: I0109 13:59:53.088944 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-3d0f-account-create-update-bzj6x"] Jan 09 13:59:53 crc kubenswrapper[4919]: I0109 13:59:53.097091 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-0d0f-account-create-update-4tkms"] Jan 09 13:59:53 crc kubenswrapper[4919]: I0109 13:59:53.105341 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-sl6nb"] Jan 09 13:59:53 crc kubenswrapper[4919]: I0109 13:59:53.113102 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-c29b-account-create-update-whsc2"] Jan 09 13:59:53 crc kubenswrapper[4919]: I0109 13:59:53.122261 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-kfw6n"] Jan 09 13:59:53 crc kubenswrapper[4919]: I0109 13:59:53.129760 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-c29b-account-create-update-whsc2"] Jan 09 13:59:53 crc kubenswrapper[4919]: I0109 13:59:53.140440 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-kfw6n"] Jan 09 13:59:54 crc kubenswrapper[4919]: I0109 13:59:54.761725 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="167038ac-1986-4fec-ae8e-98807f212a49" path="/var/lib/kubelet/pods/167038ac-1986-4fec-ae8e-98807f212a49/volumes" Jan 09 13:59:54 crc kubenswrapper[4919]: I0109 13:59:54.762834 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="214a0432-7622-45a7-b693-f5aea45623e7" path="/var/lib/kubelet/pods/214a0432-7622-45a7-b693-f5aea45623e7/volumes" Jan 09 13:59:54 crc kubenswrapper[4919]: I0109 13:59:54.763393 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48014bf6-50e6-407c-8fca-bd2949ad791c" 
path="/var/lib/kubelet/pods/48014bf6-50e6-407c-8fca-bd2949ad791c/volumes" Jan 09 13:59:54 crc kubenswrapper[4919]: I0109 13:59:54.763957 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a3e0515-9960-40e4-a938-6166810db59e" path="/var/lib/kubelet/pods/8a3e0515-9960-40e4-a938-6166810db59e/volumes" Jan 09 13:59:54 crc kubenswrapper[4919]: I0109 13:59:54.765049 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d480a45-ff2a-4672-bbf4-05a8a397b34a" path="/var/lib/kubelet/pods/9d480a45-ff2a-4672-bbf4-05a8a397b34a/volumes" Jan 09 13:59:54 crc kubenswrapper[4919]: I0109 13:59:54.765594 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e67b8129-4a9a-4459-b85d-45ae30ad425e" path="/var/lib/kubelet/pods/e67b8129-4a9a-4459-b85d-45ae30ad425e/volumes" Jan 09 13:59:58 crc kubenswrapper[4919]: I0109 13:59:58.752086 4919 scope.go:117] "RemoveContainer" containerID="97d6f4470740f89d0ab801450db4dd244e16b55cda5a62a0b5edd2cd276ca373" Jan 09 13:59:58 crc kubenswrapper[4919]: E0109 13:59:58.752626 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:00:00 crc kubenswrapper[4919]: I0109 14:00:00.149071 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29466120-8w44k"] Jan 09 14:00:00 crc kubenswrapper[4919]: I0109 14:00:00.151523 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29466120-8w44k" Jan 09 14:00:00 crc kubenswrapper[4919]: I0109 14:00:00.154659 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 09 14:00:00 crc kubenswrapper[4919]: I0109 14:00:00.154769 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 09 14:00:00 crc kubenswrapper[4919]: I0109 14:00:00.159392 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29466120-8w44k"] Jan 09 14:00:00 crc kubenswrapper[4919]: I0109 14:00:00.324571 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e33f5fbd-40cf-4172-9bc7-013d8f2aecac-config-volume\") pod \"collect-profiles-29466120-8w44k\" (UID: \"e33f5fbd-40cf-4172-9bc7-013d8f2aecac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466120-8w44k" Jan 09 14:00:00 crc kubenswrapper[4919]: I0109 14:00:00.324691 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e33f5fbd-40cf-4172-9bc7-013d8f2aecac-secret-volume\") pod \"collect-profiles-29466120-8w44k\" (UID: \"e33f5fbd-40cf-4172-9bc7-013d8f2aecac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466120-8w44k" Jan 09 14:00:00 crc kubenswrapper[4919]: I0109 14:00:00.324726 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ghhs\" (UniqueName: \"kubernetes.io/projected/e33f5fbd-40cf-4172-9bc7-013d8f2aecac-kube-api-access-5ghhs\") pod \"collect-profiles-29466120-8w44k\" (UID: \"e33f5fbd-40cf-4172-9bc7-013d8f2aecac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466120-8w44k" Jan 09 14:00:00 crc kubenswrapper[4919]: I0109 14:00:00.426634 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e33f5fbd-40cf-4172-9bc7-013d8f2aecac-config-volume\") pod \"collect-profiles-29466120-8w44k\" (UID: \"e33f5fbd-40cf-4172-9bc7-013d8f2aecac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466120-8w44k" Jan 09 14:00:00 crc kubenswrapper[4919]: I0109 14:00:00.426752 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e33f5fbd-40cf-4172-9bc7-013d8f2aecac-secret-volume\") pod \"collect-profiles-29466120-8w44k\" (UID: \"e33f5fbd-40cf-4172-9bc7-013d8f2aecac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466120-8w44k" Jan 09 14:00:00 crc kubenswrapper[4919]: I0109 14:00:00.426809 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ghhs\" (UniqueName: \"kubernetes.io/projected/e33f5fbd-40cf-4172-9bc7-013d8f2aecac-kube-api-access-5ghhs\") pod \"collect-profiles-29466120-8w44k\" (UID: \"e33f5fbd-40cf-4172-9bc7-013d8f2aecac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466120-8w44k" Jan 09 14:00:00 crc kubenswrapper[4919]: I0109 14:00:00.427915 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e33f5fbd-40cf-4172-9bc7-013d8f2aecac-config-volume\") pod 
\"collect-profiles-29466120-8w44k\" (UID: \"e33f5fbd-40cf-4172-9bc7-013d8f2aecac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466120-8w44k" Jan 09 14:00:00 crc kubenswrapper[4919]: I0109 14:00:00.433259 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e33f5fbd-40cf-4172-9bc7-013d8f2aecac-secret-volume\") pod \"collect-profiles-29466120-8w44k\" (UID: \"e33f5fbd-40cf-4172-9bc7-013d8f2aecac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466120-8w44k" Jan 09 14:00:00 crc kubenswrapper[4919]: I0109 14:00:00.445250 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ghhs\" (UniqueName: \"kubernetes.io/projected/e33f5fbd-40cf-4172-9bc7-013d8f2aecac-kube-api-access-5ghhs\") pod \"collect-profiles-29466120-8w44k\" (UID: \"e33f5fbd-40cf-4172-9bc7-013d8f2aecac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466120-8w44k" Jan 09 14:00:00 crc kubenswrapper[4919]: I0109 14:00:00.477641 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29466120-8w44k" Jan 09 14:00:00 crc kubenswrapper[4919]: I0109 14:00:00.902016 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29466120-8w44k"] Jan 09 14:00:01 crc kubenswrapper[4919]: I0109 14:00:01.323375 4919 generic.go:334] "Generic (PLEG): container finished" podID="e33f5fbd-40cf-4172-9bc7-013d8f2aecac" containerID="cb7d670e2bd79020760b12cbbbdf3b223680cc4d93b68c85d052721a667cdc8c" exitCode=0 Jan 09 14:00:01 crc kubenswrapper[4919]: I0109 14:00:01.323476 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29466120-8w44k" event={"ID":"e33f5fbd-40cf-4172-9bc7-013d8f2aecac","Type":"ContainerDied","Data":"cb7d670e2bd79020760b12cbbbdf3b223680cc4d93b68c85d052721a667cdc8c"} Jan 09 14:00:01 crc kubenswrapper[4919]: I0109 14:00:01.323745 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29466120-8w44k" event={"ID":"e33f5fbd-40cf-4172-9bc7-013d8f2aecac","Type":"ContainerStarted","Data":"4f9e64e3feaf42538fce2305b21f1993bc06801019ffe24476b767a8c626d755"} Jan 09 14:00:02 crc kubenswrapper[4919]: I0109 14:00:02.662008 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29466120-8w44k" Jan 09 14:00:02 crc kubenswrapper[4919]: I0109 14:00:02.772284 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e33f5fbd-40cf-4172-9bc7-013d8f2aecac-secret-volume\") pod \"e33f5fbd-40cf-4172-9bc7-013d8f2aecac\" (UID: \"e33f5fbd-40cf-4172-9bc7-013d8f2aecac\") " Jan 09 14:00:02 crc kubenswrapper[4919]: I0109 14:00:02.772431 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e33f5fbd-40cf-4172-9bc7-013d8f2aecac-config-volume\") pod \"e33f5fbd-40cf-4172-9bc7-013d8f2aecac\" (UID: \"e33f5fbd-40cf-4172-9bc7-013d8f2aecac\") " Jan 09 14:00:02 crc kubenswrapper[4919]: I0109 14:00:02.772754 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ghhs\" (UniqueName: \"kubernetes.io/projected/e33f5fbd-40cf-4172-9bc7-013d8f2aecac-kube-api-access-5ghhs\") pod \"e33f5fbd-40cf-4172-9bc7-013d8f2aecac\" (UID: \"e33f5fbd-40cf-4172-9bc7-013d8f2aecac\") " Jan 09 14:00:02 crc kubenswrapper[4919]: I0109 14:00:02.773192 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e33f5fbd-40cf-4172-9bc7-013d8f2aecac-config-volume" (OuterVolumeSpecName: "config-volume") pod "e33f5fbd-40cf-4172-9bc7-013d8f2aecac" (UID: "e33f5fbd-40cf-4172-9bc7-013d8f2aecac"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 14:00:02 crc kubenswrapper[4919]: I0109 14:00:02.777894 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e33f5fbd-40cf-4172-9bc7-013d8f2aecac-kube-api-access-5ghhs" (OuterVolumeSpecName: "kube-api-access-5ghhs") pod "e33f5fbd-40cf-4172-9bc7-013d8f2aecac" (UID: "e33f5fbd-40cf-4172-9bc7-013d8f2aecac"). InnerVolumeSpecName "kube-api-access-5ghhs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 14:00:02 crc kubenswrapper[4919]: I0109 14:00:02.778526 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e33f5fbd-40cf-4172-9bc7-013d8f2aecac-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e33f5fbd-40cf-4172-9bc7-013d8f2aecac" (UID: "e33f5fbd-40cf-4172-9bc7-013d8f2aecac"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 14:00:02 crc kubenswrapper[4919]: I0109 14:00:02.874862 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ghhs\" (UniqueName: \"kubernetes.io/projected/e33f5fbd-40cf-4172-9bc7-013d8f2aecac-kube-api-access-5ghhs\") on node \"crc\" DevicePath \"\"" Jan 09 14:00:02 crc kubenswrapper[4919]: I0109 14:00:02.874902 4919 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e33f5fbd-40cf-4172-9bc7-013d8f2aecac-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 09 14:00:02 crc kubenswrapper[4919]: I0109 14:00:02.874918 4919 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e33f5fbd-40cf-4172-9bc7-013d8f2aecac-config-volume\") on node \"crc\" DevicePath \"\"" Jan 09 14:00:03 crc kubenswrapper[4919]: I0109 14:00:03.345600 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29466120-8w44k" event={"ID":"e33f5fbd-40cf-4172-9bc7-013d8f2aecac","Type":"ContainerDied","Data":"4f9e64e3feaf42538fce2305b21f1993bc06801019ffe24476b767a8c626d755"} Jan 09 14:00:03 crc kubenswrapper[4919]: I0109 14:00:03.345631 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29466120-8w44k" Jan 09 14:00:03 crc kubenswrapper[4919]: I0109 14:00:03.345646 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f9e64e3feaf42538fce2305b21f1993bc06801019ffe24476b767a8c626d755" Jan 09 14:00:04 crc kubenswrapper[4919]: I0109 14:00:04.039857 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-r7hqg"] Jan 09 14:00:04 crc kubenswrapper[4919]: I0109 14:00:04.050952 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-r7hqg"] Jan 09 14:00:04 crc kubenswrapper[4919]: I0109 14:00:04.762935 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15f289e5-b950-489a-8207-5b340be14c0e" path="/var/lib/kubelet/pods/15f289e5-b950-489a-8207-5b340be14c0e/volumes" Jan 09 14:00:09 crc kubenswrapper[4919]: I0109 14:00:09.752870 4919 scope.go:117] "RemoveContainer" containerID="97d6f4470740f89d0ab801450db4dd244e16b55cda5a62a0b5edd2cd276ca373" Jan 09 14:00:09 crc kubenswrapper[4919]: E0109 14:00:09.753849 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:00:23 crc kubenswrapper[4919]: I0109 14:00:23.751922 4919 scope.go:117] "RemoveContainer" containerID="97d6f4470740f89d0ab801450db4dd244e16b55cda5a62a0b5edd2cd276ca373" Jan 09 14:00:23 crc kubenswrapper[4919]: E0109 14:00:23.752656 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" 
Jan 09 14:00:37 crc kubenswrapper[4919]: I0109 14:00:37.751805 4919 scope.go:117] "RemoveContainer" containerID="97d6f4470740f89d0ab801450db4dd244e16b55cda5a62a0b5edd2cd276ca373"
Jan 09 14:00:37 crc kubenswrapper[4919]: E0109 14:00:37.752550 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c"
Jan 09 14:00:38 crc kubenswrapper[4919]: I0109 14:00:38.045122 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-dr79l"]
Jan 09 14:00:38 crc kubenswrapper[4919]: I0109 14:00:38.053790 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-dr79l"]
Jan 09 14:00:38 crc kubenswrapper[4919]: I0109 14:00:38.764263 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a26e4dbc-6f44-4723-a81b-7bd05ca1283b" path="/var/lib/kubelet/pods/a26e4dbc-6f44-4723-a81b-7bd05ca1283b/volumes"
Jan 09 14:00:41 crc kubenswrapper[4919]: I0109 14:00:41.936282 4919 scope.go:117] "RemoveContainer" containerID="dbd2a2bdc58241cb9be3f331afbd1f293e628991b55d61b9e0aca7e77ad79ad0"
Jan 09 14:00:41 crc kubenswrapper[4919]: I0109 14:00:41.975997 4919 scope.go:117] "RemoveContainer" containerID="32e59b2af8a5c14138c785f272069d098c006143a8cfb7c0136747704e10696b"
Jan 09 14:00:42 crc kubenswrapper[4919]: I0109 14:00:42.003106 4919 scope.go:117] "RemoveContainer" containerID="8ddc5dfe271364c57400e27600a828c70e8cfcd4e51a2e2cd5364d9531896ab2"
Jan 09 14:00:42 crc kubenswrapper[4919]: I0109 14:00:42.052000 4919 scope.go:117] "RemoveContainer" containerID="b0fdb1ebe025da04446a4bb7b5505bb79f5bd57e91d575bbd6b63cded83049e8"
Jan 09 14:00:42 crc kubenswrapper[4919]: I0109 14:00:42.108665 4919 scope.go:117] "RemoveContainer" containerID="d95ed877aa23f256c846188c9ee6793f1e6c3399af3395a333cac7b29cc5e94a"
Jan 09 14:00:42 crc kubenswrapper[4919]: I0109 14:00:42.189895 4919 scope.go:117] "RemoveContainer" containerID="d9959a3c0d1479bc458e344a5d27a2ed1fd84088d28c3a15200dba329e6d8ca1"
Jan 09 14:00:42 crc kubenswrapper[4919]: I0109 14:00:42.228724 4919 scope.go:117] "RemoveContainer" containerID="58cabe75a8d1fd1bd07963b8759e6b024bb1922ce4f4d49a6bf51313bb19676d"
Jan 09 14:00:42 crc kubenswrapper[4919]: I0109 14:00:42.271683 4919 scope.go:117] "RemoveContainer" containerID="8c19d822b0f883e5ca46e5d101df09d756d76d8ac4747c87e95d6d872ee8302a"
Jan 09 14:00:42 crc kubenswrapper[4919]: I0109 14:00:42.293480 4919 scope.go:117] "RemoveContainer" containerID="f5a76717a74b6910494ded40567c72acf3a245e327888a7d6e4c26c180117995"
Jan 09 14:00:44 crc kubenswrapper[4919]: I0109 14:00:44.032639 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-qvmd7"]
Jan 09 14:00:44 crc kubenswrapper[4919]: I0109 14:00:44.046476 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-qvmd7"]
Jan 09 14:00:44 crc kubenswrapper[4919]: I0109 14:00:44.794488 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd2e6850-0b12-460b-9da8-56a74f4324f3" path="/var/lib/kubelet/pods/fd2e6850-0b12-460b-9da8-56a74f4324f3/volumes"
Jan 09 14:00:52 crc kubenswrapper[4919]: I0109 14:00:52.039037 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-425dh"]
Jan 09 14:00:52 crc kubenswrapper[4919]: I0109 14:00:52.049755 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-425dh"]
Jan 09 14:00:52 crc kubenswrapper[4919]: I0109 14:00:52.751291 4919 scope.go:117] "RemoveContainer" containerID="97d6f4470740f89d0ab801450db4dd244e16b55cda5a62a0b5edd2cd276ca373"
Jan 09 14:00:52 crc kubenswrapper[4919]: E0109 14:00:52.751606 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c"
Jan 09 14:00:52 crc kubenswrapper[4919]: I0109 14:00:52.764184 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93e28fcf-1c97-40cf-bcdc-d63d2af19499" path="/var/lib/kubelet/pods/93e28fcf-1c97-40cf-bcdc-d63d2af19499/volumes"
Jan 09 14:01:00 crc kubenswrapper[4919]: I0109 14:01:00.145540 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29466121-rx8p6"]
Jan 09 14:01:00 crc kubenswrapper[4919]: E0109 14:01:00.146481 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e33f5fbd-40cf-4172-9bc7-013d8f2aecac" containerName="collect-profiles"
Jan 09 14:01:00 crc kubenswrapper[4919]: I0109 14:01:00.146496 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="e33f5fbd-40cf-4172-9bc7-013d8f2aecac" containerName="collect-profiles"
Jan 09 14:01:00 crc kubenswrapper[4919]: I0109 14:01:00.146687 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="e33f5fbd-40cf-4172-9bc7-013d8f2aecac" containerName="collect-profiles"
Need to start a new one" pod="openstack/keystone-cron-29466121-rx8p6" Jan 09 14:01:00 crc kubenswrapper[4919]: I0109 14:01:00.161281 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29466121-rx8p6"] Jan 09 14:01:00 crc kubenswrapper[4919]: I0109 14:01:00.250890 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e8fd615e-ac5c-4caa-8eaf-5c99df3fa111-fernet-keys\") pod \"keystone-cron-29466121-rx8p6\" (UID: \"e8fd615e-ac5c-4caa-8eaf-5c99df3fa111\") " pod="openstack/keystone-cron-29466121-rx8p6" Jan 09 14:01:00 crc kubenswrapper[4919]: I0109 14:01:00.251300 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8fd615e-ac5c-4caa-8eaf-5c99df3fa111-config-data\") pod \"keystone-cron-29466121-rx8p6\" (UID: \"e8fd615e-ac5c-4caa-8eaf-5c99df3fa111\") " pod="openstack/keystone-cron-29466121-rx8p6" Jan 09 14:01:00 crc kubenswrapper[4919]: I0109 14:01:00.251465 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l56sc\" (UniqueName: \"kubernetes.io/projected/e8fd615e-ac5c-4caa-8eaf-5c99df3fa111-kube-api-access-l56sc\") pod \"keystone-cron-29466121-rx8p6\" (UID: \"e8fd615e-ac5c-4caa-8eaf-5c99df3fa111\") " pod="openstack/keystone-cron-29466121-rx8p6" Jan 09 14:01:00 crc kubenswrapper[4919]: I0109 14:01:00.251753 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8fd615e-ac5c-4caa-8eaf-5c99df3fa111-combined-ca-bundle\") pod \"keystone-cron-29466121-rx8p6\" (UID: \"e8fd615e-ac5c-4caa-8eaf-5c99df3fa111\") " pod="openstack/keystone-cron-29466121-rx8p6" Jan 09 14:01:00 crc kubenswrapper[4919]: I0109 14:01:00.353925 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8fd615e-ac5c-4caa-8eaf-5c99df3fa111-combined-ca-bundle\") pod \"keystone-cron-29466121-rx8p6\" (UID: \"e8fd615e-ac5c-4caa-8eaf-5c99df3fa111\") " pod="openstack/keystone-cron-29466121-rx8p6" Jan 09 14:01:00 crc kubenswrapper[4919]: I0109 14:01:00.354010 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e8fd615e-ac5c-4caa-8eaf-5c99df3fa111-fernet-keys\") pod \"keystone-cron-29466121-rx8p6\" (UID: \"e8fd615e-ac5c-4caa-8eaf-5c99df3fa111\") " pod="openstack/keystone-cron-29466121-rx8p6" Jan 09 14:01:00 crc kubenswrapper[4919]: I0109 14:01:00.354059 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8fd615e-ac5c-4caa-8eaf-5c99df3fa111-config-data\") pod \"keystone-cron-29466121-rx8p6\" (UID: \"e8fd615e-ac5c-4caa-8eaf-5c99df3fa111\") " pod="openstack/keystone-cron-29466121-rx8p6" Jan 09 14:01:00 crc kubenswrapper[4919]: I0109 14:01:00.354091 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l56sc\" (UniqueName: \"kubernetes.io/projected/e8fd615e-ac5c-4caa-8eaf-5c99df3fa111-kube-api-access-l56sc\") pod \"keystone-cron-29466121-rx8p6\" (UID: \"e8fd615e-ac5c-4caa-8eaf-5c99df3fa111\") " pod="openstack/keystone-cron-29466121-rx8p6" Jan 09 14:01:00 crc kubenswrapper[4919]: I0109 14:01:00.361773 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8fd615e-ac5c-4caa-8eaf-5c99df3fa111-config-data\") pod \"keystone-cron-29466121-rx8p6\" (UID: \"e8fd615e-ac5c-4caa-8eaf-5c99df3fa111\") " pod="openstack/keystone-cron-29466121-rx8p6" Jan 09 14:01:00 crc kubenswrapper[4919]: I0109 14:01:00.361824 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e8fd615e-ac5c-4caa-8eaf-5c99df3fa111-fernet-keys\") pod \"keystone-cron-29466121-rx8p6\" (UID: \"e8fd615e-ac5c-4caa-8eaf-5c99df3fa111\") " pod="openstack/keystone-cron-29466121-rx8p6" Jan 09 14:01:00 crc kubenswrapper[4919]: I0109 14:01:00.367146 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8fd615e-ac5c-4caa-8eaf-5c99df3fa111-combined-ca-bundle\") pod \"keystone-cron-29466121-rx8p6\" (UID: \"e8fd615e-ac5c-4caa-8eaf-5c99df3fa111\") " pod="openstack/keystone-cron-29466121-rx8p6" Jan 09 14:01:00 crc kubenswrapper[4919]: I0109 14:01:00.371820 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l56sc\" (UniqueName: \"kubernetes.io/projected/e8fd615e-ac5c-4caa-8eaf-5c99df3fa111-kube-api-access-l56sc\") pod \"keystone-cron-29466121-rx8p6\" (UID: \"e8fd615e-ac5c-4caa-8eaf-5c99df3fa111\") " pod="openstack/keystone-cron-29466121-rx8p6" Jan 09 14:01:00 crc kubenswrapper[4919]: I0109 14:01:00.471421 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29466121-rx8p6" Jan 09 14:01:00 crc kubenswrapper[4919]: I0109 14:01:00.923369 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29466121-rx8p6"] Jan 09 14:01:01 crc kubenswrapper[4919]: I0109 14:01:01.835122 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29466121-rx8p6" event={"ID":"e8fd615e-ac5c-4caa-8eaf-5c99df3fa111","Type":"ContainerStarted","Data":"a9dc39e2719dde4f72a2dfd6b29a936529cf6b88c31f71973668b1b8fb5d9783"} Jan 09 14:01:01 crc kubenswrapper[4919]: I0109 14:01:01.835864 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29466121-rx8p6" event={"ID":"e8fd615e-ac5c-4caa-8eaf-5c99df3fa111","Type":"ContainerStarted","Data":"511305fe84dca8433a3106b58de2aafc93713e8e3a30ee885d98c015b9b4c8a1"} Jan 09 14:01:01 crc kubenswrapper[4919]: I0109 14:01:01.863853 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29466121-rx8p6" podStartSLOduration=1.863828101 podStartE2EDuration="1.863828101s" podCreationTimestamp="2026-01-09 14:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 14:01:01.857896793 +0000 UTC m=+1841.405736243" watchObservedRunningTime="2026-01-09 14:01:01.863828101 +0000 UTC m=+1841.411667551" Jan 09 14:01:03 crc kubenswrapper[4919]: I0109 14:01:03.029756 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-9sb8m"] Jan 09 14:01:03 crc kubenswrapper[4919]: I0109 14:01:03.042045 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-9sb8m"] Jan 09 14:01:03 crc kubenswrapper[4919]: I0109 14:01:03.052072 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-vz5pd"] Jan 09 14:01:03 crc kubenswrapper[4919]: I0109 14:01:03.062552 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/cinder-db-sync-vz5pd"] Jan 09 14:01:03 crc kubenswrapper[4919]: I0109 14:01:03.857768 4919 generic.go:334] "Generic (PLEG): container finished" podID="e8fd615e-ac5c-4caa-8eaf-5c99df3fa111" containerID="a9dc39e2719dde4f72a2dfd6b29a936529cf6b88c31f71973668b1b8fb5d9783" exitCode=0 Jan 09 14:01:03 crc kubenswrapper[4919]: I0109 14:01:03.857847 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29466121-rx8p6" event={"ID":"e8fd615e-ac5c-4caa-8eaf-5c99df3fa111","Type":"ContainerDied","Data":"a9dc39e2719dde4f72a2dfd6b29a936529cf6b88c31f71973668b1b8fb5d9783"} Jan 09 14:01:04 crc kubenswrapper[4919]: I0109 14:01:04.768138 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a9f81fc-067d-404d-b104-bba333d3911a" path="/var/lib/kubelet/pods/0a9f81fc-067d-404d-b104-bba333d3911a/volumes" Jan 09 14:01:04 crc kubenswrapper[4919]: I0109 14:01:04.770062 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bec76c49-6c38-4168-ac7b-087460106d25" path="/var/lib/kubelet/pods/bec76c49-6c38-4168-ac7b-087460106d25/volumes" Jan 09 14:01:05 crc kubenswrapper[4919]: I0109 14:01:05.200190 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29466121-rx8p6" Jan 09 14:01:05 crc kubenswrapper[4919]: I0109 14:01:05.362997 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l56sc\" (UniqueName: \"kubernetes.io/projected/e8fd615e-ac5c-4caa-8eaf-5c99df3fa111-kube-api-access-l56sc\") pod \"e8fd615e-ac5c-4caa-8eaf-5c99df3fa111\" (UID: \"e8fd615e-ac5c-4caa-8eaf-5c99df3fa111\") " Jan 09 14:01:05 crc kubenswrapper[4919]: I0109 14:01:05.363073 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8fd615e-ac5c-4caa-8eaf-5c99df3fa111-config-data\") pod \"e8fd615e-ac5c-4caa-8eaf-5c99df3fa111\" (UID: \"e8fd615e-ac5c-4caa-8eaf-5c99df3fa111\") " Jan 09 14:01:05 crc kubenswrapper[4919]: I0109 14:01:05.363150 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e8fd615e-ac5c-4caa-8eaf-5c99df3fa111-fernet-keys\") pod \"e8fd615e-ac5c-4caa-8eaf-5c99df3fa111\" (UID: \"e8fd615e-ac5c-4caa-8eaf-5c99df3fa111\") " Jan 09 14:01:05 crc kubenswrapper[4919]: I0109 14:01:05.363385 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8fd615e-ac5c-4caa-8eaf-5c99df3fa111-combined-ca-bundle\") pod \"e8fd615e-ac5c-4caa-8eaf-5c99df3fa111\" (UID: \"e8fd615e-ac5c-4caa-8eaf-5c99df3fa111\") " Jan 09 14:01:05 crc kubenswrapper[4919]: I0109 14:01:05.368506 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8fd615e-ac5c-4caa-8eaf-5c99df3fa111-kube-api-access-l56sc" (OuterVolumeSpecName: "kube-api-access-l56sc") pod "e8fd615e-ac5c-4caa-8eaf-5c99df3fa111" (UID: "e8fd615e-ac5c-4caa-8eaf-5c99df3fa111"). InnerVolumeSpecName "kube-api-access-l56sc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 14:01:05 crc kubenswrapper[4919]: I0109 14:01:05.383877 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8fd615e-ac5c-4caa-8eaf-5c99df3fa111-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e8fd615e-ac5c-4caa-8eaf-5c99df3fa111" (UID: "e8fd615e-ac5c-4caa-8eaf-5c99df3fa111"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 14:01:05 crc kubenswrapper[4919]: I0109 14:01:05.392358 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8fd615e-ac5c-4caa-8eaf-5c99df3fa111-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e8fd615e-ac5c-4caa-8eaf-5c99df3fa111" (UID: "e8fd615e-ac5c-4caa-8eaf-5c99df3fa111"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 14:01:05 crc kubenswrapper[4919]: I0109 14:01:05.423523 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8fd615e-ac5c-4caa-8eaf-5c99df3fa111-config-data" (OuterVolumeSpecName: "config-data") pod "e8fd615e-ac5c-4caa-8eaf-5c99df3fa111" (UID: "e8fd615e-ac5c-4caa-8eaf-5c99df3fa111"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 14:01:05 crc kubenswrapper[4919]: I0109 14:01:05.466452 4919 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8fd615e-ac5c-4caa-8eaf-5c99df3fa111-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 14:01:05 crc kubenswrapper[4919]: I0109 14:01:05.466502 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l56sc\" (UniqueName: \"kubernetes.io/projected/e8fd615e-ac5c-4caa-8eaf-5c99df3fa111-kube-api-access-l56sc\") on node \"crc\" DevicePath \"\"" Jan 09 14:01:05 crc kubenswrapper[4919]: I0109 14:01:05.466718 4919 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8fd615e-ac5c-4caa-8eaf-5c99df3fa111-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 14:01:05 crc kubenswrapper[4919]: I0109 14:01:05.466729 4919 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e8fd615e-ac5c-4caa-8eaf-5c99df3fa111-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 09 14:01:05 crc kubenswrapper[4919]: I0109 14:01:05.876534 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29466121-rx8p6" event={"ID":"e8fd615e-ac5c-4caa-8eaf-5c99df3fa111","Type":"ContainerDied","Data":"511305fe84dca8433a3106b58de2aafc93713e8e3a30ee885d98c015b9b4c8a1"} Jan 09 14:01:05 crc kubenswrapper[4919]: I0109 14:01:05.877682 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="511305fe84dca8433a3106b58de2aafc93713e8e3a30ee885d98c015b9b4c8a1" Jan 09 14:01:05 crc kubenswrapper[4919]: I0109 14:01:05.876834 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29466121-rx8p6" Jan 09 14:01:07 crc kubenswrapper[4919]: I0109 14:01:07.752061 4919 scope.go:117] "RemoveContainer" containerID="97d6f4470740f89d0ab801450db4dd244e16b55cda5a62a0b5edd2cd276ca373" Jan 09 14:01:07 crc kubenswrapper[4919]: E0109 14:01:07.752431 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:01:21 crc kubenswrapper[4919]: I0109 14:01:21.752323 4919 scope.go:117] "RemoveContainer" containerID="97d6f4470740f89d0ab801450db4dd244e16b55cda5a62a0b5edd2cd276ca373" Jan 09 14:01:21 crc kubenswrapper[4919]: E0109 14:01:21.753191 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:01:26 crc kubenswrapper[4919]: I0109 14:01:26.035298 4919 generic.go:334] "Generic (PLEG): container finished" podID="3004c02a-530a-44c4-98b4-825dbb64296f" containerID="7a52f684c0429429a238e616b93240f35a5cc9010c93d06748f6082bc7fe7486" exitCode=0 Jan 09 14:01:26 crc kubenswrapper[4919]: I0109 14:01:26.035375 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gdkgc" event={"ID":"3004c02a-530a-44c4-98b4-825dbb64296f","Type":"ContainerDied","Data":"7a52f684c0429429a238e616b93240f35a5cc9010c93d06748f6082bc7fe7486"} Jan 09 14:01:27 crc kubenswrapper[4919]: I0109 14:01:27.454843 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gdkgc" Jan 09 14:01:27 crc kubenswrapper[4919]: I0109 14:01:27.513752 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3004c02a-530a-44c4-98b4-825dbb64296f-ssh-key-openstack-edpm-ipam\") pod \"3004c02a-530a-44c4-98b4-825dbb64296f\" (UID: \"3004c02a-530a-44c4-98b4-825dbb64296f\") " Jan 09 14:01:27 crc kubenswrapper[4919]: I0109 14:01:27.513883 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3004c02a-530a-44c4-98b4-825dbb64296f-inventory\") pod \"3004c02a-530a-44c4-98b4-825dbb64296f\" (UID: \"3004c02a-530a-44c4-98b4-825dbb64296f\") " Jan 09 14:01:27 crc kubenswrapper[4919]: I0109 14:01:27.514059 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29g7z\" (UniqueName: \"kubernetes.io/projected/3004c02a-530a-44c4-98b4-825dbb64296f-kube-api-access-29g7z\") pod \"3004c02a-530a-44c4-98b4-825dbb64296f\" (UID: \"3004c02a-530a-44c4-98b4-825dbb64296f\") " Jan 09 14:01:27 crc kubenswrapper[4919]: I0109 14:01:27.520448 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3004c02a-530a-44c4-98b4-825dbb64296f-kube-api-access-29g7z" (OuterVolumeSpecName: "kube-api-access-29g7z") pod "3004c02a-530a-44c4-98b4-825dbb64296f" (UID: "3004c02a-530a-44c4-98b4-825dbb64296f"). InnerVolumeSpecName "kube-api-access-29g7z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 14:01:27 crc kubenswrapper[4919]: I0109 14:01:27.540010 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3004c02a-530a-44c4-98b4-825dbb64296f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3004c02a-530a-44c4-98b4-825dbb64296f" (UID: "3004c02a-530a-44c4-98b4-825dbb64296f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 14:01:27 crc kubenswrapper[4919]: I0109 14:01:27.544316 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3004c02a-530a-44c4-98b4-825dbb64296f-inventory" (OuterVolumeSpecName: "inventory") pod "3004c02a-530a-44c4-98b4-825dbb64296f" (UID: "3004c02a-530a-44c4-98b4-825dbb64296f"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 14:01:27 crc kubenswrapper[4919]: I0109 14:01:27.617167 4919 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3004c02a-530a-44c4-98b4-825dbb64296f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 09 14:01:27 crc kubenswrapper[4919]: I0109 14:01:27.617205 4919 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3004c02a-530a-44c4-98b4-825dbb64296f-inventory\") on node \"crc\" DevicePath \"\"" Jan 09 14:01:27 crc kubenswrapper[4919]: I0109 14:01:27.617232 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29g7z\" (UniqueName: \"kubernetes.io/projected/3004c02a-530a-44c4-98b4-825dbb64296f-kube-api-access-29g7z\") on node \"crc\" DevicePath \"\"" Jan 09 14:01:28 crc kubenswrapper[4919]: I0109 14:01:28.056168 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gdkgc" event={"ID":"3004c02a-530a-44c4-98b4-825dbb64296f","Type":"ContainerDied","Data":"744f512c1f7990e6dd859ef430c641248049dea2c97cb82d510a3e0c07611f7d"} Jan 09 14:01:28 crc kubenswrapper[4919]: I0109 14:01:28.056228 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="744f512c1f7990e6dd859ef430c641248049dea2c97cb82d510a3e0c07611f7d" Jan 09 14:01:28 crc kubenswrapper[4919]: I0109 14:01:28.056259 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gdkgc" Jan 09 14:01:28 crc kubenswrapper[4919]: I0109 14:01:28.127667 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-m8pld"] Jan 09 14:01:28 crc kubenswrapper[4919]: E0109 14:01:28.128058 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8fd615e-ac5c-4caa-8eaf-5c99df3fa111" containerName="keystone-cron" Jan 09 14:01:28 crc kubenswrapper[4919]: I0109 14:01:28.128070 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8fd615e-ac5c-4caa-8eaf-5c99df3fa111" containerName="keystone-cron" Jan 09 14:01:28 crc kubenswrapper[4919]: E0109 14:01:28.128112 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3004c02a-530a-44c4-98b4-825dbb64296f" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 09 14:01:28 crc kubenswrapper[4919]: I0109 14:01:28.128119 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="3004c02a-530a-44c4-98b4-825dbb64296f" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 09 14:01:28 crc kubenswrapper[4919]: I0109 14:01:28.128308 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8fd615e-ac5c-4caa-8eaf-5c99df3fa111" containerName="keystone-cron" Jan 09 14:01:28 crc kubenswrapper[4919]: I0109 14:01:28.128337 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="3004c02a-530a-44c4-98b4-825dbb64296f" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 09 14:01:28 crc kubenswrapper[4919]: I0109 14:01:28.128974 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-m8pld" Jan 09 14:01:28 crc kubenswrapper[4919]: I0109 14:01:28.131956 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 09 14:01:28 crc kubenswrapper[4919]: I0109 14:01:28.132024 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 09 14:01:28 crc kubenswrapper[4919]: I0109 14:01:28.133297 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 14:01:28 crc kubenswrapper[4919]: I0109 14:01:28.133385 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-69fb8" Jan 09 14:01:28 crc kubenswrapper[4919]: I0109 14:01:28.136146 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-m8pld"] Jan 09 14:01:28 crc kubenswrapper[4919]: I0109 14:01:28.228855 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb6e93ba-4c74-4250-b8c2-fc85d52d6d3c-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-m8pld\" (UID: \"eb6e93ba-4c74-4250-b8c2-fc85d52d6d3c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-m8pld" Jan 09 14:01:28 crc kubenswrapper[4919]: I0109 14:01:28.228910 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eb6e93ba-4c74-4250-b8c2-fc85d52d6d3c-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-m8pld\" (UID: \"eb6e93ba-4c74-4250-b8c2-fc85d52d6d3c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-m8pld" Jan 09 14:01:28 crc kubenswrapper[4919]: I0109 14:01:28.229132 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjrpp\" (UniqueName: \"kubernetes.io/projected/eb6e93ba-4c74-4250-b8c2-fc85d52d6d3c-kube-api-access-vjrpp\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-m8pld\" (UID: \"eb6e93ba-4c74-4250-b8c2-fc85d52d6d3c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-m8pld" Jan 09 14:01:28 crc kubenswrapper[4919]: I0109 14:01:28.330975 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb6e93ba-4c74-4250-b8c2-fc85d52d6d3c-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-m8pld\" (UID: \"eb6e93ba-4c74-4250-b8c2-fc85d52d6d3c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-m8pld" Jan 09 14:01:28 crc kubenswrapper[4919]: I0109 14:01:28.331035 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eb6e93ba-4c74-4250-b8c2-fc85d52d6d3c-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-m8pld\" (UID: \"eb6e93ba-4c74-4250-b8c2-fc85d52d6d3c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-m8pld" Jan 09 14:01:28 crc kubenswrapper[4919]: I0109 14:01:28.331098 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjrpp\" (UniqueName: 
\"kubernetes.io/projected/eb6e93ba-4c74-4250-b8c2-fc85d52d6d3c-kube-api-access-vjrpp\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-m8pld\" (UID: \"eb6e93ba-4c74-4250-b8c2-fc85d52d6d3c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-m8pld" Jan 09 14:01:28 crc kubenswrapper[4919]: I0109 14:01:28.334544 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eb6e93ba-4c74-4250-b8c2-fc85d52d6d3c-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-m8pld\" (UID: \"eb6e93ba-4c74-4250-b8c2-fc85d52d6d3c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-m8pld" Jan 09 14:01:28 crc kubenswrapper[4919]: I0109 14:01:28.334615 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb6e93ba-4c74-4250-b8c2-fc85d52d6d3c-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-m8pld\" (UID: \"eb6e93ba-4c74-4250-b8c2-fc85d52d6d3c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-m8pld" Jan 09 14:01:28 crc kubenswrapper[4919]: I0109 14:01:28.348968 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjrpp\" (UniqueName: \"kubernetes.io/projected/eb6e93ba-4c74-4250-b8c2-fc85d52d6d3c-kube-api-access-vjrpp\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-m8pld\" (UID: \"eb6e93ba-4c74-4250-b8c2-fc85d52d6d3c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-m8pld" Jan 09 14:01:28 crc kubenswrapper[4919]: I0109 14:01:28.444842 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-m8pld" Jan 09 14:01:28 crc kubenswrapper[4919]: I0109 14:01:28.925626 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-m8pld"] Jan 09 14:01:29 crc kubenswrapper[4919]: I0109 14:01:29.065250 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-m8pld" event={"ID":"eb6e93ba-4c74-4250-b8c2-fc85d52d6d3c","Type":"ContainerStarted","Data":"2f13f3a4b3adbfa3236d0dc0385e470def94f4cf05faf69ec487c35a8e8e4c2d"} Jan 09 14:01:30 crc kubenswrapper[4919]: I0109 14:01:30.075244 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-m8pld" event={"ID":"eb6e93ba-4c74-4250-b8c2-fc85d52d6d3c","Type":"ContainerStarted","Data":"cd1ecd84ef803845fdd3c42d3bd2ad2c4431443f5214d6b2b80ba9297e5d86da"} Jan 09 14:01:30 crc kubenswrapper[4919]: I0109 14:01:30.089840 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-m8pld" podStartSLOduration=1.490647261 podStartE2EDuration="2.089814405s" podCreationTimestamp="2026-01-09 14:01:28 +0000 UTC" firstStartedPulling="2026-01-09 14:01:28.930155708 +0000 UTC m=+1868.477995158" lastFinishedPulling="2026-01-09 14:01:29.529322852 +0000 UTC m=+1869.077162302" observedRunningTime="2026-01-09 14:01:30.088145353 +0000 UTC m=+1869.635984813" watchObservedRunningTime="2026-01-09 14:01:30.089814405 +0000 UTC m=+1869.637653855" Jan 09 14:01:33 crc kubenswrapper[4919]: I0109 14:01:33.752451 4919 scope.go:117] "RemoveContainer" 
containerID="97d6f4470740f89d0ab801450db4dd244e16b55cda5a62a0b5edd2cd276ca373" Jan 09 14:01:33 crc kubenswrapper[4919]: E0109 14:01:33.753274 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:01:42 crc kubenswrapper[4919]: I0109 14:01:42.529479 4919 scope.go:117] "RemoveContainer" containerID="a57cd495eb14623d3434d4a3a0e51585d8dd21fdd1d577a2c61487d78f1465a7" Jan 09 14:01:42 crc kubenswrapper[4919]: I0109 14:01:42.568469 4919 scope.go:117] "RemoveContainer" containerID="df6daf2cbbff0faad419762c41563157d6bc7046e79029c50803e04a858dbbc8" Jan 09 14:01:42 crc kubenswrapper[4919]: I0109 14:01:42.646472 4919 scope.go:117] "RemoveContainer" containerID="7a478b94cf5b0b6db679856218fe283b1b21278f01c109914d4ae4d4c0f1c30a" Jan 09 14:01:42 crc kubenswrapper[4919]: I0109 14:01:42.707537 4919 scope.go:117] "RemoveContainer" containerID="b0ef61d3089ead87370e9d64df135e8ce258018b5271a18bbbf3ff9b807454b1" Jan 09 14:01:44 crc kubenswrapper[4919]: I0109 14:01:44.751796 4919 scope.go:117] "RemoveContainer" containerID="97d6f4470740f89d0ab801450db4dd244e16b55cda5a62a0b5edd2cd276ca373" Jan 09 14:01:44 crc kubenswrapper[4919]: E0109 14:01:44.752358 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:01:49 crc kubenswrapper[4919]: I0109 14:01:49.048565 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-bsnzf"] Jan 09 14:01:49 crc kubenswrapper[4919]: I0109 14:01:49.058389 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-1600-account-create-update-hvtzf"] Jan 09 14:01:49 crc kubenswrapper[4919]: I0109 14:01:49.067637 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-b660-account-create-update-b5bxk"] Jan 09 14:01:49 crc kubenswrapper[4919]: I0109 14:01:49.078128 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-1600-account-create-update-hvtzf"] Jan 09 14:01:49 crc kubenswrapper[4919]: I0109 14:01:49.086076 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-bsnzf"] Jan 09 14:01:49 crc kubenswrapper[4919]: I0109 14:01:49.094368 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-b660-account-create-update-b5bxk"] Jan 09 14:01:49 crc kubenswrapper[4919]: I0109 14:01:49.102513 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-td64x"] Jan 09 14:01:49 crc kubenswrapper[4919]: I0109 14:01:49.109773 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-td64x"] Jan 09 14:01:50 crc kubenswrapper[4919]: I0109 14:01:50.031398 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-dxcpg"] Jan 09 14:01:50 crc kubenswrapper[4919]: I0109 14:01:50.044584 4919 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-490e-account-create-update-lwv2k"] Jan 09 14:01:50 crc kubenswrapper[4919]: I0109 14:01:50.054780 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-dxcpg"] Jan 09 14:01:50 crc kubenswrapper[4919]: I0109 14:01:50.063122 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-490e-account-create-update-lwv2k"] Jan 09 14:01:50 crc kubenswrapper[4919]: I0109 14:01:50.766173 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f76563c-d515-4fdf-9011-6612ff2b5665" path="/var/lib/kubelet/pods/2f76563c-d515-4fdf-9011-6612ff2b5665/volumes" Jan 09 14:01:50 crc kubenswrapper[4919]: I0109 14:01:50.766835 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5333c11-0798-492a-862a-d6c9076a5fe6" path="/var/lib/kubelet/pods/d5333c11-0798-492a-862a-d6c9076a5fe6/volumes" Jan 09 14:01:50 crc kubenswrapper[4919]: I0109 14:01:50.767376 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0aa76ff-ed23-4978-8fe0-c0144d775a7a" path="/var/lib/kubelet/pods/e0aa76ff-ed23-4978-8fe0-c0144d775a7a/volumes" Jan 09 14:01:50 crc kubenswrapper[4919]: I0109 14:01:50.767924 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea5378b8-a527-4f7f-b55a-48590aae7ff1" path="/var/lib/kubelet/pods/ea5378b8-a527-4f7f-b55a-48590aae7ff1/volumes" Jan 09 14:01:50 crc kubenswrapper[4919]: I0109 14:01:50.769002 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0d18dc7-6c07-4aab-b06f-91137d1809b0" path="/var/lib/kubelet/pods/f0d18dc7-6c07-4aab-b06f-91137d1809b0/volumes" Jan 09 14:01:50 crc kubenswrapper[4919]: I0109 14:01:50.769578 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fba162e0-e000-4b80-8a7f-94699ad1c121" path="/var/lib/kubelet/pods/fba162e0-e000-4b80-8a7f-94699ad1c121/volumes" Jan 09 14:01:59 crc kubenswrapper[4919]: I0109 14:01:59.752628 4919 scope.go:117] "RemoveContainer" containerID="97d6f4470740f89d0ab801450db4dd244e16b55cda5a62a0b5edd2cd276ca373" Jan 09 14:02:00 crc kubenswrapper[4919]: I0109 14:02:00.353637 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" event={"ID":"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c","Type":"ContainerStarted","Data":"08b1c11299df27ad98a3ca953b44e2744c53ddd036341f81b00480965189197d"} Jan 09 14:02:26 crc kubenswrapper[4919]: I0109 14:02:26.045390 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rb9m6"] Jan 09 14:02:26 crc kubenswrapper[4919]: I0109 14:02:26.055596 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rb9m6"] Jan 09 14:02:26 crc kubenswrapper[4919]: I0109 14:02:26.763526 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7dc0da4-2c09-43c9-bc81-ff75106ce3c7" path="/var/lib/kubelet/pods/b7dc0da4-2c09-43c9-bc81-ff75106ce3c7/volumes" Jan 09 14:02:42 crc kubenswrapper[4919]: I0109 14:02:42.713586 4919 generic.go:334] "Generic (PLEG): container finished" podID="eb6e93ba-4c74-4250-b8c2-fc85d52d6d3c" containerID="cd1ecd84ef803845fdd3c42d3bd2ad2c4431443f5214d6b2b80ba9297e5d86da" exitCode=0 Jan 09 14:02:42 crc kubenswrapper[4919]: I0109 14:02:42.713739 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-m8pld" 
event={"ID":"eb6e93ba-4c74-4250-b8c2-fc85d52d6d3c","Type":"ContainerDied","Data":"cd1ecd84ef803845fdd3c42d3bd2ad2c4431443f5214d6b2b80ba9297e5d86da"} Jan 09 14:02:42 crc kubenswrapper[4919]: I0109 14:02:42.818598 4919 scope.go:117] "RemoveContainer" containerID="6711e5e1fab242feb1d2916c90800292edf2c3e009f05f085197bce499324ec0" Jan 09 14:02:42 crc kubenswrapper[4919]: I0109 14:02:42.863681 4919 scope.go:117] "RemoveContainer" containerID="9e77fda6b192837f6b1440b756268f5eca3b103d38e78ae8709ffd42ec4cf6f4" Jan 09 14:02:42 crc kubenswrapper[4919]: I0109 14:02:42.919772 4919 scope.go:117] "RemoveContainer" containerID="0c6be3d93024838ec9d2200c3eb1dcb89b5da60928d8ff7910e89c5ddebd5334" Jan 09 14:02:42 crc kubenswrapper[4919]: I0109 14:02:42.974154 4919 scope.go:117] "RemoveContainer" containerID="302c4ffbe73f0b7e012dcf6e0e5022508d22d1c34d7d71ba6171b353a4aaa517" Jan 09 14:02:43 crc kubenswrapper[4919]: I0109 14:02:43.015024 4919 scope.go:117] "RemoveContainer" containerID="2832a878eac12229938b3d5f9d5d660a40ae2a6dbe1ed905d39e74eed5bd3d35" Jan 09 14:02:43 crc kubenswrapper[4919]: I0109 14:02:43.056512 4919 scope.go:117] "RemoveContainer" containerID="d97fa5b1b0175126fee6569134ed337b4981072484c0fff138e95d0067cbd0c0" Jan 09 14:02:43 crc kubenswrapper[4919]: I0109 14:02:43.099707 4919 scope.go:117] "RemoveContainer" containerID="e641d4986d457062131c03c85466779c9a0d0deeab44195975d59efb0b697668" Jan 09 14:02:44 crc kubenswrapper[4919]: I0109 14:02:44.109615 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-m8pld" Jan 09 14:02:44 crc kubenswrapper[4919]: I0109 14:02:44.255515 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjrpp\" (UniqueName: \"kubernetes.io/projected/eb6e93ba-4c74-4250-b8c2-fc85d52d6d3c-kube-api-access-vjrpp\") pod \"eb6e93ba-4c74-4250-b8c2-fc85d52d6d3c\" (UID: \"eb6e93ba-4c74-4250-b8c2-fc85d52d6d3c\") " Jan 09 14:02:44 crc kubenswrapper[4919]: I0109 14:02:44.255819 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eb6e93ba-4c74-4250-b8c2-fc85d52d6d3c-ssh-key-openstack-edpm-ipam\") pod \"eb6e93ba-4c74-4250-b8c2-fc85d52d6d3c\" (UID: \"eb6e93ba-4c74-4250-b8c2-fc85d52d6d3c\") " Jan 09 14:02:44 crc kubenswrapper[4919]: I0109 14:02:44.256059 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb6e93ba-4c74-4250-b8c2-fc85d52d6d3c-inventory\") pod \"eb6e93ba-4c74-4250-b8c2-fc85d52d6d3c\" (UID: \"eb6e93ba-4c74-4250-b8c2-fc85d52d6d3c\") " Jan 09 14:02:44 crc kubenswrapper[4919]: I0109 14:02:44.262172 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb6e93ba-4c74-4250-b8c2-fc85d52d6d3c-kube-api-access-vjrpp" (OuterVolumeSpecName: "kube-api-access-vjrpp") pod "eb6e93ba-4c74-4250-b8c2-fc85d52d6d3c" (UID: "eb6e93ba-4c74-4250-b8c2-fc85d52d6d3c"). InnerVolumeSpecName "kube-api-access-vjrpp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 14:02:44 crc kubenswrapper[4919]: I0109 14:02:44.283516 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb6e93ba-4c74-4250-b8c2-fc85d52d6d3c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "eb6e93ba-4c74-4250-b8c2-fc85d52d6d3c" (UID: "eb6e93ba-4c74-4250-b8c2-fc85d52d6d3c"). 
InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 14:02:44 crc kubenswrapper[4919]: I0109 14:02:44.287289 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb6e93ba-4c74-4250-b8c2-fc85d52d6d3c-inventory" (OuterVolumeSpecName: "inventory") pod "eb6e93ba-4c74-4250-b8c2-fc85d52d6d3c" (UID: "eb6e93ba-4c74-4250-b8c2-fc85d52d6d3c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 14:02:44 crc kubenswrapper[4919]: I0109 14:02:44.358742 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjrpp\" (UniqueName: \"kubernetes.io/projected/eb6e93ba-4c74-4250-b8c2-fc85d52d6d3c-kube-api-access-vjrpp\") on node \"crc\" DevicePath \"\"" Jan 09 14:02:44 crc kubenswrapper[4919]: I0109 14:02:44.358774 4919 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eb6e93ba-4c74-4250-b8c2-fc85d52d6d3c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 09 14:02:44 crc kubenswrapper[4919]: I0109 14:02:44.358785 4919 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb6e93ba-4c74-4250-b8c2-fc85d52d6d3c-inventory\") on node \"crc\" DevicePath \"\"" Jan 09 14:02:44 crc kubenswrapper[4919]: I0109 14:02:44.732994 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-m8pld" event={"ID":"eb6e93ba-4c74-4250-b8c2-fc85d52d6d3c","Type":"ContainerDied","Data":"2f13f3a4b3adbfa3236d0dc0385e470def94f4cf05faf69ec487c35a8e8e4c2d"} Jan 09 14:02:44 crc kubenswrapper[4919]: I0109 14:02:44.733035 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f13f3a4b3adbfa3236d0dc0385e470def94f4cf05faf69ec487c35a8e8e4c2d" Jan 09 14:02:44 crc kubenswrapper[4919]: I0109 14:02:44.733059 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-m8pld" Jan 09 14:02:44 crc kubenswrapper[4919]: I0109 14:02:44.821451 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vv6m5"] Jan 09 14:02:44 crc kubenswrapper[4919]: E0109 14:02:44.823209 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb6e93ba-4c74-4250-b8c2-fc85d52d6d3c" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 09 14:02:44 crc kubenswrapper[4919]: I0109 14:02:44.823312 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb6e93ba-4c74-4250-b8c2-fc85d52d6d3c" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 09 14:02:44 crc kubenswrapper[4919]: I0109 14:02:44.823538 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb6e93ba-4c74-4250-b8c2-fc85d52d6d3c" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 09 14:02:44 crc kubenswrapper[4919]: I0109 14:02:44.824358 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vv6m5" Jan 09 14:02:44 crc kubenswrapper[4919]: I0109 14:02:44.826780 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-69fb8" Jan 09 14:02:44 crc kubenswrapper[4919]: I0109 14:02:44.827411 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 09 14:02:44 crc kubenswrapper[4919]: I0109 14:02:44.827878 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 09 14:02:44 crc kubenswrapper[4919]: I0109 14:02:44.831044 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vv6m5"] Jan 09 14:02:44 crc kubenswrapper[4919]: I0109 14:02:44.831497 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 14:02:44 crc kubenswrapper[4919]: I0109 14:02:44.970140 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/89e73a14-acf2-4c6b-94de-a8857e0cf22d-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vv6m5\" (UID: \"89e73a14-acf2-4c6b-94de-a8857e0cf22d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vv6m5" Jan 09 14:02:44 crc kubenswrapper[4919]: I0109 14:02:44.970250 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-462bv\" (UniqueName: \"kubernetes.io/projected/89e73a14-acf2-4c6b-94de-a8857e0cf22d-kube-api-access-462bv\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vv6m5\" (UID: \"89e73a14-acf2-4c6b-94de-a8857e0cf22d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vv6m5" Jan 09 14:02:44 crc kubenswrapper[4919]: I0109 14:02:44.970271 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89e73a14-acf2-4c6b-94de-a8857e0cf22d-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vv6m5\" (UID: \"89e73a14-acf2-4c6b-94de-a8857e0cf22d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vv6m5" Jan 09 14:02:45 crc kubenswrapper[4919]: I0109 14:02:45.072420 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/89e73a14-acf2-4c6b-94de-a8857e0cf22d-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vv6m5\" (UID: \"89e73a14-acf2-4c6b-94de-a8857e0cf22d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vv6m5" Jan 09 14:02:45 crc kubenswrapper[4919]: I0109 14:02:45.072551 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-462bv\" (UniqueName: \"kubernetes.io/projected/89e73a14-acf2-4c6b-94de-a8857e0cf22d-kube-api-access-462bv\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vv6m5\" (UID: \"89e73a14-acf2-4c6b-94de-a8857e0cf22d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vv6m5" Jan 09 14:02:45 crc kubenswrapper[4919]: I0109 14:02:45.072585 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/89e73a14-acf2-4c6b-94de-a8857e0cf22d-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vv6m5\" (UID: \"89e73a14-acf2-4c6b-94de-a8857e0cf22d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vv6m5" Jan 09 14:02:45 crc kubenswrapper[4919]: I0109 14:02:45.079664 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89e73a14-acf2-4c6b-94de-a8857e0cf22d-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vv6m5\" (UID: \"89e73a14-acf2-4c6b-94de-a8857e0cf22d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vv6m5" Jan 09 14:02:45 crc kubenswrapper[4919]: I0109 14:02:45.080984 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/89e73a14-acf2-4c6b-94de-a8857e0cf22d-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vv6m5\" (UID: \"89e73a14-acf2-4c6b-94de-a8857e0cf22d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vv6m5" Jan 09 14:02:45 crc kubenswrapper[4919]: I0109 14:02:45.090023 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-462bv\" (UniqueName: \"kubernetes.io/projected/89e73a14-acf2-4c6b-94de-a8857e0cf22d-kube-api-access-462bv\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vv6m5\" (UID: \"89e73a14-acf2-4c6b-94de-a8857e0cf22d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vv6m5" Jan 09 14:02:45 crc kubenswrapper[4919]: I0109 14:02:45.153853 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vv6m5" Jan 09 14:02:45 crc kubenswrapper[4919]: I0109 14:02:45.666520 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vv6m5"] Jan 09 14:02:45 crc kubenswrapper[4919]: I0109 14:02:45.673957 4919 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 09 14:02:45 crc kubenswrapper[4919]: I0109 14:02:45.744771 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vv6m5" event={"ID":"89e73a14-acf2-4c6b-94de-a8857e0cf22d","Type":"ContainerStarted","Data":"2e8ba941818f60f3f6d2e9acb4ed47ff219bf670fc77b3c33a2d18ca43a30987"} Jan 09 14:02:46 crc kubenswrapper[4919]: I0109 14:02:46.762620 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vv6m5" event={"ID":"89e73a14-acf2-4c6b-94de-a8857e0cf22d","Type":"ContainerStarted","Data":"7fad48fa85b686084f9f4a429bbc0add01d4e583854f3ab36746a792af8fdcd7"} Jan 09 14:02:46 crc kubenswrapper[4919]: I0109 14:02:46.781327 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vv6m5" podStartSLOduration=2.301947739 podStartE2EDuration="2.781299803s" podCreationTimestamp="2026-01-09 14:02:44 +0000 UTC" firstStartedPulling="2026-01-09 14:02:45.673648726 +0000 UTC m=+1945.221488176" lastFinishedPulling="2026-01-09 14:02:46.15300079 +0000 UTC m=+1945.700840240" observedRunningTime="2026-01-09 14:02:46.772749601 +0000 UTC m=+1946.320589051" watchObservedRunningTime="2026-01-09 14:02:46.781299803 +0000 UTC m=+1946.329139263" Jan 09 14:02:48 crc 
Jan 09 14:02:48 crc kubenswrapper[4919]: I0109 14:02:48.033655 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-jtq7r"]
Jan 09 14:02:48 crc kubenswrapper[4919]: I0109 14:02:48.046376 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-jtq7r"]
Jan 09 14:02:48 crc kubenswrapper[4919]: I0109 14:02:48.763546 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c75bc15-846d-4551-9f91-8d16579b5e82" path="/var/lib/kubelet/pods/8c75bc15-846d-4551-9f91-8d16579b5e82/volumes"
Jan 09 14:02:50 crc kubenswrapper[4919]: I0109 14:02:50.042570 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-dflxc"]
Jan 09 14:02:50 crc kubenswrapper[4919]: I0109 14:02:50.055711 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-dflxc"]
Jan 09 14:02:50 crc kubenswrapper[4919]: I0109 14:02:50.775304 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ba833a-4cf7-4caf-8d94-efc794319d9a" path="/var/lib/kubelet/pods/01ba833a-4cf7-4caf-8d94-efc794319d9a/volumes"
Jan 09 14:02:51 crc kubenswrapper[4919]: I0109 14:02:51.805912 4919 generic.go:334] "Generic (PLEG): container finished" podID="89e73a14-acf2-4c6b-94de-a8857e0cf22d" containerID="7fad48fa85b686084f9f4a429bbc0add01d4e583854f3ab36746a792af8fdcd7" exitCode=0
Jan 09 14:02:51 crc kubenswrapper[4919]: I0109 14:02:51.806116 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vv6m5" event={"ID":"89e73a14-acf2-4c6b-94de-a8857e0cf22d","Type":"ContainerDied","Data":"7fad48fa85b686084f9f4a429bbc0add01d4e583854f3ab36746a792af8fdcd7"}
Jan 09 14:02:53 crc kubenswrapper[4919]: I0109 14:02:53.304046 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vv6m5"
Jan 09 14:02:53 crc kubenswrapper[4919]: I0109 14:02:53.340108 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/89e73a14-acf2-4c6b-94de-a8857e0cf22d-ssh-key-openstack-edpm-ipam\") pod \"89e73a14-acf2-4c6b-94de-a8857e0cf22d\" (UID: \"89e73a14-acf2-4c6b-94de-a8857e0cf22d\") "
Jan 09 14:02:53 crc kubenswrapper[4919]: I0109 14:02:53.340347 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-462bv\" (UniqueName: \"kubernetes.io/projected/89e73a14-acf2-4c6b-94de-a8857e0cf22d-kube-api-access-462bv\") pod \"89e73a14-acf2-4c6b-94de-a8857e0cf22d\" (UID: \"89e73a14-acf2-4c6b-94de-a8857e0cf22d\") "
Jan 09 14:02:53 crc kubenswrapper[4919]: I0109 14:02:53.340534 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89e73a14-acf2-4c6b-94de-a8857e0cf22d-inventory\") pod \"89e73a14-acf2-4c6b-94de-a8857e0cf22d\" (UID: \"89e73a14-acf2-4c6b-94de-a8857e0cf22d\") "
Jan 09 14:02:53 crc kubenswrapper[4919]: I0109 14:02:53.345628 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89e73a14-acf2-4c6b-94de-a8857e0cf22d-kube-api-access-462bv" (OuterVolumeSpecName: "kube-api-access-462bv") pod "89e73a14-acf2-4c6b-94de-a8857e0cf22d" (UID: "89e73a14-acf2-4c6b-94de-a8857e0cf22d"). InnerVolumeSpecName "kube-api-access-462bv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 14:02:53 crc kubenswrapper[4919]: I0109 14:02:53.370787 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89e73a14-acf2-4c6b-94de-a8857e0cf22d-inventory" (OuterVolumeSpecName: "inventory") pod "89e73a14-acf2-4c6b-94de-a8857e0cf22d" (UID: "89e73a14-acf2-4c6b-94de-a8857e0cf22d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 14:02:53 crc kubenswrapper[4919]: I0109 14:02:53.372114 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89e73a14-acf2-4c6b-94de-a8857e0cf22d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "89e73a14-acf2-4c6b-94de-a8857e0cf22d" (UID: "89e73a14-acf2-4c6b-94de-a8857e0cf22d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 14:02:53 crc kubenswrapper[4919]: I0109 14:02:53.443746 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-462bv\" (UniqueName: \"kubernetes.io/projected/89e73a14-acf2-4c6b-94de-a8857e0cf22d-kube-api-access-462bv\") on node \"crc\" DevicePath \"\""
Jan 09 14:02:53 crc kubenswrapper[4919]: I0109 14:02:53.443804 4919 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89e73a14-acf2-4c6b-94de-a8857e0cf22d-inventory\") on node \"crc\" DevicePath \"\""
Jan 09 14:02:53 crc kubenswrapper[4919]: I0109 14:02:53.443819 4919 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/89e73a14-acf2-4c6b-94de-a8857e0cf22d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 09 14:02:53 crc kubenswrapper[4919]: I0109 14:02:53.826381 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vv6m5" event={"ID":"89e73a14-acf2-4c6b-94de-a8857e0cf22d","Type":"ContainerDied","Data":"2e8ba941818f60f3f6d2e9acb4ed47ff219bf670fc77b3c33a2d18ca43a30987"}
Jan 09 14:02:53 crc kubenswrapper[4919]: I0109 14:02:53.826707 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e8ba941818f60f3f6d2e9acb4ed47ff219bf670fc77b3c33a2d18ca43a30987"
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vv6m5" Jan 09 14:02:53 crc kubenswrapper[4919]: I0109 14:02:53.908531 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-b9fzw"] Jan 09 14:02:53 crc kubenswrapper[4919]: E0109 14:02:53.908985 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89e73a14-acf2-4c6b-94de-a8857e0cf22d" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 09 14:02:53 crc kubenswrapper[4919]: I0109 14:02:53.909005 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="89e73a14-acf2-4c6b-94de-a8857e0cf22d" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 09 14:02:53 crc kubenswrapper[4919]: I0109 14:02:53.909253 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="89e73a14-acf2-4c6b-94de-a8857e0cf22d" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 09 14:02:53 crc kubenswrapper[4919]: I0109 14:02:53.910001 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b9fzw" Jan 09 14:02:53 crc kubenswrapper[4919]: I0109 14:02:53.912282 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 09 14:02:53 crc kubenswrapper[4919]: I0109 14:02:53.912559 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-69fb8" Jan 09 14:02:53 crc kubenswrapper[4919]: I0109 14:02:53.914865 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 09 14:02:53 crc kubenswrapper[4919]: I0109 14:02:53.919126 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 14:02:53 crc kubenswrapper[4919]: I0109 14:02:53.942723 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-b9fzw"] Jan 09 14:02:53 crc kubenswrapper[4919]: I0109 14:02:53.955842 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f727n\" (UniqueName: \"kubernetes.io/projected/d079d443-cf8c-47ff-96d9-a3fe59583ad8-kube-api-access-f727n\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-b9fzw\" (UID: \"d079d443-cf8c-47ff-96d9-a3fe59583ad8\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b9fzw" Jan 09 14:02:53 crc kubenswrapper[4919]: I0109 14:02:53.955902 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d079d443-cf8c-47ff-96d9-a3fe59583ad8-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-b9fzw\" (UID: \"d079d443-cf8c-47ff-96d9-a3fe59583ad8\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b9fzw" Jan 09 14:02:53 crc kubenswrapper[4919]: I0109 14:02:53.956460 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d079d443-cf8c-47ff-96d9-a3fe59583ad8-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-b9fzw\" (UID: \"d079d443-cf8c-47ff-96d9-a3fe59583ad8\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b9fzw" Jan 09 14:02:54 crc kubenswrapper[4919]: I0109 
14:02:54.057799 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f727n\" (UniqueName: \"kubernetes.io/projected/d079d443-cf8c-47ff-96d9-a3fe59583ad8-kube-api-access-f727n\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-b9fzw\" (UID: \"d079d443-cf8c-47ff-96d9-a3fe59583ad8\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b9fzw" Jan 09 14:02:54 crc kubenswrapper[4919]: I0109 14:02:54.057877 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d079d443-cf8c-47ff-96d9-a3fe59583ad8-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-b9fzw\" (UID: \"d079d443-cf8c-47ff-96d9-a3fe59583ad8\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b9fzw" Jan 09 14:02:54 crc kubenswrapper[4919]: I0109 14:02:54.057990 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d079d443-cf8c-47ff-96d9-a3fe59583ad8-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-b9fzw\" (UID: \"d079d443-cf8c-47ff-96d9-a3fe59583ad8\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b9fzw" Jan 09 14:02:54 crc kubenswrapper[4919]: I0109 14:02:54.062564 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d079d443-cf8c-47ff-96d9-a3fe59583ad8-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-b9fzw\" (UID: \"d079d443-cf8c-47ff-96d9-a3fe59583ad8\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b9fzw" Jan 09 14:02:54 crc kubenswrapper[4919]: I0109 14:02:54.062595 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d079d443-cf8c-47ff-96d9-a3fe59583ad8-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-b9fzw\" (UID: \"d079d443-cf8c-47ff-96d9-a3fe59583ad8\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b9fzw" Jan 09 14:02:54 crc kubenswrapper[4919]: I0109 14:02:54.077392 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f727n\" (UniqueName: \"kubernetes.io/projected/d079d443-cf8c-47ff-96d9-a3fe59583ad8-kube-api-access-f727n\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-b9fzw\" (UID: \"d079d443-cf8c-47ff-96d9-a3fe59583ad8\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b9fzw" Jan 09 14:02:54 crc kubenswrapper[4919]: I0109 14:02:54.241635 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b9fzw" Jan 09 14:02:54 crc kubenswrapper[4919]: I0109 14:02:54.746837 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-b9fzw"] Jan 09 14:02:54 crc kubenswrapper[4919]: I0109 14:02:54.836707 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b9fzw" event={"ID":"d079d443-cf8c-47ff-96d9-a3fe59583ad8","Type":"ContainerStarted","Data":"4eeb884f427ea6c839f9e7e7a6114e53dbb96325514ed212031a13c72f9b0e16"} Jan 09 14:02:59 crc kubenswrapper[4919]: I0109 14:02:59.885749 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b9fzw" event={"ID":"d079d443-cf8c-47ff-96d9-a3fe59583ad8","Type":"ContainerStarted","Data":"f39e90b49280cdf08b7288bf4459e5390b008f3fa8da2b912b099cbf69fecd60"} Jan 09 14:02:59 crc kubenswrapper[4919]: I0109 14:02:59.905235 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b9fzw" podStartSLOduration=2.433713352 podStartE2EDuration="6.905186125s" podCreationTimestamp="2026-01-09 14:02:53 +0000 UTC" firstStartedPulling="2026-01-09 14:02:54.757318476 +0000 UTC m=+1954.305157926" lastFinishedPulling="2026-01-09 14:02:59.228791249 +0000 UTC m=+1958.776630699" observedRunningTime="2026-01-09 14:02:59.903182155 +0000 UTC m=+1959.451021625" watchObservedRunningTime="2026-01-09 14:02:59.905186125 +0000 UTC m=+1959.453025605" Jan 09 14:03:33 crc kubenswrapper[4919]: I0109 14:03:33.041686 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-svt5n"] Jan 09 14:03:33 crc kubenswrapper[4919]: I0109 14:03:33.055905 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-svt5n"] Jan 09 14:03:34 crc kubenswrapper[4919]: I0109 14:03:34.765658 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c794cf2c-22d5-44dc-8bff-4bbdaca37867" path="/var/lib/kubelet/pods/c794cf2c-22d5-44dc-8bff-4bbdaca37867/volumes" Jan 09 14:03:37 crc kubenswrapper[4919]: I0109 14:03:37.215413 4919 generic.go:334] "Generic (PLEG): container finished" podID="d079d443-cf8c-47ff-96d9-a3fe59583ad8" containerID="f39e90b49280cdf08b7288bf4459e5390b008f3fa8da2b912b099cbf69fecd60" exitCode=0 Jan 09 14:03:37 crc kubenswrapper[4919]: I0109 14:03:37.215601 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b9fzw" event={"ID":"d079d443-cf8c-47ff-96d9-a3fe59583ad8","Type":"ContainerDied","Data":"f39e90b49280cdf08b7288bf4459e5390b008f3fa8da2b912b099cbf69fecd60"} Jan 09 14:03:38 crc kubenswrapper[4919]: I0109 14:03:38.706082 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b9fzw" Jan 09 14:03:38 crc kubenswrapper[4919]: I0109 14:03:38.873110 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d079d443-cf8c-47ff-96d9-a3fe59583ad8-inventory\") pod \"d079d443-cf8c-47ff-96d9-a3fe59583ad8\" (UID: \"d079d443-cf8c-47ff-96d9-a3fe59583ad8\") " Jan 09 14:03:38 crc kubenswrapper[4919]: I0109 14:03:38.873293 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f727n\" (UniqueName: \"kubernetes.io/projected/d079d443-cf8c-47ff-96d9-a3fe59583ad8-kube-api-access-f727n\") pod \"d079d443-cf8c-47ff-96d9-a3fe59583ad8\" (UID: \"d079d443-cf8c-47ff-96d9-a3fe59583ad8\") " Jan 09 14:03:38 crc kubenswrapper[4919]: I0109 14:03:38.873444 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d079d443-cf8c-47ff-96d9-a3fe59583ad8-ssh-key-openstack-edpm-ipam\") pod \"d079d443-cf8c-47ff-96d9-a3fe59583ad8\" (UID: \"d079d443-cf8c-47ff-96d9-a3fe59583ad8\") " Jan 09 14:03:38 crc kubenswrapper[4919]: I0109 14:03:38.881868 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d079d443-cf8c-47ff-96d9-a3fe59583ad8-kube-api-access-f727n" (OuterVolumeSpecName: "kube-api-access-f727n") pod "d079d443-cf8c-47ff-96d9-a3fe59583ad8" (UID: "d079d443-cf8c-47ff-96d9-a3fe59583ad8"). InnerVolumeSpecName "kube-api-access-f727n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 14:03:38 crc kubenswrapper[4919]: I0109 14:03:38.902193 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d079d443-cf8c-47ff-96d9-a3fe59583ad8-inventory" (OuterVolumeSpecName: "inventory") pod "d079d443-cf8c-47ff-96d9-a3fe59583ad8" (UID: "d079d443-cf8c-47ff-96d9-a3fe59583ad8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 14:03:38 crc kubenswrapper[4919]: I0109 14:03:38.904847 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d079d443-cf8c-47ff-96d9-a3fe59583ad8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d079d443-cf8c-47ff-96d9-a3fe59583ad8" (UID: "d079d443-cf8c-47ff-96d9-a3fe59583ad8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 14:03:38 crc kubenswrapper[4919]: I0109 14:03:38.976556 4919 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d079d443-cf8c-47ff-96d9-a3fe59583ad8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 09 14:03:38 crc kubenswrapper[4919]: I0109 14:03:38.976886 4919 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d079d443-cf8c-47ff-96d9-a3fe59583ad8-inventory\") on node \"crc\" DevicePath \"\"" Jan 09 14:03:38 crc kubenswrapper[4919]: I0109 14:03:38.976900 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f727n\" (UniqueName: \"kubernetes.io/projected/d079d443-cf8c-47ff-96d9-a3fe59583ad8-kube-api-access-f727n\") on node \"crc\" DevicePath \"\"" Jan 09 14:03:39 crc kubenswrapper[4919]: I0109 14:03:39.246109 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b9fzw" event={"ID":"d079d443-cf8c-47ff-96d9-a3fe59583ad8","Type":"ContainerDied","Data":"4eeb884f427ea6c839f9e7e7a6114e53dbb96325514ed212031a13c72f9b0e16"} Jan 09 14:03:39 crc kubenswrapper[4919]: I0109 14:03:39.246512 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4eeb884f427ea6c839f9e7e7a6114e53dbb96325514ed212031a13c72f9b0e16" Jan 09 14:03:39 crc kubenswrapper[4919]: I0109 14:03:39.246175 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b9fzw" Jan 09 14:03:39 crc kubenswrapper[4919]: I0109 14:03:39.398149 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vsw76"] Jan 09 14:03:39 crc kubenswrapper[4919]: E0109 14:03:39.398576 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d079d443-cf8c-47ff-96d9-a3fe59583ad8" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 09 14:03:39 crc kubenswrapper[4919]: I0109 14:03:39.398595 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="d079d443-cf8c-47ff-96d9-a3fe59583ad8" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 09 14:03:39 crc kubenswrapper[4919]: I0109 14:03:39.398823 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="d079d443-cf8c-47ff-96d9-a3fe59583ad8" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 09 14:03:39 crc kubenswrapper[4919]: I0109 14:03:39.399589 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vsw76" Jan 09 14:03:39 crc kubenswrapper[4919]: I0109 14:03:39.407251 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-69fb8" Jan 09 14:03:39 crc kubenswrapper[4919]: I0109 14:03:39.407489 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 09 14:03:39 crc kubenswrapper[4919]: I0109 14:03:39.407656 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 14:03:39 crc kubenswrapper[4919]: I0109 14:03:39.407798 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 09 14:03:39 crc kubenswrapper[4919]: I0109 14:03:39.428692 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vsw76"] Jan 09 14:03:39 crc kubenswrapper[4919]: I0109 14:03:39.503897 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6dd14cc5-f2bf-43bc-b3e6-9704c2728708-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vsw76\" (UID: \"6dd14cc5-f2bf-43bc-b3e6-9704c2728708\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vsw76" Jan 09 14:03:39 crc kubenswrapper[4919]: I0109 14:03:39.505632 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxl28\" (UniqueName: \"kubernetes.io/projected/6dd14cc5-f2bf-43bc-b3e6-9704c2728708-kube-api-access-dxl28\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vsw76\" (UID: \"6dd14cc5-f2bf-43bc-b3e6-9704c2728708\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vsw76" Jan 09 14:03:39 crc kubenswrapper[4919]: I0109 14:03:39.505704 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6dd14cc5-f2bf-43bc-b3e6-9704c2728708-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vsw76\" (UID: \"6dd14cc5-f2bf-43bc-b3e6-9704c2728708\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vsw76" Jan 09 14:03:39 crc kubenswrapper[4919]: I0109 14:03:39.608396 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxl28\" (UniqueName: \"kubernetes.io/projected/6dd14cc5-f2bf-43bc-b3e6-9704c2728708-kube-api-access-dxl28\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vsw76\" (UID: \"6dd14cc5-f2bf-43bc-b3e6-9704c2728708\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vsw76" Jan 09 14:03:39 crc kubenswrapper[4919]: I0109 14:03:39.608456 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6dd14cc5-f2bf-43bc-b3e6-9704c2728708-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vsw76\" (UID: \"6dd14cc5-f2bf-43bc-b3e6-9704c2728708\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vsw76" Jan 09 14:03:39 crc kubenswrapper[4919]: I0109 14:03:39.608575 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/6dd14cc5-f2bf-43bc-b3e6-9704c2728708-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vsw76\" (UID: \"6dd14cc5-f2bf-43bc-b3e6-9704c2728708\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vsw76" Jan 09 14:03:39 crc kubenswrapper[4919]: I0109 14:03:39.616985 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6dd14cc5-f2bf-43bc-b3e6-9704c2728708-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vsw76\" (UID: \"6dd14cc5-f2bf-43bc-b3e6-9704c2728708\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vsw76" Jan 09 14:03:39 crc kubenswrapper[4919]: I0109 14:03:39.616993 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6dd14cc5-f2bf-43bc-b3e6-9704c2728708-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vsw76\" (UID: \"6dd14cc5-f2bf-43bc-b3e6-9704c2728708\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vsw76" Jan 09 14:03:39 crc kubenswrapper[4919]: I0109 14:03:39.628012 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxl28\" (UniqueName: \"kubernetes.io/projected/6dd14cc5-f2bf-43bc-b3e6-9704c2728708-kube-api-access-dxl28\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vsw76\" (UID: \"6dd14cc5-f2bf-43bc-b3e6-9704c2728708\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vsw76" Jan 09 14:03:39 crc kubenswrapper[4919]: I0109 14:03:39.748837 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vsw76" Jan 09 14:03:40 crc kubenswrapper[4919]: I0109 14:03:40.287422 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vsw76"] Jan 09 14:03:41 crc kubenswrapper[4919]: I0109 14:03:41.264819 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vsw76" event={"ID":"6dd14cc5-f2bf-43bc-b3e6-9704c2728708","Type":"ContainerStarted","Data":"6ac0c9e90d2fea7a09fbbdbcf8bdc02c54970c593d8b48300726a60711918215"} Jan 09 14:03:42 crc kubenswrapper[4919]: I0109 14:03:42.277728 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vsw76" event={"ID":"6dd14cc5-f2bf-43bc-b3e6-9704c2728708","Type":"ContainerStarted","Data":"0798e00f2bb5b307204434f1b47a7150ec6913b009912d9f5cf08bbb44ad30a1"} Jan 09 14:03:43 crc kubenswrapper[4919]: I0109 14:03:43.241441 4919 scope.go:117] "RemoveContainer" containerID="005d4ea544517843af0aab73e4af2c40eabbe0253e0c5814a62c2f1dff417817" Jan 09 14:03:43 crc kubenswrapper[4919]: I0109 14:03:43.260172 4919 scope.go:117] "RemoveContainer" containerID="680956d31dffa408279009281653daf482a7f7880c7fd4f97bdb24069dcfbc95" Jan 09 14:03:43 crc kubenswrapper[4919]: I0109 14:03:43.330693 4919 scope.go:117] "RemoveContainer" containerID="627a8fb06a34215f1dadde4f0886acbc69cbee6c4fdaeb8ec912e9fea22582c4" Jan 09 14:03:43 crc kubenswrapper[4919]: I0109 14:03:43.351173 4919 scope.go:117] "RemoveContainer" containerID="8dbf80694e07f9443f4d3aaf46a5ebefda2d8a6831f24a6f4281ac7e7957ce35" Jan 09 14:03:43 crc kubenswrapper[4919]: I0109 14:03:43.421172 4919 scope.go:117] "RemoveContainer" 
containerID="b9e683a0a8599be712538688894e252bc28eff4016b8ab99c536b7cb06635b68" Jan 09 14:03:43 crc kubenswrapper[4919]: I0109 14:03:43.461485 4919 scope.go:117] "RemoveContainer" containerID="6e0f9279898decc685ab38244f7be1df02fb16f6efa33382ef317559297c720c" Jan 09 14:04:21 crc kubenswrapper[4919]: I0109 14:04:21.246847 4919 patch_prober.go:28] interesting pod/machine-config-daemon-9m5lv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 14:04:21 crc kubenswrapper[4919]: I0109 14:04:21.247453 4919 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 14:04:30 crc kubenswrapper[4919]: I0109 14:04:30.688365 4919 generic.go:334] "Generic (PLEG): container finished" podID="6dd14cc5-f2bf-43bc-b3e6-9704c2728708" containerID="0798e00f2bb5b307204434f1b47a7150ec6913b009912d9f5cf08bbb44ad30a1" exitCode=0 Jan 09 14:04:30 crc kubenswrapper[4919]: I0109 14:04:30.688412 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vsw76" event={"ID":"6dd14cc5-f2bf-43bc-b3e6-9704c2728708","Type":"ContainerDied","Data":"0798e00f2bb5b307204434f1b47a7150ec6913b009912d9f5cf08bbb44ad30a1"} Jan 09 14:04:32 crc kubenswrapper[4919]: I0109 14:04:32.116076 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vsw76" Jan 09 14:04:32 crc kubenswrapper[4919]: I0109 14:04:32.197829 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxl28\" (UniqueName: \"kubernetes.io/projected/6dd14cc5-f2bf-43bc-b3e6-9704c2728708-kube-api-access-dxl28\") pod \"6dd14cc5-f2bf-43bc-b3e6-9704c2728708\" (UID: \"6dd14cc5-f2bf-43bc-b3e6-9704c2728708\") " Jan 09 14:04:32 crc kubenswrapper[4919]: I0109 14:04:32.197942 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6dd14cc5-f2bf-43bc-b3e6-9704c2728708-inventory\") pod \"6dd14cc5-f2bf-43bc-b3e6-9704c2728708\" (UID: \"6dd14cc5-f2bf-43bc-b3e6-9704c2728708\") " Jan 09 14:04:32 crc kubenswrapper[4919]: I0109 14:04:32.198257 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6dd14cc5-f2bf-43bc-b3e6-9704c2728708-ssh-key-openstack-edpm-ipam\") pod \"6dd14cc5-f2bf-43bc-b3e6-9704c2728708\" (UID: \"6dd14cc5-f2bf-43bc-b3e6-9704c2728708\") " Jan 09 14:04:32 crc kubenswrapper[4919]: I0109 14:04:32.203261 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dd14cc5-f2bf-43bc-b3e6-9704c2728708-kube-api-access-dxl28" (OuterVolumeSpecName: "kube-api-access-dxl28") pod "6dd14cc5-f2bf-43bc-b3e6-9704c2728708" (UID: "6dd14cc5-f2bf-43bc-b3e6-9704c2728708"). InnerVolumeSpecName "kube-api-access-dxl28". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 14:04:32 crc kubenswrapper[4919]: I0109 14:04:32.227269 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dd14cc5-f2bf-43bc-b3e6-9704c2728708-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6dd14cc5-f2bf-43bc-b3e6-9704c2728708" (UID: "6dd14cc5-f2bf-43bc-b3e6-9704c2728708"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 14:04:32 crc kubenswrapper[4919]: I0109 14:04:32.230991 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dd14cc5-f2bf-43bc-b3e6-9704c2728708-inventory" (OuterVolumeSpecName: "inventory") pod "6dd14cc5-f2bf-43bc-b3e6-9704c2728708" (UID: "6dd14cc5-f2bf-43bc-b3e6-9704c2728708"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 14:04:32 crc kubenswrapper[4919]: I0109 14:04:32.300867 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxl28\" (UniqueName: \"kubernetes.io/projected/6dd14cc5-f2bf-43bc-b3e6-9704c2728708-kube-api-access-dxl28\") on node \"crc\" DevicePath \"\"" Jan 09 14:04:32 crc kubenswrapper[4919]: I0109 14:04:32.300906 4919 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6dd14cc5-f2bf-43bc-b3e6-9704c2728708-inventory\") on node \"crc\" DevicePath \"\"" Jan 09 14:04:32 crc kubenswrapper[4919]: I0109 14:04:32.300919 4919 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6dd14cc5-f2bf-43bc-b3e6-9704c2728708-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 09 14:04:32 crc kubenswrapper[4919]: I0109 14:04:32.714981 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vsw76" event={"ID":"6dd14cc5-f2bf-43bc-b3e6-9704c2728708","Type":"ContainerDied","Data":"6ac0c9e90d2fea7a09fbbdbcf8bdc02c54970c593d8b48300726a60711918215"} Jan 09 14:04:32 crc kubenswrapper[4919]: I0109 14:04:32.715035 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ac0c9e90d2fea7a09fbbdbcf8bdc02c54970c593d8b48300726a60711918215" Jan 09 14:04:32 crc kubenswrapper[4919]: I0109 14:04:32.715045 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vsw76" Jan 09 14:04:32 crc kubenswrapper[4919]: I0109 14:04:32.798651 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-jvwq2"] Jan 09 14:04:32 crc kubenswrapper[4919]: E0109 14:04:32.799156 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dd14cc5-f2bf-43bc-b3e6-9704c2728708" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 09 14:04:32 crc kubenswrapper[4919]: I0109 14:04:32.799183 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dd14cc5-f2bf-43bc-b3e6-9704c2728708" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 09 14:04:32 crc kubenswrapper[4919]: I0109 14:04:32.807931 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dd14cc5-f2bf-43bc-b3e6-9704c2728708" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 09 14:04:32 crc kubenswrapper[4919]: I0109 14:04:32.808970 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-jvwq2" Jan 09 14:04:32 crc kubenswrapper[4919]: I0109 14:04:32.809959 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-jvwq2"] Jan 09 14:04:32 crc kubenswrapper[4919]: I0109 14:04:32.810925 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 14:04:32 crc kubenswrapper[4919]: I0109 14:04:32.811135 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 09 14:04:32 crc kubenswrapper[4919]: I0109 14:04:32.811959 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 09 14:04:32 crc kubenswrapper[4919]: I0109 14:04:32.812107 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-69fb8" Jan 09 14:04:32 crc kubenswrapper[4919]: I0109 14:04:32.912828 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/a7fb05e2-9059-4447-8ed5-f125411a7fdc-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-jvwq2\" (UID: \"a7fb05e2-9059-4447-8ed5-f125411a7fdc\") " pod="openstack/ssh-known-hosts-edpm-deployment-jvwq2" Jan 09 14:04:32 crc kubenswrapper[4919]: I0109 14:04:32.913402 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a7fb05e2-9059-4447-8ed5-f125411a7fdc-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-jvwq2\" (UID: \"a7fb05e2-9059-4447-8ed5-f125411a7fdc\") " pod="openstack/ssh-known-hosts-edpm-deployment-jvwq2" Jan 09 14:04:32 crc kubenswrapper[4919]: I0109 14:04:32.913473 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt4jp\" (UniqueName: \"kubernetes.io/projected/a7fb05e2-9059-4447-8ed5-f125411a7fdc-kube-api-access-dt4jp\") pod \"ssh-known-hosts-edpm-deployment-jvwq2\" (UID: \"a7fb05e2-9059-4447-8ed5-f125411a7fdc\") " pod="openstack/ssh-known-hosts-edpm-deployment-jvwq2" Jan 09 14:04:33 crc kubenswrapper[4919]: I0109 14:04:33.017396 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a7fb05e2-9059-4447-8ed5-f125411a7fdc-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-jvwq2\" (UID: \"a7fb05e2-9059-4447-8ed5-f125411a7fdc\") " pod="openstack/ssh-known-hosts-edpm-deployment-jvwq2" Jan 09 14:04:33 crc kubenswrapper[4919]: I0109 14:04:33.017726 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dt4jp\" (UniqueName: \"kubernetes.io/projected/a7fb05e2-9059-4447-8ed5-f125411a7fdc-kube-api-access-dt4jp\") pod \"ssh-known-hosts-edpm-deployment-jvwq2\" (UID: \"a7fb05e2-9059-4447-8ed5-f125411a7fdc\") " pod="openstack/ssh-known-hosts-edpm-deployment-jvwq2" Jan 09 14:04:33 crc kubenswrapper[4919]: I0109 14:04:33.017959 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/a7fb05e2-9059-4447-8ed5-f125411a7fdc-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-jvwq2\" (UID: \"a7fb05e2-9059-4447-8ed5-f125411a7fdc\") " pod="openstack/ssh-known-hosts-edpm-deployment-jvwq2" Jan 09 14:04:33 crc 
kubenswrapper[4919]: I0109 14:04:33.021610 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a7fb05e2-9059-4447-8ed5-f125411a7fdc-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-jvwq2\" (UID: \"a7fb05e2-9059-4447-8ed5-f125411a7fdc\") " pod="openstack/ssh-known-hosts-edpm-deployment-jvwq2" Jan 09 14:04:33 crc kubenswrapper[4919]: I0109 14:04:33.021962 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/a7fb05e2-9059-4447-8ed5-f125411a7fdc-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-jvwq2\" (UID: \"a7fb05e2-9059-4447-8ed5-f125411a7fdc\") " pod="openstack/ssh-known-hosts-edpm-deployment-jvwq2" Jan 09 14:04:33 crc kubenswrapper[4919]: I0109 14:04:33.042379 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dt4jp\" (UniqueName: \"kubernetes.io/projected/a7fb05e2-9059-4447-8ed5-f125411a7fdc-kube-api-access-dt4jp\") pod \"ssh-known-hosts-edpm-deployment-jvwq2\" (UID: \"a7fb05e2-9059-4447-8ed5-f125411a7fdc\") " pod="openstack/ssh-known-hosts-edpm-deployment-jvwq2" Jan 09 14:04:33 crc kubenswrapper[4919]: I0109 14:04:33.172714 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-jvwq2" Jan 09 14:04:33 crc kubenswrapper[4919]: I0109 14:04:33.686693 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-jvwq2"] Jan 09 14:04:33 crc kubenswrapper[4919]: I0109 14:04:33.726319 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-jvwq2" event={"ID":"a7fb05e2-9059-4447-8ed5-f125411a7fdc","Type":"ContainerStarted","Data":"36b7917ce4357c831a46b2bac2e4c85c367cc3c7be2e41538d7b13b09021634d"} Jan 09 14:04:35 crc kubenswrapper[4919]: I0109 14:04:35.744579 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-jvwq2" event={"ID":"a7fb05e2-9059-4447-8ed5-f125411a7fdc","Type":"ContainerStarted","Data":"b073f0d0b45fde462a3452a0cffe99dea35c07ff8aadb1bd67cb7c79aeb436c4"} Jan 09 14:04:35 crc kubenswrapper[4919]: I0109 14:04:35.773279 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-jvwq2" podStartSLOduration=2.446318723 podStartE2EDuration="3.773254126s" podCreationTimestamp="2026-01-09 14:04:32 +0000 UTC" firstStartedPulling="2026-01-09 14:04:33.698489392 +0000 UTC m=+2053.246328832" lastFinishedPulling="2026-01-09 14:04:35.025424785 +0000 UTC m=+2054.573264235" observedRunningTime="2026-01-09 14:04:35.76456961 +0000 UTC m=+2055.312409080" watchObservedRunningTime="2026-01-09 14:04:35.773254126 +0000 UTC m=+2055.321093586" Jan 09 14:04:42 crc kubenswrapper[4919]: I0109 14:04:42.811530 4919 generic.go:334] "Generic (PLEG): container finished" podID="a7fb05e2-9059-4447-8ed5-f125411a7fdc" containerID="b073f0d0b45fde462a3452a0cffe99dea35c07ff8aadb1bd67cb7c79aeb436c4" exitCode=0 Jan 09 14:04:42 crc kubenswrapper[4919]: I0109 14:04:42.811651 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-jvwq2" event={"ID":"a7fb05e2-9059-4447-8ed5-f125411a7fdc","Type":"ContainerDied","Data":"b073f0d0b45fde462a3452a0cffe99dea35c07ff8aadb1bd67cb7c79aeb436c4"} Jan 09 14:04:44 crc kubenswrapper[4919]: I0109 14:04:44.239042 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-jvwq2" Jan 09 14:04:44 crc kubenswrapper[4919]: I0109 14:04:44.337637 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dt4jp\" (UniqueName: \"kubernetes.io/projected/a7fb05e2-9059-4447-8ed5-f125411a7fdc-kube-api-access-dt4jp\") pod \"a7fb05e2-9059-4447-8ed5-f125411a7fdc\" (UID: \"a7fb05e2-9059-4447-8ed5-f125411a7fdc\") " Jan 09 14:04:44 crc kubenswrapper[4919]: I0109 14:04:44.337946 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a7fb05e2-9059-4447-8ed5-f125411a7fdc-ssh-key-openstack-edpm-ipam\") pod \"a7fb05e2-9059-4447-8ed5-f125411a7fdc\" (UID: \"a7fb05e2-9059-4447-8ed5-f125411a7fdc\") " Jan 09 14:04:44 crc kubenswrapper[4919]: I0109 14:04:44.338038 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/a7fb05e2-9059-4447-8ed5-f125411a7fdc-inventory-0\") pod \"a7fb05e2-9059-4447-8ed5-f125411a7fdc\" (UID: \"a7fb05e2-9059-4447-8ed5-f125411a7fdc\") " Jan 09 14:04:44 crc kubenswrapper[4919]: I0109 14:04:44.343527 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7fb05e2-9059-4447-8ed5-f125411a7fdc-kube-api-access-dt4jp" (OuterVolumeSpecName: "kube-api-access-dt4jp") pod "a7fb05e2-9059-4447-8ed5-f125411a7fdc" (UID: "a7fb05e2-9059-4447-8ed5-f125411a7fdc"). InnerVolumeSpecName "kube-api-access-dt4jp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 14:04:44 crc kubenswrapper[4919]: I0109 14:04:44.364564 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7fb05e2-9059-4447-8ed5-f125411a7fdc-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "a7fb05e2-9059-4447-8ed5-f125411a7fdc" (UID: "a7fb05e2-9059-4447-8ed5-f125411a7fdc"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 14:04:44 crc kubenswrapper[4919]: I0109 14:04:44.371638 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7fb05e2-9059-4447-8ed5-f125411a7fdc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a7fb05e2-9059-4447-8ed5-f125411a7fdc" (UID: "a7fb05e2-9059-4447-8ed5-f125411a7fdc"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 14:04:44 crc kubenswrapper[4919]: I0109 14:04:44.440258 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dt4jp\" (UniqueName: \"kubernetes.io/projected/a7fb05e2-9059-4447-8ed5-f125411a7fdc-kube-api-access-dt4jp\") on node \"crc\" DevicePath \"\"" Jan 09 14:04:44 crc kubenswrapper[4919]: I0109 14:04:44.440293 4919 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a7fb05e2-9059-4447-8ed5-f125411a7fdc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 09 14:04:44 crc kubenswrapper[4919]: I0109 14:04:44.440305 4919 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/a7fb05e2-9059-4447-8ed5-f125411a7fdc-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 09 14:04:44 crc kubenswrapper[4919]: I0109 14:04:44.830906 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-jvwq2" event={"ID":"a7fb05e2-9059-4447-8ed5-f125411a7fdc","Type":"ContainerDied","Data":"36b7917ce4357c831a46b2bac2e4c85c367cc3c7be2e41538d7b13b09021634d"} Jan 09 14:04:44 crc kubenswrapper[4919]: I0109 14:04:44.830948 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36b7917ce4357c831a46b2bac2e4c85c367cc3c7be2e41538d7b13b09021634d" Jan 09 14:04:44 crc kubenswrapper[4919]: I0109 14:04:44.830998 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-jvwq2" Jan 09 14:04:44 crc kubenswrapper[4919]: I0109 14:04:44.930260 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-crhq6"] Jan 09 14:04:44 crc kubenswrapper[4919]: E0109 14:04:44.930947 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7fb05e2-9059-4447-8ed5-f125411a7fdc" containerName="ssh-known-hosts-edpm-deployment" Jan 09 14:04:44 crc kubenswrapper[4919]: I0109 14:04:44.930964 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7fb05e2-9059-4447-8ed5-f125411a7fdc" containerName="ssh-known-hosts-edpm-deployment" Jan 09 14:04:44 crc kubenswrapper[4919]: I0109 14:04:44.931135 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7fb05e2-9059-4447-8ed5-f125411a7fdc" containerName="ssh-known-hosts-edpm-deployment" Jan 09 14:04:44 crc kubenswrapper[4919]: I0109 14:04:44.931782 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-crhq6" Jan 09 14:04:44 crc kubenswrapper[4919]: I0109 14:04:44.936103 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 14:04:44 crc kubenswrapper[4919]: I0109 14:04:44.936448 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-69fb8" Jan 09 14:04:44 crc kubenswrapper[4919]: I0109 14:04:44.936567 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 09 14:04:44 crc kubenswrapper[4919]: I0109 14:04:44.936620 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 09 14:04:44 crc kubenswrapper[4919]: I0109 14:04:44.939547 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-crhq6"] Jan 09 14:04:44 crc kubenswrapper[4919]: I0109 14:04:44.949371 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1e8137a4-0169-4f73-b616-6a0554aa426f-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-crhq6\" (UID: \"1e8137a4-0169-4f73-b616-6a0554aa426f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-crhq6" Jan 09 14:04:44 crc kubenswrapper[4919]: I0109 14:04:44.949509 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sw8wd\" (UniqueName: \"kubernetes.io/projected/1e8137a4-0169-4f73-b616-6a0554aa426f-kube-api-access-sw8wd\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-crhq6\" (UID: \"1e8137a4-0169-4f73-b616-6a0554aa426f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-crhq6" Jan 09 14:04:44 crc kubenswrapper[4919]: I0109 14:04:44.949700 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e8137a4-0169-4f73-b616-6a0554aa426f-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-crhq6\" (UID: \"1e8137a4-0169-4f73-b616-6a0554aa426f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-crhq6" Jan 09 14:04:45 crc kubenswrapper[4919]: I0109 14:04:45.052491 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e8137a4-0169-4f73-b616-6a0554aa426f-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-crhq6\" (UID: \"1e8137a4-0169-4f73-b616-6a0554aa426f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-crhq6" Jan 09 14:04:45 crc kubenswrapper[4919]: I0109 14:04:45.052598 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1e8137a4-0169-4f73-b616-6a0554aa426f-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-crhq6\" (UID: \"1e8137a4-0169-4f73-b616-6a0554aa426f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-crhq6" Jan 09 14:04:45 crc kubenswrapper[4919]: I0109 14:04:45.052739 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sw8wd\" (UniqueName: \"kubernetes.io/projected/1e8137a4-0169-4f73-b616-6a0554aa426f-kube-api-access-sw8wd\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-crhq6\" (UID: \"1e8137a4-0169-4f73-b616-6a0554aa426f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-crhq6" Jan 09 14:04:45 crc kubenswrapper[4919]: I0109 14:04:45.058200 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e8137a4-0169-4f73-b616-6a0554aa426f-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-crhq6\" (UID: \"1e8137a4-0169-4f73-b616-6a0554aa426f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-crhq6" Jan 09 14:04:45 crc kubenswrapper[4919]: I0109 14:04:45.059101 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1e8137a4-0169-4f73-b616-6a0554aa426f-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-crhq6\" (UID: \"1e8137a4-0169-4f73-b616-6a0554aa426f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-crhq6" Jan 09 14:04:45 crc kubenswrapper[4919]: I0109 14:04:45.070787 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sw8wd\" (UniqueName: \"kubernetes.io/projected/1e8137a4-0169-4f73-b616-6a0554aa426f-kube-api-access-sw8wd\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-crhq6\" (UID: \"1e8137a4-0169-4f73-b616-6a0554aa426f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-crhq6" Jan 09 14:04:45 crc kubenswrapper[4919]: I0109 14:04:45.248807 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-crhq6" Jan 09 14:04:45 crc kubenswrapper[4919]: I0109 14:04:45.822876 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-crhq6"] Jan 09 14:04:45 crc kubenswrapper[4919]: I0109 14:04:45.850082 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-crhq6" event={"ID":"1e8137a4-0169-4f73-b616-6a0554aa426f","Type":"ContainerStarted","Data":"cb1f198d36660d90d38cf980ba2c921b5fbc52934c33b4b9ac461ed5a6350031"} Jan 09 14:04:46 crc kubenswrapper[4919]: I0109 14:04:46.861929 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-crhq6" event={"ID":"1e8137a4-0169-4f73-b616-6a0554aa426f","Type":"ContainerStarted","Data":"7fc1508bc7ce6da97e3d10a023784231009c5645c6a2f97edefe50b28196b72d"} Jan 09 14:04:46 crc kubenswrapper[4919]: I0109 14:04:46.901058 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-crhq6" podStartSLOduration=2.272032728 podStartE2EDuration="2.901028569s" podCreationTimestamp="2026-01-09 14:04:44 +0000 UTC" firstStartedPulling="2026-01-09 14:04:45.827750025 +0000 UTC m=+2065.375589475" lastFinishedPulling="2026-01-09 14:04:46.456745876 +0000 UTC m=+2066.004585316" observedRunningTime="2026-01-09 14:04:46.881912905 +0000 UTC m=+2066.429752355" watchObservedRunningTime="2026-01-09 14:04:46.901028569 +0000 UTC m=+2066.448868019" Jan 09 14:04:51 crc kubenswrapper[4919]: I0109 14:04:51.247620 4919 patch_prober.go:28] interesting pod/machine-config-daemon-9m5lv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 14:04:51 crc kubenswrapper[4919]: 
I0109 14:04:51.248201 4919 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 14:04:54 crc kubenswrapper[4919]: I0109 14:04:54.935915 4919 generic.go:334] "Generic (PLEG): container finished" podID="1e8137a4-0169-4f73-b616-6a0554aa426f" containerID="7fc1508bc7ce6da97e3d10a023784231009c5645c6a2f97edefe50b28196b72d" exitCode=0 Jan 09 14:04:54 crc kubenswrapper[4919]: I0109 14:04:54.935998 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-crhq6" event={"ID":"1e8137a4-0169-4f73-b616-6a0554aa426f","Type":"ContainerDied","Data":"7fc1508bc7ce6da97e3d10a023784231009c5645c6a2f97edefe50b28196b72d"} Jan 09 14:04:56 crc kubenswrapper[4919]: I0109 14:04:56.394235 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-crhq6" Jan 09 14:04:56 crc kubenswrapper[4919]: I0109 14:04:56.478454 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1e8137a4-0169-4f73-b616-6a0554aa426f-ssh-key-openstack-edpm-ipam\") pod \"1e8137a4-0169-4f73-b616-6a0554aa426f\" (UID: \"1e8137a4-0169-4f73-b616-6a0554aa426f\") " Jan 09 14:04:56 crc kubenswrapper[4919]: I0109 14:04:56.478916 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e8137a4-0169-4f73-b616-6a0554aa426f-inventory\") pod \"1e8137a4-0169-4f73-b616-6a0554aa426f\" (UID: \"1e8137a4-0169-4f73-b616-6a0554aa426f\") " Jan 09 14:04:56 crc kubenswrapper[4919]: I0109 14:04:56.479042 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sw8wd\" (UniqueName: \"kubernetes.io/projected/1e8137a4-0169-4f73-b616-6a0554aa426f-kube-api-access-sw8wd\") pod \"1e8137a4-0169-4f73-b616-6a0554aa426f\" (UID: \"1e8137a4-0169-4f73-b616-6a0554aa426f\") " Jan 09 14:04:56 crc kubenswrapper[4919]: I0109 14:04:56.492750 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e8137a4-0169-4f73-b616-6a0554aa426f-kube-api-access-sw8wd" (OuterVolumeSpecName: "kube-api-access-sw8wd") pod "1e8137a4-0169-4f73-b616-6a0554aa426f" (UID: "1e8137a4-0169-4f73-b616-6a0554aa426f"). InnerVolumeSpecName "kube-api-access-sw8wd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 14:04:56 crc kubenswrapper[4919]: I0109 14:04:56.509449 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e8137a4-0169-4f73-b616-6a0554aa426f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1e8137a4-0169-4f73-b616-6a0554aa426f" (UID: "1e8137a4-0169-4f73-b616-6a0554aa426f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 14:04:56 crc kubenswrapper[4919]: I0109 14:04:56.516926 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e8137a4-0169-4f73-b616-6a0554aa426f-inventory" (OuterVolumeSpecName: "inventory") pod "1e8137a4-0169-4f73-b616-6a0554aa426f" (UID: "1e8137a4-0169-4f73-b616-6a0554aa426f"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 14:04:56 crc kubenswrapper[4919]: I0109 14:04:56.582107 4919 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1e8137a4-0169-4f73-b616-6a0554aa426f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 09 14:04:56 crc kubenswrapper[4919]: I0109 14:04:56.582144 4919 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e8137a4-0169-4f73-b616-6a0554aa426f-inventory\") on node \"crc\" DevicePath \"\"" Jan 09 14:04:56 crc kubenswrapper[4919]: I0109 14:04:56.582153 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sw8wd\" (UniqueName: \"kubernetes.io/projected/1e8137a4-0169-4f73-b616-6a0554aa426f-kube-api-access-sw8wd\") on node \"crc\" DevicePath \"\"" Jan 09 14:04:56 crc kubenswrapper[4919]: I0109 14:04:56.953660 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-crhq6" event={"ID":"1e8137a4-0169-4f73-b616-6a0554aa426f","Type":"ContainerDied","Data":"cb1f198d36660d90d38cf980ba2c921b5fbc52934c33b4b9ac461ed5a6350031"} Jan 09 14:04:56 crc kubenswrapper[4919]: I0109 14:04:56.953706 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb1f198d36660d90d38cf980ba2c921b5fbc52934c33b4b9ac461ed5a6350031" Jan 09 14:04:56 crc kubenswrapper[4919]: I0109 14:04:56.953710 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-crhq6" Jan 09 14:04:57 crc kubenswrapper[4919]: I0109 14:04:57.027253 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jvfp8"] Jan 09 14:04:57 crc kubenswrapper[4919]: E0109 14:04:57.027963 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e8137a4-0169-4f73-b616-6a0554aa426f" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 09 14:04:57 crc kubenswrapper[4919]: I0109 14:04:57.027988 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e8137a4-0169-4f73-b616-6a0554aa426f" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 09 14:04:57 crc kubenswrapper[4919]: I0109 14:04:57.028279 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e8137a4-0169-4f73-b616-6a0554aa426f" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 09 14:04:57 crc kubenswrapper[4919]: I0109 14:04:57.028944 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jvfp8" Jan 09 14:04:57 crc kubenswrapper[4919]: I0109 14:04:57.038904 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-69fb8" Jan 09 14:04:57 crc kubenswrapper[4919]: I0109 14:04:57.040354 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 14:04:57 crc kubenswrapper[4919]: I0109 14:04:57.040553 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 09 14:04:57 crc kubenswrapper[4919]: I0109 14:04:57.040765 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 09 14:04:57 crc kubenswrapper[4919]: I0109 14:04:57.045611 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jvfp8"] Jan 09 14:04:57 crc kubenswrapper[4919]: I0109 14:04:57.093151 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wdwz\" (UniqueName: \"kubernetes.io/projected/781cfeb4-857a-490b-a97e-02bcadab1886-kube-api-access-5wdwz\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jvfp8\" (UID: \"781cfeb4-857a-490b-a97e-02bcadab1886\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jvfp8" Jan 09 14:04:57 crc kubenswrapper[4919]: I0109 14:04:57.093517 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/781cfeb4-857a-490b-a97e-02bcadab1886-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jvfp8\" (UID: \"781cfeb4-857a-490b-a97e-02bcadab1886\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jvfp8" Jan 09 14:04:57 crc kubenswrapper[4919]: I0109 14:04:57.093611 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/781cfeb4-857a-490b-a97e-02bcadab1886-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jvfp8\" (UID: \"781cfeb4-857a-490b-a97e-02bcadab1886\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jvfp8" Jan 09 14:04:57 crc kubenswrapper[4919]: I0109 14:04:57.195096 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wdwz\" (UniqueName: \"kubernetes.io/projected/781cfeb4-857a-490b-a97e-02bcadab1886-kube-api-access-5wdwz\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jvfp8\" (UID: \"781cfeb4-857a-490b-a97e-02bcadab1886\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jvfp8" Jan 09 14:04:57 crc kubenswrapper[4919]: I0109 14:04:57.195254 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/781cfeb4-857a-490b-a97e-02bcadab1886-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jvfp8\" (UID: \"781cfeb4-857a-490b-a97e-02bcadab1886\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jvfp8" Jan 09 14:04:57 crc kubenswrapper[4919]: I0109 14:04:57.195293 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/781cfeb4-857a-490b-a97e-02bcadab1886-ssh-key-openstack-edpm-ipam\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-jvfp8\" (UID: \"781cfeb4-857a-490b-a97e-02bcadab1886\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jvfp8" Jan 09 14:04:57 crc kubenswrapper[4919]: I0109 14:04:57.201111 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/781cfeb4-857a-490b-a97e-02bcadab1886-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jvfp8\" (UID: \"781cfeb4-857a-490b-a97e-02bcadab1886\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jvfp8" Jan 09 14:04:57 crc kubenswrapper[4919]: I0109 14:04:57.203193 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/781cfeb4-857a-490b-a97e-02bcadab1886-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jvfp8\" (UID: \"781cfeb4-857a-490b-a97e-02bcadab1886\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jvfp8" Jan 09 14:04:57 crc kubenswrapper[4919]: I0109 14:04:57.210719 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wdwz\" (UniqueName: \"kubernetes.io/projected/781cfeb4-857a-490b-a97e-02bcadab1886-kube-api-access-5wdwz\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jvfp8\" (UID: \"781cfeb4-857a-490b-a97e-02bcadab1886\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jvfp8" Jan 09 14:04:57 crc kubenswrapper[4919]: I0109 14:04:57.359104 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jvfp8" Jan 09 14:04:57 crc kubenswrapper[4919]: I0109 14:04:57.871081 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jvfp8"] Jan 09 14:04:57 crc kubenswrapper[4919]: I0109 14:04:57.964284 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jvfp8" event={"ID":"781cfeb4-857a-490b-a97e-02bcadab1886","Type":"ContainerStarted","Data":"8dbbc78de5f320b709ec850865978c76f66b80bc54a2b7f164d7576d39870a47"} Jan 09 14:04:58 crc kubenswrapper[4919]: I0109 14:04:58.974517 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jvfp8" event={"ID":"781cfeb4-857a-490b-a97e-02bcadab1886","Type":"ContainerStarted","Data":"7660de884810377aff240a28de32aefc0ecfc84598f30f130f9e3c62cfc84165"} Jan 09 14:04:58 crc kubenswrapper[4919]: I0109 14:04:58.992305 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jvfp8" podStartSLOduration=1.18662475 podStartE2EDuration="1.992279707s" podCreationTimestamp="2026-01-09 14:04:57 +0000 UTC" firstStartedPulling="2026-01-09 14:04:57.877816091 +0000 UTC m=+2077.425655541" lastFinishedPulling="2026-01-09 14:04:58.683471048 +0000 UTC m=+2078.231310498" observedRunningTime="2026-01-09 14:04:58.98879613 +0000 UTC m=+2078.536635600" watchObservedRunningTime="2026-01-09 14:04:58.992279707 +0000 UTC m=+2078.540119157" Jan 09 14:05:09 crc kubenswrapper[4919]: I0109 14:05:09.058915 4919 generic.go:334] "Generic (PLEG): container finished" podID="781cfeb4-857a-490b-a97e-02bcadab1886" containerID="7660de884810377aff240a28de32aefc0ecfc84598f30f130f9e3c62cfc84165" exitCode=0 Jan 09 14:05:09 crc kubenswrapper[4919]: I0109 14:05:09.058983 4919 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jvfp8" event={"ID":"781cfeb4-857a-490b-a97e-02bcadab1886","Type":"ContainerDied","Data":"7660de884810377aff240a28de32aefc0ecfc84598f30f130f9e3c62cfc84165"} Jan 09 14:05:10 crc kubenswrapper[4919]: I0109 14:05:10.470200 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jvfp8" Jan 09 14:05:10 crc kubenswrapper[4919]: I0109 14:05:10.590805 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wdwz\" (UniqueName: \"kubernetes.io/projected/781cfeb4-857a-490b-a97e-02bcadab1886-kube-api-access-5wdwz\") pod \"781cfeb4-857a-490b-a97e-02bcadab1886\" (UID: \"781cfeb4-857a-490b-a97e-02bcadab1886\") " Jan 09 14:05:10 crc kubenswrapper[4919]: I0109 14:05:10.591059 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/781cfeb4-857a-490b-a97e-02bcadab1886-inventory\") pod \"781cfeb4-857a-490b-a97e-02bcadab1886\" (UID: \"781cfeb4-857a-490b-a97e-02bcadab1886\") " Jan 09 14:05:10 crc kubenswrapper[4919]: I0109 14:05:10.591140 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/781cfeb4-857a-490b-a97e-02bcadab1886-ssh-key-openstack-edpm-ipam\") pod \"781cfeb4-857a-490b-a97e-02bcadab1886\" (UID: \"781cfeb4-857a-490b-a97e-02bcadab1886\") " Jan 09 14:05:10 crc kubenswrapper[4919]: I0109 14:05:10.596471 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/781cfeb4-857a-490b-a97e-02bcadab1886-kube-api-access-5wdwz" (OuterVolumeSpecName: "kube-api-access-5wdwz") pod "781cfeb4-857a-490b-a97e-02bcadab1886" (UID: "781cfeb4-857a-490b-a97e-02bcadab1886"). InnerVolumeSpecName "kube-api-access-5wdwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 14:05:10 crc kubenswrapper[4919]: I0109 14:05:10.618043 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/781cfeb4-857a-490b-a97e-02bcadab1886-inventory" (OuterVolumeSpecName: "inventory") pod "781cfeb4-857a-490b-a97e-02bcadab1886" (UID: "781cfeb4-857a-490b-a97e-02bcadab1886"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 14:05:10 crc kubenswrapper[4919]: I0109 14:05:10.624447 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/781cfeb4-857a-490b-a97e-02bcadab1886-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "781cfeb4-857a-490b-a97e-02bcadab1886" (UID: "781cfeb4-857a-490b-a97e-02bcadab1886"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 14:05:10 crc kubenswrapper[4919]: I0109 14:05:10.692983 4919 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/781cfeb4-857a-490b-a97e-02bcadab1886-inventory\") on node \"crc\" DevicePath \"\"" Jan 09 14:05:10 crc kubenswrapper[4919]: I0109 14:05:10.693019 4919 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/781cfeb4-857a-490b-a97e-02bcadab1886-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 09 14:05:10 crc kubenswrapper[4919]: I0109 14:05:10.693032 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wdwz\" (UniqueName: \"kubernetes.io/projected/781cfeb4-857a-490b-a97e-02bcadab1886-kube-api-access-5wdwz\") on node \"crc\" DevicePath \"\"" Jan 09 14:05:11 crc kubenswrapper[4919]: I0109 14:05:11.122185 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jvfp8" event={"ID":"781cfeb4-857a-490b-a97e-02bcadab1886","Type":"ContainerDied","Data":"8dbbc78de5f320b709ec850865978c76f66b80bc54a2b7f164d7576d39870a47"} Jan 09 14:05:11 crc kubenswrapper[4919]: I0109 14:05:11.122259 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8dbbc78de5f320b709ec850865978c76f66b80bc54a2b7f164d7576d39870a47" Jan 09 14:05:11 crc kubenswrapper[4919]: I0109 14:05:11.122424 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jvfp8" Jan 09 14:05:11 crc kubenswrapper[4919]: I0109 14:05:11.248827 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt"] Jan 09 14:05:11 crc kubenswrapper[4919]: E0109 14:05:11.249346 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="781cfeb4-857a-490b-a97e-02bcadab1886" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 09 14:05:11 crc kubenswrapper[4919]: I0109 14:05:11.249369 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="781cfeb4-857a-490b-a97e-02bcadab1886" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 09 14:05:11 crc kubenswrapper[4919]: I0109 14:05:11.249599 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="781cfeb4-857a-490b-a97e-02bcadab1886" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 09 14:05:11 crc kubenswrapper[4919]: I0109 14:05:11.250443 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt" Jan 09 14:05:11 crc kubenswrapper[4919]: I0109 14:05:11.252352 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 09 14:05:11 crc kubenswrapper[4919]: I0109 14:05:11.252589 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-69fb8" Jan 09 14:05:11 crc kubenswrapper[4919]: I0109 14:05:11.253016 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 09 14:05:11 crc kubenswrapper[4919]: I0109 14:05:11.253091 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 09 14:05:11 crc kubenswrapper[4919]: I0109 14:05:11.253104 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 09 14:05:11 crc kubenswrapper[4919]: I0109 14:05:11.253104 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 09 14:05:11 crc kubenswrapper[4919]: I0109 14:05:11.253427 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 09 14:05:11 crc kubenswrapper[4919]: I0109 14:05:11.254796 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 14:05:11 crc kubenswrapper[4919]: I0109 14:05:11.273253 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt"] Jan 09 14:05:11 crc kubenswrapper[4919]: I0109 14:05:11.409308 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7e5dde7-0e67-4c31-83c6-9946c5b23755-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt\" (UID: \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt" Jan 09 14:05:11 crc kubenswrapper[4919]: I0109 14:05:11.409646 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7e5dde7-0e67-4c31-83c6-9946c5b23755-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt\" (UID: \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt" Jan 09 14:05:11 crc kubenswrapper[4919]: I0109 14:05:11.409683 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f7e5dde7-0e67-4c31-83c6-9946c5b23755-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt\" (UID: \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt" Jan 09 14:05:11 crc kubenswrapper[4919]: I0109 14:05:11.409711 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f7e5dde7-0e67-4c31-83c6-9946c5b23755-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt\" (UID: \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt" Jan 09 14:05:11 crc kubenswrapper[4919]: I0109 14:05:11.409749 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7e5dde7-0e67-4c31-83c6-9946c5b23755-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt\" (UID: \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt" Jan 09 14:05:11 crc kubenswrapper[4919]: I0109 14:05:11.409846 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f7e5dde7-0e67-4c31-83c6-9946c5b23755-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt\" (UID: \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt" Jan 09 14:05:11 crc kubenswrapper[4919]: I0109 14:05:11.409957 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f7e5dde7-0e67-4c31-83c6-9946c5b23755-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt\" (UID: \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt" Jan 09 14:05:11 crc kubenswrapper[4919]: I0109 14:05:11.409984 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f7e5dde7-0e67-4c31-83c6-9946c5b23755-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt\" (UID: \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt" Jan 09 14:05:11 crc kubenswrapper[4919]: I0109 14:05:11.410054 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f7e5dde7-0e67-4c31-83c6-9946c5b23755-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt\" (UID: \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt" Jan 09 14:05:11 crc kubenswrapper[4919]: I0109 14:05:11.410094 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpd6l\" (UniqueName: \"kubernetes.io/projected/f7e5dde7-0e67-4c31-83c6-9946c5b23755-kube-api-access-tpd6l\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt\" (UID: \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt" Jan 09 14:05:11 crc kubenswrapper[4919]: I0109 14:05:11.410113 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7e5dde7-0e67-4c31-83c6-9946c5b23755-telemetry-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt\" (UID: \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt" Jan 09 14:05:11 crc kubenswrapper[4919]: I0109 14:05:11.410161 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7e5dde7-0e67-4c31-83c6-9946c5b23755-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt\" (UID: \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt" Jan 09 14:05:11 crc kubenswrapper[4919]: I0109 14:05:11.410312 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7e5dde7-0e67-4c31-83c6-9946c5b23755-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt\" (UID: \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt" Jan 09 14:05:11 crc kubenswrapper[4919]: I0109 14:05:11.410422 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f7e5dde7-0e67-4c31-83c6-9946c5b23755-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt\" (UID: \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt" Jan 09 14:05:11 crc kubenswrapper[4919]: I0109 14:05:11.512261 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f7e5dde7-0e67-4c31-83c6-9946c5b23755-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt\" (UID: \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt" Jan 09 14:05:11 crc kubenswrapper[4919]: I0109 14:05:11.512339 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f7e5dde7-0e67-4c31-83c6-9946c5b23755-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt\" (UID: \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt" Jan 09 14:05:11 crc kubenswrapper[4919]: I0109 14:05:11.512364 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f7e5dde7-0e67-4c31-83c6-9946c5b23755-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt\" (UID: \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt" Jan 09 14:05:11 crc kubenswrapper[4919]: I0109 14:05:11.512404 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f7e5dde7-0e67-4c31-83c6-9946c5b23755-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt\" (UID: \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt" Jan 09 
14:05:11 crc kubenswrapper[4919]: I0109 14:05:11.512433 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpd6l\" (UniqueName: \"kubernetes.io/projected/f7e5dde7-0e67-4c31-83c6-9946c5b23755-kube-api-access-tpd6l\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt\" (UID: \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt" Jan 09 14:05:11 crc kubenswrapper[4919]: I0109 14:05:11.512455 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7e5dde7-0e67-4c31-83c6-9946c5b23755-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt\" (UID: \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt" Jan 09 14:05:11 crc kubenswrapper[4919]: I0109 14:05:11.512490 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7e5dde7-0e67-4c31-83c6-9946c5b23755-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt\" (UID: \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt" Jan 09 14:05:11 crc kubenswrapper[4919]: I0109 14:05:11.512516 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7e5dde7-0e67-4c31-83c6-9946c5b23755-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt\" (UID: \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt" Jan 09 14:05:11 crc kubenswrapper[4919]: I0109 14:05:11.512556 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f7e5dde7-0e67-4c31-83c6-9946c5b23755-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt\" (UID: \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt" Jan 09 14:05:11 crc kubenswrapper[4919]: I0109 14:05:11.512619 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7e5dde7-0e67-4c31-83c6-9946c5b23755-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt\" (UID: \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt" Jan 09 14:05:11 crc kubenswrapper[4919]: I0109 14:05:11.512645 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7e5dde7-0e67-4c31-83c6-9946c5b23755-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt\" (UID: \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt" Jan 09 14:05:11 crc kubenswrapper[4919]: I0109 14:05:11.512674 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/f7e5dde7-0e67-4c31-83c6-9946c5b23755-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt\" (UID: \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt" Jan 09 14:05:11 crc kubenswrapper[4919]: I0109 14:05:11.512701 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7e5dde7-0e67-4c31-83c6-9946c5b23755-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt\" (UID: \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt" Jan 09 14:05:11 crc kubenswrapper[4919]: I0109 14:05:11.512734 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7e5dde7-0e67-4c31-83c6-9946c5b23755-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt\" (UID: \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt" Jan 09 14:05:11 crc kubenswrapper[4919]: I0109 14:05:11.519738 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f7e5dde7-0e67-4c31-83c6-9946c5b23755-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt\" (UID: \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt" Jan 09 14:05:11 crc kubenswrapper[4919]: I0109 14:05:11.519927 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f7e5dde7-0e67-4c31-83c6-9946c5b23755-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt\" (UID: \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt" Jan 09 14:05:11 crc kubenswrapper[4919]: I0109 14:05:11.521445 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f7e5dde7-0e67-4c31-83c6-9946c5b23755-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt\" (UID: \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt" Jan 09 14:05:11 crc kubenswrapper[4919]: I0109 14:05:11.521942 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7e5dde7-0e67-4c31-83c6-9946c5b23755-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt\" (UID: \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt" Jan 09 14:05:11 crc kubenswrapper[4919]: I0109 14:05:11.522646 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7e5dde7-0e67-4c31-83c6-9946c5b23755-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt\" (UID: \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt" Jan 09 14:05:11 crc kubenswrapper[4919]: I0109 14:05:11.522708 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f7e5dde7-0e67-4c31-83c6-9946c5b23755-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt\" (UID: \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt" Jan 09 14:05:11 crc kubenswrapper[4919]: I0109 14:05:11.522714 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7e5dde7-0e67-4c31-83c6-9946c5b23755-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt\" (UID: \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt" Jan 09 14:05:11 crc kubenswrapper[4919]: I0109 14:05:11.523424 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7e5dde7-0e67-4c31-83c6-9946c5b23755-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt\" (UID: \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt" Jan 09 14:05:11 crc kubenswrapper[4919]: I0109 14:05:11.523470 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7e5dde7-0e67-4c31-83c6-9946c5b23755-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt\" (UID: \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt" Jan 09 14:05:11 crc kubenswrapper[4919]: I0109 14:05:11.523846 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7e5dde7-0e67-4c31-83c6-9946c5b23755-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt\" (UID: \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt" Jan 09 14:05:11 crc kubenswrapper[4919]: I0109 14:05:11.528908 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f7e5dde7-0e67-4c31-83c6-9946c5b23755-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt\" (UID: \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt" Jan 09 14:05:11 crc kubenswrapper[4919]: I0109 14:05:11.529133 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7e5dde7-0e67-4c31-83c6-9946c5b23755-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt\" (UID: \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt" Jan 09 14:05:11 crc kubenswrapper[4919]: I0109 14:05:11.531948 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/f7e5dde7-0e67-4c31-83c6-9946c5b23755-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt\" (UID: \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt" Jan 09 14:05:11 crc kubenswrapper[4919]: I0109 14:05:11.547376 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpd6l\" (UniqueName: \"kubernetes.io/projected/f7e5dde7-0e67-4c31-83c6-9946c5b23755-kube-api-access-tpd6l\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt\" (UID: \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt" Jan 09 14:05:11 crc kubenswrapper[4919]: I0109 14:05:11.570087 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt" Jan 09 14:05:12 crc kubenswrapper[4919]: I0109 14:05:12.114436 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt"] Jan 09 14:05:12 crc kubenswrapper[4919]: I0109 14:05:12.133702 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt" event={"ID":"f7e5dde7-0e67-4c31-83c6-9946c5b23755","Type":"ContainerStarted","Data":"a09d4c1d1f0ee2fffa744dfc6a376ce883b797bbd9b66acea28b552f13c07621"} Jan 09 14:05:14 crc kubenswrapper[4919]: I0109 14:05:14.150930 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt" event={"ID":"f7e5dde7-0e67-4c31-83c6-9946c5b23755","Type":"ContainerStarted","Data":"24b320ffedaf17c7d4e0e3acc8bff03354ffc5299356058c384db5b0c4ec10a1"} Jan 09 14:05:14 crc kubenswrapper[4919]: I0109 14:05:14.177349 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt" podStartSLOduration=1.7714410809999999 podStartE2EDuration="3.177327654s" podCreationTimestamp="2026-01-09 14:05:11 +0000 UTC" firstStartedPulling="2026-01-09 14:05:12.115522532 +0000 UTC m=+2091.663361982" lastFinishedPulling="2026-01-09 14:05:13.521409105 +0000 UTC m=+2093.069248555" observedRunningTime="2026-01-09 14:05:14.17274517 +0000 UTC m=+2093.720584620" watchObservedRunningTime="2026-01-09 14:05:14.177327654 +0000 UTC m=+2093.725167104" Jan 09 14:05:18 crc kubenswrapper[4919]: I0109 14:05:18.884345 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hlt6t"] Jan 09 14:05:18 crc kubenswrapper[4919]: I0109 14:05:18.888648 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hlt6t" Jan 09 14:05:18 crc kubenswrapper[4919]: I0109 14:05:18.897318 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hlt6t"] Jan 09 14:05:18 crc kubenswrapper[4919]: I0109 14:05:18.990303 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/156afc75-9ca2-4713-80c2-231846140164-catalog-content\") pod \"redhat-operators-hlt6t\" (UID: \"156afc75-9ca2-4713-80c2-231846140164\") " pod="openshift-marketplace/redhat-operators-hlt6t" Jan 09 14:05:18 crc kubenswrapper[4919]: I0109 14:05:18.990386 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59zzb\" (UniqueName: \"kubernetes.io/projected/156afc75-9ca2-4713-80c2-231846140164-kube-api-access-59zzb\") pod \"redhat-operators-hlt6t\" (UID: \"156afc75-9ca2-4713-80c2-231846140164\") " pod="openshift-marketplace/redhat-operators-hlt6t" Jan 09 14:05:18 crc kubenswrapper[4919]: I0109 14:05:18.990409 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/156afc75-9ca2-4713-80c2-231846140164-utilities\") pod \"redhat-operators-hlt6t\" (UID: \"156afc75-9ca2-4713-80c2-231846140164\") " pod="openshift-marketplace/redhat-operators-hlt6t" Jan 09 14:05:19 crc kubenswrapper[4919]: I0109 14:05:19.093804 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/156afc75-9ca2-4713-80c2-231846140164-catalog-content\") pod \"redhat-operators-hlt6t\" (UID: \"156afc75-9ca2-4713-80c2-231846140164\") " pod="openshift-marketplace/redhat-operators-hlt6t" Jan 09 14:05:19 crc kubenswrapper[4919]: I0109 14:05:19.093891 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59zzb\" (UniqueName: \"kubernetes.io/projected/156afc75-9ca2-4713-80c2-231846140164-kube-api-access-59zzb\") pod \"redhat-operators-hlt6t\" (UID: \"156afc75-9ca2-4713-80c2-231846140164\") " pod="openshift-marketplace/redhat-operators-hlt6t" Jan 09 14:05:19 crc kubenswrapper[4919]: I0109 14:05:19.093922 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/156afc75-9ca2-4713-80c2-231846140164-utilities\") pod \"redhat-operators-hlt6t\" (UID: \"156afc75-9ca2-4713-80c2-231846140164\") " pod="openshift-marketplace/redhat-operators-hlt6t" Jan 09 14:05:19 crc kubenswrapper[4919]: I0109 14:05:19.094404 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/156afc75-9ca2-4713-80c2-231846140164-catalog-content\") pod \"redhat-operators-hlt6t\" (UID: \"156afc75-9ca2-4713-80c2-231846140164\") " pod="openshift-marketplace/redhat-operators-hlt6t" Jan 09 14:05:19 crc kubenswrapper[4919]: I0109 14:05:19.094427 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/156afc75-9ca2-4713-80c2-231846140164-utilities\") pod \"redhat-operators-hlt6t\" (UID: \"156afc75-9ca2-4713-80c2-231846140164\") " pod="openshift-marketplace/redhat-operators-hlt6t" Jan 09 14:05:19 crc kubenswrapper[4919]: I0109 14:05:19.118273 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-59zzb\" (UniqueName: \"kubernetes.io/projected/156afc75-9ca2-4713-80c2-231846140164-kube-api-access-59zzb\") pod \"redhat-operators-hlt6t\" (UID: \"156afc75-9ca2-4713-80c2-231846140164\") " pod="openshift-marketplace/redhat-operators-hlt6t" Jan 09 14:05:19 crc kubenswrapper[4919]: I0109 14:05:19.214173 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hlt6t" Jan 09 14:05:19 crc kubenswrapper[4919]: I0109 14:05:19.706904 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hlt6t"] Jan 09 14:05:20 crc kubenswrapper[4919]: I0109 14:05:20.203821 4919 generic.go:334] "Generic (PLEG): container finished" podID="156afc75-9ca2-4713-80c2-231846140164" containerID="cf1b3213db5cc9b05470a2213877394d2c1d8210be2a18dd5545d7e4040278f9" exitCode=0 Jan 09 14:05:20 crc kubenswrapper[4919]: I0109 14:05:20.204861 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hlt6t" event={"ID":"156afc75-9ca2-4713-80c2-231846140164","Type":"ContainerDied","Data":"cf1b3213db5cc9b05470a2213877394d2c1d8210be2a18dd5545d7e4040278f9"} Jan 09 14:05:20 crc kubenswrapper[4919]: I0109 14:05:20.204969 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hlt6t" event={"ID":"156afc75-9ca2-4713-80c2-231846140164","Type":"ContainerStarted","Data":"4790147fe3c7e94e9fd84261a84de8375d15ee418458e45cfc876528e46c02f3"} Jan 09 14:05:21 crc kubenswrapper[4919]: I0109 14:05:21.246722 4919 patch_prober.go:28] interesting pod/machine-config-daemon-9m5lv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 14:05:21 crc kubenswrapper[4919]: I0109 14:05:21.247293 4919 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 14:05:21 crc kubenswrapper[4919]: I0109 14:05:21.247481 4919 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" Jan 09 14:05:21 crc kubenswrapper[4919]: I0109 14:05:21.248324 4919 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"08b1c11299df27ad98a3ca953b44e2744c53ddd036341f81b00480965189197d"} pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 09 14:05:21 crc kubenswrapper[4919]: I0109 14:05:21.248383 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" containerID="cri-o://08b1c11299df27ad98a3ca953b44e2744c53ddd036341f81b00480965189197d" gracePeriod=600 Jan 09 14:05:22 crc kubenswrapper[4919]: I0109 14:05:22.222686 4919 generic.go:334] "Generic (PLEG): container finished" podID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerID="08b1c11299df27ad98a3ca953b44e2744c53ddd036341f81b00480965189197d" exitCode=0 Jan 
Jan 09 14:05:22 crc kubenswrapper[4919]: I0109 14:05:22.222756 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" event={"ID":"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c","Type":"ContainerDied","Data":"08b1c11299df27ad98a3ca953b44e2744c53ddd036341f81b00480965189197d"}
Jan 09 14:05:22 crc kubenswrapper[4919]: I0109 14:05:22.223467 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" event={"ID":"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c","Type":"ContainerStarted","Data":"cb9cc8141ac739eabcc1054e82a3a41d2dac68fb980f94b01a302b9c1cfa7794"}
Jan 09 14:05:22 crc kubenswrapper[4919]: I0109 14:05:22.223494 4919 scope.go:117] "RemoveContainer" containerID="97d6f4470740f89d0ab801450db4dd244e16b55cda5a62a0b5edd2cd276ca373"
Jan 09 14:05:23 crc kubenswrapper[4919]: I0109 14:05:23.233959 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hlt6t" event={"ID":"156afc75-9ca2-4713-80c2-231846140164","Type":"ContainerStarted","Data":"340b32baa9809d061e98d4ab3a210f57709ab9aa016b0e6e24a3df68aaade969"}
Jan 09 14:05:25 crc kubenswrapper[4919]: I0109 14:05:25.256584 4919 generic.go:334] "Generic (PLEG): container finished" podID="156afc75-9ca2-4713-80c2-231846140164" containerID="340b32baa9809d061e98d4ab3a210f57709ab9aa016b0e6e24a3df68aaade969" exitCode=0
Jan 09 14:05:25 crc kubenswrapper[4919]: I0109 14:05:25.256677 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hlt6t" event={"ID":"156afc75-9ca2-4713-80c2-231846140164","Type":"ContainerDied","Data":"340b32baa9809d061e98d4ab3a210f57709ab9aa016b0e6e24a3df68aaade969"}
Jan 09 14:05:27 crc kubenswrapper[4919]: I0109 14:05:27.278458 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hlt6t" event={"ID":"156afc75-9ca2-4713-80c2-231846140164","Type":"ContainerStarted","Data":"3bdae803a6b5f3aaaccb1fe1e3cf25da8abb061b991e984d69e5ce7939a8f25a"}
Jan 09 14:05:27 crc kubenswrapper[4919]: I0109 14:05:27.300707 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hlt6t" podStartSLOduration=3.442106643 podStartE2EDuration="9.300680164s" podCreationTimestamp="2026-01-09 14:05:18 +0000 UTC" firstStartedPulling="2026-01-09 14:05:20.207067773 +0000 UTC m=+2099.754907223" lastFinishedPulling="2026-01-09 14:05:26.065641294 +0000 UTC m=+2105.613480744" observedRunningTime="2026-01-09 14:05:27.297479014 +0000 UTC m=+2106.845318464" watchObservedRunningTime="2026-01-09 14:05:27.300680164 +0000 UTC m=+2106.848519614"
Jan 09 14:05:29 crc kubenswrapper[4919]: I0109 14:05:29.214313 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hlt6t"
Jan 09 14:05:29 crc kubenswrapper[4919]: I0109 14:05:29.214679 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hlt6t"
Jan 09 14:05:30 crc kubenswrapper[4919]: I0109 14:05:30.257999 4919 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hlt6t" podUID="156afc75-9ca2-4713-80c2-231846140164" containerName="registry-server" probeResult="failure" output=<
Jan 09 14:05:30 crc kubenswrapper[4919]: timeout: failed to connect service ":50051" within 1s
Jan 09 14:05:30 crc kubenswrapper[4919]: >
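The startup-probe failure above ("timeout: failed to connect service \":50051\" within 1s") amounts to a connect check against the registry-server port with a one-second deadline. Roughly equivalent, as a sketch (address and timeout taken from the log; the real probe may speak gRPC after connecting):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// One-second connect deadline, as in the probe output.
	conn, err := net.DialTimeout("tcp", "127.0.0.1:50051", time.Second)
	if err != nil {
		fmt.Printf("timeout: failed to connect service %q within 1s (%v)\n", ":50051", err)
		return
	}
	conn.Close()
	fmt.Println("startup probe passed")
}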
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hlt6t" Jan 09 14:05:39 crc kubenswrapper[4919]: I0109 14:05:39.329631 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hlt6t" Jan 09 14:05:39 crc kubenswrapper[4919]: I0109 14:05:39.549675 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hlt6t"] Jan 09 14:05:40 crc kubenswrapper[4919]: I0109 14:05:40.401712 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hlt6t" podUID="156afc75-9ca2-4713-80c2-231846140164" containerName="registry-server" containerID="cri-o://3bdae803a6b5f3aaaccb1fe1e3cf25da8abb061b991e984d69e5ce7939a8f25a" gracePeriod=2 Jan 09 14:05:40 crc kubenswrapper[4919]: I0109 14:05:40.933004 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hlt6t" Jan 09 14:05:41 crc kubenswrapper[4919]: I0109 14:05:41.022927 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/156afc75-9ca2-4713-80c2-231846140164-utilities\") pod \"156afc75-9ca2-4713-80c2-231846140164\" (UID: \"156afc75-9ca2-4713-80c2-231846140164\") " Jan 09 14:05:41 crc kubenswrapper[4919]: I0109 14:05:41.023025 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59zzb\" (UniqueName: \"kubernetes.io/projected/156afc75-9ca2-4713-80c2-231846140164-kube-api-access-59zzb\") pod \"156afc75-9ca2-4713-80c2-231846140164\" (UID: \"156afc75-9ca2-4713-80c2-231846140164\") " Jan 09 14:05:41 crc kubenswrapper[4919]: I0109 14:05:41.023059 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/156afc75-9ca2-4713-80c2-231846140164-catalog-content\") pod \"156afc75-9ca2-4713-80c2-231846140164\" (UID: \"156afc75-9ca2-4713-80c2-231846140164\") " Jan 09 14:05:41 crc kubenswrapper[4919]: I0109 14:05:41.024051 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/156afc75-9ca2-4713-80c2-231846140164-utilities" (OuterVolumeSpecName: "utilities") pod "156afc75-9ca2-4713-80c2-231846140164" (UID: "156afc75-9ca2-4713-80c2-231846140164"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 14:05:41 crc kubenswrapper[4919]: I0109 14:05:41.029126 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/156afc75-9ca2-4713-80c2-231846140164-kube-api-access-59zzb" (OuterVolumeSpecName: "kube-api-access-59zzb") pod "156afc75-9ca2-4713-80c2-231846140164" (UID: "156afc75-9ca2-4713-80c2-231846140164"). InnerVolumeSpecName "kube-api-access-59zzb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 14:05:41 crc kubenswrapper[4919]: I0109 14:05:41.125557 4919 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/156afc75-9ca2-4713-80c2-231846140164-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 14:05:41 crc kubenswrapper[4919]: I0109 14:05:41.125592 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59zzb\" (UniqueName: \"kubernetes.io/projected/156afc75-9ca2-4713-80c2-231846140164-kube-api-access-59zzb\") on node \"crc\" DevicePath \"\"" Jan 09 14:05:41 crc kubenswrapper[4919]: I0109 14:05:41.148014 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/156afc75-9ca2-4713-80c2-231846140164-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "156afc75-9ca2-4713-80c2-231846140164" (UID: "156afc75-9ca2-4713-80c2-231846140164"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 14:05:41 crc kubenswrapper[4919]: I0109 14:05:41.227632 4919 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/156afc75-9ca2-4713-80c2-231846140164-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 14:05:41 crc kubenswrapper[4919]: I0109 14:05:41.414879 4919 generic.go:334] "Generic (PLEG): container finished" podID="156afc75-9ca2-4713-80c2-231846140164" containerID="3bdae803a6b5f3aaaccb1fe1e3cf25da8abb061b991e984d69e5ce7939a8f25a" exitCode=0 Jan 09 14:05:41 crc kubenswrapper[4919]: I0109 14:05:41.414931 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hlt6t" event={"ID":"156afc75-9ca2-4713-80c2-231846140164","Type":"ContainerDied","Data":"3bdae803a6b5f3aaaccb1fe1e3cf25da8abb061b991e984d69e5ce7939a8f25a"} Jan 09 14:05:41 crc kubenswrapper[4919]: I0109 14:05:41.414960 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hlt6t" event={"ID":"156afc75-9ca2-4713-80c2-231846140164","Type":"ContainerDied","Data":"4790147fe3c7e94e9fd84261a84de8375d15ee418458e45cfc876528e46c02f3"} Jan 09 14:05:41 crc kubenswrapper[4919]: I0109 14:05:41.414981 4919 scope.go:117] "RemoveContainer" containerID="3bdae803a6b5f3aaaccb1fe1e3cf25da8abb061b991e984d69e5ce7939a8f25a" Jan 09 14:05:41 crc kubenswrapper[4919]: I0109 14:05:41.414995 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hlt6t" Jan 09 14:05:41 crc kubenswrapper[4919]: I0109 14:05:41.438131 4919 scope.go:117] "RemoveContainer" containerID="340b32baa9809d061e98d4ab3a210f57709ab9aa016b0e6e24a3df68aaade969" Jan 09 14:05:41 crc kubenswrapper[4919]: I0109 14:05:41.455370 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hlt6t"] Jan 09 14:05:41 crc kubenswrapper[4919]: I0109 14:05:41.464228 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hlt6t"] Jan 09 14:05:41 crc kubenswrapper[4919]: I0109 14:05:41.466077 4919 scope.go:117] "RemoveContainer" containerID="cf1b3213db5cc9b05470a2213877394d2c1d8210be2a18dd5545d7e4040278f9" Jan 09 14:05:41 crc kubenswrapper[4919]: I0109 14:05:41.516639 4919 scope.go:117] "RemoveContainer" containerID="3bdae803a6b5f3aaaccb1fe1e3cf25da8abb061b991e984d69e5ce7939a8f25a" Jan 09 14:05:41 crc kubenswrapper[4919]: E0109 14:05:41.517165 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bdae803a6b5f3aaaccb1fe1e3cf25da8abb061b991e984d69e5ce7939a8f25a\": container with ID starting with 3bdae803a6b5f3aaaccb1fe1e3cf25da8abb061b991e984d69e5ce7939a8f25a not found: ID does not exist" containerID="3bdae803a6b5f3aaaccb1fe1e3cf25da8abb061b991e984d69e5ce7939a8f25a" Jan 09 14:05:41 crc kubenswrapper[4919]: I0109 14:05:41.517202 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bdae803a6b5f3aaaccb1fe1e3cf25da8abb061b991e984d69e5ce7939a8f25a"} err="failed to get container status \"3bdae803a6b5f3aaaccb1fe1e3cf25da8abb061b991e984d69e5ce7939a8f25a\": rpc error: code = NotFound desc = could not find container \"3bdae803a6b5f3aaaccb1fe1e3cf25da8abb061b991e984d69e5ce7939a8f25a\": container with ID starting with 3bdae803a6b5f3aaaccb1fe1e3cf25da8abb061b991e984d69e5ce7939a8f25a not found: ID does not exist" Jan 09 14:05:41 crc kubenswrapper[4919]: I0109 14:05:41.517235 4919 scope.go:117] "RemoveContainer" containerID="340b32baa9809d061e98d4ab3a210f57709ab9aa016b0e6e24a3df68aaade969" Jan 09 14:05:41 crc kubenswrapper[4919]: E0109 14:05:41.517602 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"340b32baa9809d061e98d4ab3a210f57709ab9aa016b0e6e24a3df68aaade969\": container with ID starting with 340b32baa9809d061e98d4ab3a210f57709ab9aa016b0e6e24a3df68aaade969 not found: ID does not exist" containerID="340b32baa9809d061e98d4ab3a210f57709ab9aa016b0e6e24a3df68aaade969" Jan 09 14:05:41 crc kubenswrapper[4919]: I0109 14:05:41.517644 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"340b32baa9809d061e98d4ab3a210f57709ab9aa016b0e6e24a3df68aaade969"} err="failed to get container status \"340b32baa9809d061e98d4ab3a210f57709ab9aa016b0e6e24a3df68aaade969\": rpc error: code = NotFound desc = could not find container \"340b32baa9809d061e98d4ab3a210f57709ab9aa016b0e6e24a3df68aaade969\": container with ID starting with 340b32baa9809d061e98d4ab3a210f57709ab9aa016b0e6e24a3df68aaade969 not found: ID does not exist" Jan 09 14:05:41 crc kubenswrapper[4919]: I0109 14:05:41.517674 4919 scope.go:117] "RemoveContainer" containerID="cf1b3213db5cc9b05470a2213877394d2c1d8210be2a18dd5545d7e4040278f9" Jan 09 14:05:41 crc kubenswrapper[4919]: E0109 14:05:41.518308 4919 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"cf1b3213db5cc9b05470a2213877394d2c1d8210be2a18dd5545d7e4040278f9\": container with ID starting with cf1b3213db5cc9b05470a2213877394d2c1d8210be2a18dd5545d7e4040278f9 not found: ID does not exist" containerID="cf1b3213db5cc9b05470a2213877394d2c1d8210be2a18dd5545d7e4040278f9" Jan 09 14:05:41 crc kubenswrapper[4919]: I0109 14:05:41.518331 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf1b3213db5cc9b05470a2213877394d2c1d8210be2a18dd5545d7e4040278f9"} err="failed to get container status \"cf1b3213db5cc9b05470a2213877394d2c1d8210be2a18dd5545d7e4040278f9\": rpc error: code = NotFound desc = could not find container \"cf1b3213db5cc9b05470a2213877394d2c1d8210be2a18dd5545d7e4040278f9\": container with ID starting with cf1b3213db5cc9b05470a2213877394d2c1d8210be2a18dd5545d7e4040278f9 not found: ID does not exist" Jan 09 14:05:42 crc kubenswrapper[4919]: I0109 14:05:42.761979 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="156afc75-9ca2-4713-80c2-231846140164" path="/var/lib/kubelet/pods/156afc75-9ca2-4713-80c2-231846140164/volumes" Jan 09 14:05:51 crc kubenswrapper[4919]: I0109 14:05:51.497901 4919 generic.go:334] "Generic (PLEG): container finished" podID="f7e5dde7-0e67-4c31-83c6-9946c5b23755" containerID="24b320ffedaf17c7d4e0e3acc8bff03354ffc5299356058c384db5b0c4ec10a1" exitCode=0 Jan 09 14:05:51 crc kubenswrapper[4919]: I0109 14:05:51.497995 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt" event={"ID":"f7e5dde7-0e67-4c31-83c6-9946c5b23755","Type":"ContainerDied","Data":"24b320ffedaf17c7d4e0e3acc8bff03354ffc5299356058c384db5b0c4ec10a1"} Jan 09 14:05:52 crc kubenswrapper[4919]: I0109 14:05:52.906598 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt" Jan 09 14:05:52 crc kubenswrapper[4919]: I0109 14:05:52.986926 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7e5dde7-0e67-4c31-83c6-9946c5b23755-libvirt-combined-ca-bundle\") pod \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\" (UID: \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\") " Jan 09 14:05:52 crc kubenswrapper[4919]: I0109 14:05:52.987121 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f7e5dde7-0e67-4c31-83c6-9946c5b23755-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\" (UID: \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\") " Jan 09 14:05:52 crc kubenswrapper[4919]: I0109 14:05:52.987183 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7e5dde7-0e67-4c31-83c6-9946c5b23755-nova-combined-ca-bundle\") pod \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\" (UID: \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\") " Jan 09 14:05:52 crc kubenswrapper[4919]: I0109 14:05:52.987274 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f7e5dde7-0e67-4c31-83c6-9946c5b23755-openstack-edpm-ipam-ovn-default-certs-0\") pod \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\" (UID: \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\") " Jan 09 14:05:52 crc kubenswrapper[4919]: I0109 14:05:52.987334 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7e5dde7-0e67-4c31-83c6-9946c5b23755-ovn-combined-ca-bundle\") pod \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\" (UID: \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\") " Jan 09 14:05:52 crc kubenswrapper[4919]: I0109 14:05:52.987381 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7e5dde7-0e67-4c31-83c6-9946c5b23755-bootstrap-combined-ca-bundle\") pod \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\" (UID: \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\") " Jan 09 14:05:52 crc kubenswrapper[4919]: I0109 14:05:52.987420 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpd6l\" (UniqueName: \"kubernetes.io/projected/f7e5dde7-0e67-4c31-83c6-9946c5b23755-kube-api-access-tpd6l\") pod \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\" (UID: \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\") " Jan 09 14:05:52 crc kubenswrapper[4919]: I0109 14:05:52.987465 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7e5dde7-0e67-4c31-83c6-9946c5b23755-telemetry-combined-ca-bundle\") pod \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\" (UID: \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\") " Jan 09 14:05:52 crc kubenswrapper[4919]: I0109 14:05:52.987551 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f7e5dde7-0e67-4c31-83c6-9946c5b23755-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\" (UID: 
\"f7e5dde7-0e67-4c31-83c6-9946c5b23755\") " Jan 09 14:05:52 crc kubenswrapper[4919]: I0109 14:05:52.987606 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f7e5dde7-0e67-4c31-83c6-9946c5b23755-inventory\") pod \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\" (UID: \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\") " Jan 09 14:05:52 crc kubenswrapper[4919]: I0109 14:05:52.987672 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f7e5dde7-0e67-4c31-83c6-9946c5b23755-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\" (UID: \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\") " Jan 09 14:05:52 crc kubenswrapper[4919]: I0109 14:05:52.987699 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f7e5dde7-0e67-4c31-83c6-9946c5b23755-ssh-key-openstack-edpm-ipam\") pod \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\" (UID: \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\") " Jan 09 14:05:52 crc kubenswrapper[4919]: I0109 14:05:52.987723 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7e5dde7-0e67-4c31-83c6-9946c5b23755-repo-setup-combined-ca-bundle\") pod \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\" (UID: \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\") " Jan 09 14:05:52 crc kubenswrapper[4919]: I0109 14:05:52.987763 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7e5dde7-0e67-4c31-83c6-9946c5b23755-neutron-metadata-combined-ca-bundle\") pod \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\" (UID: \"f7e5dde7-0e67-4c31-83c6-9946c5b23755\") " Jan 09 14:05:52 crc kubenswrapper[4919]: I0109 14:05:52.993538 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e5dde7-0e67-4c31-83c6-9946c5b23755-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "f7e5dde7-0e67-4c31-83c6-9946c5b23755" (UID: "f7e5dde7-0e67-4c31-83c6-9946c5b23755"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 14:05:52 crc kubenswrapper[4919]: I0109 14:05:52.994018 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7e5dde7-0e67-4c31-83c6-9946c5b23755-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "f7e5dde7-0e67-4c31-83c6-9946c5b23755" (UID: "f7e5dde7-0e67-4c31-83c6-9946c5b23755"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 14:05:52 crc kubenswrapper[4919]: I0109 14:05:52.994523 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7e5dde7-0e67-4c31-83c6-9946c5b23755-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "f7e5dde7-0e67-4c31-83c6-9946c5b23755" (UID: "f7e5dde7-0e67-4c31-83c6-9946c5b23755"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 14:05:52 crc kubenswrapper[4919]: I0109 14:05:52.994987 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e5dde7-0e67-4c31-83c6-9946c5b23755-kube-api-access-tpd6l" (OuterVolumeSpecName: "kube-api-access-tpd6l") pod "f7e5dde7-0e67-4c31-83c6-9946c5b23755" (UID: "f7e5dde7-0e67-4c31-83c6-9946c5b23755"). InnerVolumeSpecName "kube-api-access-tpd6l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 14:05:52 crc kubenswrapper[4919]: I0109 14:05:52.995220 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7e5dde7-0e67-4c31-83c6-9946c5b23755-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "f7e5dde7-0e67-4c31-83c6-9946c5b23755" (UID: "f7e5dde7-0e67-4c31-83c6-9946c5b23755"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 14:05:52 crc kubenswrapper[4919]: I0109 14:05:52.995312 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e5dde7-0e67-4c31-83c6-9946c5b23755-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "f7e5dde7-0e67-4c31-83c6-9946c5b23755" (UID: "f7e5dde7-0e67-4c31-83c6-9946c5b23755"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 14:05:53 crc kubenswrapper[4919]: I0109 14:05:53.006893 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7e5dde7-0e67-4c31-83c6-9946c5b23755-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "f7e5dde7-0e67-4c31-83c6-9946c5b23755" (UID: "f7e5dde7-0e67-4c31-83c6-9946c5b23755"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 14:05:53 crc kubenswrapper[4919]: I0109 14:05:53.006926 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e5dde7-0e67-4c31-83c6-9946c5b23755-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "f7e5dde7-0e67-4c31-83c6-9946c5b23755" (UID: "f7e5dde7-0e67-4c31-83c6-9946c5b23755"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 14:05:53 crc kubenswrapper[4919]: I0109 14:05:53.007020 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7e5dde7-0e67-4c31-83c6-9946c5b23755-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "f7e5dde7-0e67-4c31-83c6-9946c5b23755" (UID: "f7e5dde7-0e67-4c31-83c6-9946c5b23755"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 14:05:53 crc kubenswrapper[4919]: I0109 14:05:53.007043 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e5dde7-0e67-4c31-83c6-9946c5b23755-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "f7e5dde7-0e67-4c31-83c6-9946c5b23755" (UID: "f7e5dde7-0e67-4c31-83c6-9946c5b23755"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 14:05:53 crc kubenswrapper[4919]: I0109 14:05:53.007025 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7e5dde7-0e67-4c31-83c6-9946c5b23755-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "f7e5dde7-0e67-4c31-83c6-9946c5b23755" (UID: "f7e5dde7-0e67-4c31-83c6-9946c5b23755"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 14:05:53 crc kubenswrapper[4919]: I0109 14:05:53.007443 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7e5dde7-0e67-4c31-83c6-9946c5b23755-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "f7e5dde7-0e67-4c31-83c6-9946c5b23755" (UID: "f7e5dde7-0e67-4c31-83c6-9946c5b23755"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 14:05:53 crc kubenswrapper[4919]: I0109 14:05:53.024431 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7e5dde7-0e67-4c31-83c6-9946c5b23755-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f7e5dde7-0e67-4c31-83c6-9946c5b23755" (UID: "f7e5dde7-0e67-4c31-83c6-9946c5b23755"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 14:05:53 crc kubenswrapper[4919]: I0109 14:05:53.028513 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7e5dde7-0e67-4c31-83c6-9946c5b23755-inventory" (OuterVolumeSpecName: "inventory") pod "f7e5dde7-0e67-4c31-83c6-9946c5b23755" (UID: "f7e5dde7-0e67-4c31-83c6-9946c5b23755"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 14:05:53 crc kubenswrapper[4919]: I0109 14:05:53.090446 4919 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7e5dde7-0e67-4c31-83c6-9946c5b23755-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 14:05:53 crc kubenswrapper[4919]: I0109 14:05:53.090481 4919 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7e5dde7-0e67-4c31-83c6-9946c5b23755-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 14:05:53 crc kubenswrapper[4919]: I0109 14:05:53.090493 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tpd6l\" (UniqueName: \"kubernetes.io/projected/f7e5dde7-0e67-4c31-83c6-9946c5b23755-kube-api-access-tpd6l\") on node \"crc\" DevicePath \"\"" Jan 09 14:05:53 crc kubenswrapper[4919]: I0109 14:05:53.090502 4919 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7e5dde7-0e67-4c31-83c6-9946c5b23755-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 14:05:53 crc kubenswrapper[4919]: I0109 14:05:53.090512 4919 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f7e5dde7-0e67-4c31-83c6-9946c5b23755-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 09 14:05:53 crc kubenswrapper[4919]: I0109 14:05:53.090523 4919 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f7e5dde7-0e67-4c31-83c6-9946c5b23755-inventory\") on node \"crc\" DevicePath \"\"" Jan 09 14:05:53 crc kubenswrapper[4919]: I0109 14:05:53.090532 4919 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f7e5dde7-0e67-4c31-83c6-9946c5b23755-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 09 14:05:53 crc kubenswrapper[4919]: I0109 14:05:53.090541 4919 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f7e5dde7-0e67-4c31-83c6-9946c5b23755-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 09 14:05:53 crc kubenswrapper[4919]: I0109 14:05:53.090551 4919 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7e5dde7-0e67-4c31-83c6-9946c5b23755-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 14:05:53 crc kubenswrapper[4919]: I0109 14:05:53.090559 4919 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7e5dde7-0e67-4c31-83c6-9946c5b23755-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 14:05:53 crc kubenswrapper[4919]: I0109 14:05:53.090568 4919 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7e5dde7-0e67-4c31-83c6-9946c5b23755-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 14:05:53 crc kubenswrapper[4919]: I0109 14:05:53.090578 4919 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/f7e5dde7-0e67-4c31-83c6-9946c5b23755-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 09 14:05:53 crc kubenswrapper[4919]: I0109 14:05:53.090590 4919 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7e5dde7-0e67-4c31-83c6-9946c5b23755-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 14:05:53 crc kubenswrapper[4919]: I0109 14:05:53.090600 4919 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f7e5dde7-0e67-4c31-83c6-9946c5b23755-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 09 14:05:53 crc kubenswrapper[4919]: I0109 14:05:53.517921 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt" event={"ID":"f7e5dde7-0e67-4c31-83c6-9946c5b23755","Type":"ContainerDied","Data":"a09d4c1d1f0ee2fffa744dfc6a376ce883b797bbd9b66acea28b552f13c07621"} Jan 09 14:05:53 crc kubenswrapper[4919]: I0109 14:05:53.517966 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a09d4c1d1f0ee2fffa744dfc6a376ce883b797bbd9b66acea28b552f13c07621" Jan 09 14:05:53 crc kubenswrapper[4919]: I0109 14:05:53.518004 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt" Jan 09 14:05:53 crc kubenswrapper[4919]: I0109 14:05:53.620806 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-8nq44"] Jan 09 14:05:53 crc kubenswrapper[4919]: E0109 14:05:53.621383 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="156afc75-9ca2-4713-80c2-231846140164" containerName="registry-server" Jan 09 14:05:53 crc kubenswrapper[4919]: I0109 14:05:53.621406 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="156afc75-9ca2-4713-80c2-231846140164" containerName="registry-server" Jan 09 14:05:53 crc kubenswrapper[4919]: E0109 14:05:53.621438 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="156afc75-9ca2-4713-80c2-231846140164" containerName="extract-utilities" Jan 09 14:05:53 crc kubenswrapper[4919]: I0109 14:05:53.621447 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="156afc75-9ca2-4713-80c2-231846140164" containerName="extract-utilities" Jan 09 14:05:53 crc kubenswrapper[4919]: E0109 14:05:53.621461 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="156afc75-9ca2-4713-80c2-231846140164" containerName="extract-content" Jan 09 14:05:53 crc kubenswrapper[4919]: I0109 14:05:53.621467 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="156afc75-9ca2-4713-80c2-231846140164" containerName="extract-content" Jan 09 14:05:53 crc kubenswrapper[4919]: E0109 14:05:53.621484 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7e5dde7-0e67-4c31-83c6-9946c5b23755" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 09 14:05:53 crc kubenswrapper[4919]: I0109 14:05:53.621492 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7e5dde7-0e67-4c31-83c6-9946c5b23755" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 09 14:05:53 crc kubenswrapper[4919]: I0109 14:05:53.621698 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7e5dde7-0e67-4c31-83c6-9946c5b23755" 
containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 09 14:05:53 crc kubenswrapper[4919]: I0109 14:05:53.621717 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="156afc75-9ca2-4713-80c2-231846140164" containerName="registry-server" Jan 09 14:05:53 crc kubenswrapper[4919]: I0109 14:05:53.622341 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8nq44" Jan 09 14:05:53 crc kubenswrapper[4919]: I0109 14:05:53.624146 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 09 14:05:53 crc kubenswrapper[4919]: I0109 14:05:53.625863 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 09 14:05:53 crc kubenswrapper[4919]: I0109 14:05:53.626048 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 14:05:53 crc kubenswrapper[4919]: I0109 14:05:53.626477 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-69fb8" Jan 09 14:05:53 crc kubenswrapper[4919]: I0109 14:05:53.628702 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 09 14:05:53 crc kubenswrapper[4919]: I0109 14:05:53.640077 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-8nq44"] Jan 09 14:05:53 crc kubenswrapper[4919]: I0109 14:05:53.704430 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/527824ae-c763-4efc-ba39-1cd36664996f-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8nq44\" (UID: \"527824ae-c763-4efc-ba39-1cd36664996f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8nq44" Jan 09 14:05:53 crc kubenswrapper[4919]: I0109 14:05:53.704508 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/527824ae-c763-4efc-ba39-1cd36664996f-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8nq44\" (UID: \"527824ae-c763-4efc-ba39-1cd36664996f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8nq44" Jan 09 14:05:53 crc kubenswrapper[4919]: I0109 14:05:53.704544 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbkrb\" (UniqueName: \"kubernetes.io/projected/527824ae-c763-4efc-ba39-1cd36664996f-kube-api-access-xbkrb\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8nq44\" (UID: \"527824ae-c763-4efc-ba39-1cd36664996f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8nq44" Jan 09 14:05:53 crc kubenswrapper[4919]: I0109 14:05:53.705438 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/527824ae-c763-4efc-ba39-1cd36664996f-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8nq44\" (UID: \"527824ae-c763-4efc-ba39-1cd36664996f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8nq44" Jan 09 14:05:53 crc kubenswrapper[4919]: I0109 14:05:53.705536 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: 
\"kubernetes.io/configmap/527824ae-c763-4efc-ba39-1cd36664996f-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8nq44\" (UID: \"527824ae-c763-4efc-ba39-1cd36664996f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8nq44" Jan 09 14:05:53 crc kubenswrapper[4919]: I0109 14:05:53.810606 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/527824ae-c763-4efc-ba39-1cd36664996f-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8nq44\" (UID: \"527824ae-c763-4efc-ba39-1cd36664996f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8nq44" Jan 09 14:05:53 crc kubenswrapper[4919]: I0109 14:05:53.810955 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/527824ae-c763-4efc-ba39-1cd36664996f-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8nq44\" (UID: \"527824ae-c763-4efc-ba39-1cd36664996f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8nq44" Jan 09 14:05:53 crc kubenswrapper[4919]: I0109 14:05:53.810993 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbkrb\" (UniqueName: \"kubernetes.io/projected/527824ae-c763-4efc-ba39-1cd36664996f-kube-api-access-xbkrb\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8nq44\" (UID: \"527824ae-c763-4efc-ba39-1cd36664996f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8nq44" Jan 09 14:05:53 crc kubenswrapper[4919]: I0109 14:05:53.811117 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/527824ae-c763-4efc-ba39-1cd36664996f-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8nq44\" (UID: \"527824ae-c763-4efc-ba39-1cd36664996f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8nq44" Jan 09 14:05:53 crc kubenswrapper[4919]: I0109 14:05:53.811145 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/527824ae-c763-4efc-ba39-1cd36664996f-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8nq44\" (UID: \"527824ae-c763-4efc-ba39-1cd36664996f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8nq44" Jan 09 14:05:53 crc kubenswrapper[4919]: I0109 14:05:53.812446 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/527824ae-c763-4efc-ba39-1cd36664996f-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8nq44\" (UID: \"527824ae-c763-4efc-ba39-1cd36664996f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8nq44" Jan 09 14:05:53 crc kubenswrapper[4919]: I0109 14:05:53.815539 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/527824ae-c763-4efc-ba39-1cd36664996f-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8nq44\" (UID: \"527824ae-c763-4efc-ba39-1cd36664996f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8nq44" Jan 09 14:05:53 crc kubenswrapper[4919]: I0109 14:05:53.816045 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/527824ae-c763-4efc-ba39-1cd36664996f-ssh-key-openstack-edpm-ipam\") pod 
\"ovn-edpm-deployment-openstack-edpm-ipam-8nq44\" (UID: \"527824ae-c763-4efc-ba39-1cd36664996f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8nq44" Jan 09 14:05:53 crc kubenswrapper[4919]: I0109 14:05:53.816465 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/527824ae-c763-4efc-ba39-1cd36664996f-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8nq44\" (UID: \"527824ae-c763-4efc-ba39-1cd36664996f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8nq44" Jan 09 14:05:53 crc kubenswrapper[4919]: I0109 14:05:53.835161 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbkrb\" (UniqueName: \"kubernetes.io/projected/527824ae-c763-4efc-ba39-1cd36664996f-kube-api-access-xbkrb\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8nq44\" (UID: \"527824ae-c763-4efc-ba39-1cd36664996f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8nq44" Jan 09 14:05:53 crc kubenswrapper[4919]: I0109 14:05:53.960767 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8nq44" Jan 09 14:05:54 crc kubenswrapper[4919]: I0109 14:05:54.482999 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-8nq44"] Jan 09 14:05:54 crc kubenswrapper[4919]: I0109 14:05:54.527418 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8nq44" event={"ID":"527824ae-c763-4efc-ba39-1cd36664996f","Type":"ContainerStarted","Data":"a6a8907865acfd28fd162be0b5c383fb29f93e48175a16ed71683d5833da30c1"} Jan 09 14:05:55 crc kubenswrapper[4919]: I0109 14:05:55.542876 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8nq44" event={"ID":"527824ae-c763-4efc-ba39-1cd36664996f","Type":"ContainerStarted","Data":"1e5f783ef8c9d71c018bec84e3241faf8838dd42b2e111a8823b7eb0d546179e"} Jan 09 14:05:55 crc kubenswrapper[4919]: I0109 14:05:55.574042 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8nq44" podStartSLOduration=2.10695789 podStartE2EDuration="2.574015319s" podCreationTimestamp="2026-01-09 14:05:53 +0000 UTC" firstStartedPulling="2026-01-09 14:05:54.489074957 +0000 UTC m=+2134.036914417" lastFinishedPulling="2026-01-09 14:05:54.956132396 +0000 UTC m=+2134.503971846" observedRunningTime="2026-01-09 14:05:55.563613241 +0000 UTC m=+2135.111452701" watchObservedRunningTime="2026-01-09 14:05:55.574015319 +0000 UTC m=+2135.121854769" Jan 09 14:07:00 crc kubenswrapper[4919]: I0109 14:07:00.090963 4919 generic.go:334] "Generic (PLEG): container finished" podID="527824ae-c763-4efc-ba39-1cd36664996f" containerID="1e5f783ef8c9d71c018bec84e3241faf8838dd42b2e111a8823b7eb0d546179e" exitCode=0 Jan 09 14:07:00 crc kubenswrapper[4919]: I0109 14:07:00.091053 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8nq44" event={"ID":"527824ae-c763-4efc-ba39-1cd36664996f","Type":"ContainerDied","Data":"1e5f783ef8c9d71c018bec84e3241faf8838dd42b2e111a8823b7eb0d546179e"} Jan 09 14:07:01 crc kubenswrapper[4919]: I0109 14:07:01.732112 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8nq44" Jan 09 14:07:01 crc kubenswrapper[4919]: I0109 14:07:01.913197 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/527824ae-c763-4efc-ba39-1cd36664996f-ovncontroller-config-0\") pod \"527824ae-c763-4efc-ba39-1cd36664996f\" (UID: \"527824ae-c763-4efc-ba39-1cd36664996f\") " Jan 09 14:07:01 crc kubenswrapper[4919]: I0109 14:07:01.913286 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbkrb\" (UniqueName: \"kubernetes.io/projected/527824ae-c763-4efc-ba39-1cd36664996f-kube-api-access-xbkrb\") pod \"527824ae-c763-4efc-ba39-1cd36664996f\" (UID: \"527824ae-c763-4efc-ba39-1cd36664996f\") " Jan 09 14:07:01 crc kubenswrapper[4919]: I0109 14:07:01.913477 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/527824ae-c763-4efc-ba39-1cd36664996f-inventory\") pod \"527824ae-c763-4efc-ba39-1cd36664996f\" (UID: \"527824ae-c763-4efc-ba39-1cd36664996f\") " Jan 09 14:07:01 crc kubenswrapper[4919]: I0109 14:07:01.913513 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/527824ae-c763-4efc-ba39-1cd36664996f-ssh-key-openstack-edpm-ipam\") pod \"527824ae-c763-4efc-ba39-1cd36664996f\" (UID: \"527824ae-c763-4efc-ba39-1cd36664996f\") " Jan 09 14:07:01 crc kubenswrapper[4919]: I0109 14:07:01.913568 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/527824ae-c763-4efc-ba39-1cd36664996f-ovn-combined-ca-bundle\") pod \"527824ae-c763-4efc-ba39-1cd36664996f\" (UID: \"527824ae-c763-4efc-ba39-1cd36664996f\") " Jan 09 14:07:01 crc kubenswrapper[4919]: I0109 14:07:01.921431 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/527824ae-c763-4efc-ba39-1cd36664996f-kube-api-access-xbkrb" (OuterVolumeSpecName: "kube-api-access-xbkrb") pod "527824ae-c763-4efc-ba39-1cd36664996f" (UID: "527824ae-c763-4efc-ba39-1cd36664996f"). InnerVolumeSpecName "kube-api-access-xbkrb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 14:07:01 crc kubenswrapper[4919]: I0109 14:07:01.921875 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/527824ae-c763-4efc-ba39-1cd36664996f-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "527824ae-c763-4efc-ba39-1cd36664996f" (UID: "527824ae-c763-4efc-ba39-1cd36664996f"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 14:07:01 crc kubenswrapper[4919]: I0109 14:07:01.948461 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/527824ae-c763-4efc-ba39-1cd36664996f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "527824ae-c763-4efc-ba39-1cd36664996f" (UID: "527824ae-c763-4efc-ba39-1cd36664996f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 14:07:01 crc kubenswrapper[4919]: I0109 14:07:01.957475 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/527824ae-c763-4efc-ba39-1cd36664996f-inventory" (OuterVolumeSpecName: "inventory") pod "527824ae-c763-4efc-ba39-1cd36664996f" (UID: "527824ae-c763-4efc-ba39-1cd36664996f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 14:07:01 crc kubenswrapper[4919]: I0109 14:07:01.992484 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/527824ae-c763-4efc-ba39-1cd36664996f-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "527824ae-c763-4efc-ba39-1cd36664996f" (UID: "527824ae-c763-4efc-ba39-1cd36664996f"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 14:07:02 crc kubenswrapper[4919]: I0109 14:07:02.016794 4919 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/527824ae-c763-4efc-ba39-1cd36664996f-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 14:07:02 crc kubenswrapper[4919]: I0109 14:07:02.016820 4919 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/527824ae-c763-4efc-ba39-1cd36664996f-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 09 14:07:02 crc kubenswrapper[4919]: I0109 14:07:02.016829 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbkrb\" (UniqueName: \"kubernetes.io/projected/527824ae-c763-4efc-ba39-1cd36664996f-kube-api-access-xbkrb\") on node \"crc\" DevicePath \"\"" Jan 09 14:07:02 crc kubenswrapper[4919]: I0109 14:07:02.016840 4919 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/527824ae-c763-4efc-ba39-1cd36664996f-inventory\") on node \"crc\" DevicePath \"\"" Jan 09 14:07:02 crc kubenswrapper[4919]: I0109 14:07:02.016849 4919 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/527824ae-c763-4efc-ba39-1cd36664996f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 09 14:07:02 crc kubenswrapper[4919]: I0109 14:07:02.115462 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8nq44" event={"ID":"527824ae-c763-4efc-ba39-1cd36664996f","Type":"ContainerDied","Data":"a6a8907865acfd28fd162be0b5c383fb29f93e48175a16ed71683d5833da30c1"} Jan 09 14:07:02 crc kubenswrapper[4919]: I0109 14:07:02.115518 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6a8907865acfd28fd162be0b5c383fb29f93e48175a16ed71683d5833da30c1" Jan 09 14:07:02 crc kubenswrapper[4919]: I0109 14:07:02.115528 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8nq44" Jan 09 14:07:02 crc kubenswrapper[4919]: I0109 14:07:02.237267 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ptlld"] Jan 09 14:07:02 crc kubenswrapper[4919]: E0109 14:07:02.238008 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="527824ae-c763-4efc-ba39-1cd36664996f" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 09 14:07:02 crc kubenswrapper[4919]: I0109 14:07:02.238029 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="527824ae-c763-4efc-ba39-1cd36664996f" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 09 14:07:02 crc kubenswrapper[4919]: I0109 14:07:02.238253 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="527824ae-c763-4efc-ba39-1cd36664996f" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 09 14:07:02 crc kubenswrapper[4919]: I0109 14:07:02.239066 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ptlld" Jan 09 14:07:02 crc kubenswrapper[4919]: I0109 14:07:02.242804 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 09 14:07:02 crc kubenswrapper[4919]: I0109 14:07:02.243118 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 14:07:02 crc kubenswrapper[4919]: I0109 14:07:02.243225 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 09 14:07:02 crc kubenswrapper[4919]: I0109 14:07:02.243406 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 09 14:07:02 crc kubenswrapper[4919]: I0109 14:07:02.243480 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 09 14:07:02 crc kubenswrapper[4919]: I0109 14:07:02.243972 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-69fb8" Jan 09 14:07:02 crc kubenswrapper[4919]: I0109 14:07:02.247932 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ptlld"] Jan 09 14:07:02 crc kubenswrapper[4919]: I0109 14:07:02.423992 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9770e19-27d5-49ff-a358-7f455b3e6d8e-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ptlld\" (UID: \"e9770e19-27d5-49ff-a358-7f455b3e6d8e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ptlld" Jan 09 14:07:02 crc kubenswrapper[4919]: I0109 14:07:02.424109 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e9770e19-27d5-49ff-a358-7f455b3e6d8e-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ptlld\" (UID: \"e9770e19-27d5-49ff-a358-7f455b3e6d8e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ptlld" Jan 09 14:07:02 crc kubenswrapper[4919]: I0109 14:07:02.424162 4919 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e9770e19-27d5-49ff-a358-7f455b3e6d8e-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ptlld\" (UID: \"e9770e19-27d5-49ff-a358-7f455b3e6d8e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ptlld" Jan 09 14:07:02 crc kubenswrapper[4919]: I0109 14:07:02.424191 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e9770e19-27d5-49ff-a358-7f455b3e6d8e-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ptlld\" (UID: \"e9770e19-27d5-49ff-a358-7f455b3e6d8e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ptlld" Jan 09 14:07:02 crc kubenswrapper[4919]: I0109 14:07:02.424224 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk7gh\" (UniqueName: \"kubernetes.io/projected/e9770e19-27d5-49ff-a358-7f455b3e6d8e-kube-api-access-jk7gh\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ptlld\" (UID: \"e9770e19-27d5-49ff-a358-7f455b3e6d8e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ptlld" Jan 09 14:07:02 crc kubenswrapper[4919]: I0109 14:07:02.424268 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e9770e19-27d5-49ff-a358-7f455b3e6d8e-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ptlld\" (UID: \"e9770e19-27d5-49ff-a358-7f455b3e6d8e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ptlld" Jan 09 14:07:02 crc kubenswrapper[4919]: I0109 14:07:02.526349 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e9770e19-27d5-49ff-a358-7f455b3e6d8e-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ptlld\" (UID: \"e9770e19-27d5-49ff-a358-7f455b3e6d8e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ptlld" Jan 09 14:07:02 crc kubenswrapper[4919]: I0109 14:07:02.526405 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e9770e19-27d5-49ff-a358-7f455b3e6d8e-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ptlld\" (UID: \"e9770e19-27d5-49ff-a358-7f455b3e6d8e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ptlld" Jan 09 14:07:02 crc kubenswrapper[4919]: I0109 14:07:02.526429 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jk7gh\" (UniqueName: \"kubernetes.io/projected/e9770e19-27d5-49ff-a358-7f455b3e6d8e-kube-api-access-jk7gh\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ptlld\" (UID: \"e9770e19-27d5-49ff-a358-7f455b3e6d8e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ptlld" Jan 09 14:07:02 crc kubenswrapper[4919]: I0109 14:07:02.526482 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e9770e19-27d5-49ff-a358-7f455b3e6d8e-nova-metadata-neutron-config-0\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ptlld\" (UID: \"e9770e19-27d5-49ff-a358-7f455b3e6d8e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ptlld" Jan 09 14:07:02 crc kubenswrapper[4919]: I0109 14:07:02.526518 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9770e19-27d5-49ff-a358-7f455b3e6d8e-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ptlld\" (UID: \"e9770e19-27d5-49ff-a358-7f455b3e6d8e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ptlld" Jan 09 14:07:02 crc kubenswrapper[4919]: I0109 14:07:02.526597 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e9770e19-27d5-49ff-a358-7f455b3e6d8e-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ptlld\" (UID: \"e9770e19-27d5-49ff-a358-7f455b3e6d8e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ptlld" Jan 09 14:07:02 crc kubenswrapper[4919]: I0109 14:07:02.530534 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e9770e19-27d5-49ff-a358-7f455b3e6d8e-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ptlld\" (UID: \"e9770e19-27d5-49ff-a358-7f455b3e6d8e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ptlld" Jan 09 14:07:02 crc kubenswrapper[4919]: I0109 14:07:02.530716 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e9770e19-27d5-49ff-a358-7f455b3e6d8e-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ptlld\" (UID: \"e9770e19-27d5-49ff-a358-7f455b3e6d8e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ptlld" Jan 09 14:07:02 crc kubenswrapper[4919]: I0109 14:07:02.531311 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9770e19-27d5-49ff-a358-7f455b3e6d8e-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ptlld\" (UID: \"e9770e19-27d5-49ff-a358-7f455b3e6d8e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ptlld" Jan 09 14:07:02 crc kubenswrapper[4919]: I0109 14:07:02.531683 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e9770e19-27d5-49ff-a358-7f455b3e6d8e-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ptlld\" (UID: \"e9770e19-27d5-49ff-a358-7f455b3e6d8e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ptlld" Jan 09 14:07:02 crc kubenswrapper[4919]: I0109 14:07:02.531871 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e9770e19-27d5-49ff-a358-7f455b3e6d8e-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ptlld\" (UID: \"e9770e19-27d5-49ff-a358-7f455b3e6d8e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ptlld" Jan 09 14:07:02 crc 
kubenswrapper[4919]: I0109 14:07:02.545138 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jk7gh\" (UniqueName: \"kubernetes.io/projected/e9770e19-27d5-49ff-a358-7f455b3e6d8e-kube-api-access-jk7gh\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ptlld\" (UID: \"e9770e19-27d5-49ff-a358-7f455b3e6d8e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ptlld" Jan 09 14:07:02 crc kubenswrapper[4919]: I0109 14:07:02.570832 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ptlld" Jan 09 14:07:03 crc kubenswrapper[4919]: I0109 14:07:03.109318 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ptlld"] Jan 09 14:07:03 crc kubenswrapper[4919]: I0109 14:07:03.127879 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ptlld" event={"ID":"e9770e19-27d5-49ff-a358-7f455b3e6d8e","Type":"ContainerStarted","Data":"05466ff2fd885b5c3d5020de55037731d7e423bfdba92f36c5e481b42c821faa"} Jan 09 14:07:05 crc kubenswrapper[4919]: I0109 14:07:05.148183 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ptlld" event={"ID":"e9770e19-27d5-49ff-a358-7f455b3e6d8e","Type":"ContainerStarted","Data":"1629aa489e8e53f70cc10572dfa727a5d7aef27abe6c24c00c70556fc2c7e2cb"} Jan 09 14:07:05 crc kubenswrapper[4919]: I0109 14:07:05.171737 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ptlld" podStartSLOduration=2.134863445 podStartE2EDuration="3.171709148s" podCreationTimestamp="2026-01-09 14:07:02 +0000 UTC" firstStartedPulling="2026-01-09 14:07:03.115583171 +0000 UTC m=+2202.663422621" lastFinishedPulling="2026-01-09 14:07:04.152428874 +0000 UTC m=+2203.700268324" observedRunningTime="2026-01-09 14:07:05.169099703 +0000 UTC m=+2204.716939173" watchObservedRunningTime="2026-01-09 14:07:05.171709148 +0000 UTC m=+2204.719548599" Jan 09 14:07:21 crc kubenswrapper[4919]: I0109 14:07:21.247258 4919 patch_prober.go:28] interesting pod/machine-config-daemon-9m5lv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 14:07:21 crc kubenswrapper[4919]: I0109 14:07:21.248407 4919 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 14:07:51 crc kubenswrapper[4919]: I0109 14:07:51.246878 4919 patch_prober.go:28] interesting pod/machine-config-daemon-9m5lv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 14:07:51 crc kubenswrapper[4919]: I0109 14:07:51.247920 4919 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 14:07:55 crc kubenswrapper[4919]: I0109 14:07:55.569575 4919 generic.go:334] "Generic (PLEG): container finished" podID="e9770e19-27d5-49ff-a358-7f455b3e6d8e" containerID="1629aa489e8e53f70cc10572dfa727a5d7aef27abe6c24c00c70556fc2c7e2cb" exitCode=0 Jan 09 14:07:55 crc kubenswrapper[4919]: I0109 14:07:55.569648 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ptlld" event={"ID":"e9770e19-27d5-49ff-a358-7f455b3e6d8e","Type":"ContainerDied","Data":"1629aa489e8e53f70cc10572dfa727a5d7aef27abe6c24c00c70556fc2c7e2cb"} Jan 09 14:07:56 crc kubenswrapper[4919]: I0109 14:07:56.974875 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ptlld" Jan 09 14:07:57 crc kubenswrapper[4919]: I0109 14:07:57.118635 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e9770e19-27d5-49ff-a358-7f455b3e6d8e-neutron-ovn-metadata-agent-neutron-config-0\") pod \"e9770e19-27d5-49ff-a358-7f455b3e6d8e\" (UID: \"e9770e19-27d5-49ff-a358-7f455b3e6d8e\") " Jan 09 14:07:57 crc kubenswrapper[4919]: I0109 14:07:57.118786 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e9770e19-27d5-49ff-a358-7f455b3e6d8e-ssh-key-openstack-edpm-ipam\") pod \"e9770e19-27d5-49ff-a358-7f455b3e6d8e\" (UID: \"e9770e19-27d5-49ff-a358-7f455b3e6d8e\") " Jan 09 14:07:57 crc kubenswrapper[4919]: I0109 14:07:57.118884 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e9770e19-27d5-49ff-a358-7f455b3e6d8e-inventory\") pod \"e9770e19-27d5-49ff-a358-7f455b3e6d8e\" (UID: \"e9770e19-27d5-49ff-a358-7f455b3e6d8e\") " Jan 09 14:07:57 crc kubenswrapper[4919]: I0109 14:07:57.119024 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e9770e19-27d5-49ff-a358-7f455b3e6d8e-nova-metadata-neutron-config-0\") pod \"e9770e19-27d5-49ff-a358-7f455b3e6d8e\" (UID: \"e9770e19-27d5-49ff-a358-7f455b3e6d8e\") " Jan 09 14:07:57 crc kubenswrapper[4919]: I0109 14:07:57.119123 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9770e19-27d5-49ff-a358-7f455b3e6d8e-neutron-metadata-combined-ca-bundle\") pod \"e9770e19-27d5-49ff-a358-7f455b3e6d8e\" (UID: \"e9770e19-27d5-49ff-a358-7f455b3e6d8e\") " Jan 09 14:07:57 crc kubenswrapper[4919]: I0109 14:07:57.119264 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jk7gh\" (UniqueName: \"kubernetes.io/projected/e9770e19-27d5-49ff-a358-7f455b3e6d8e-kube-api-access-jk7gh\") pod \"e9770e19-27d5-49ff-a358-7f455b3e6d8e\" (UID: \"e9770e19-27d5-49ff-a358-7f455b3e6d8e\") " Jan 09 14:07:57 crc kubenswrapper[4919]: I0109 14:07:57.126372 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9770e19-27d5-49ff-a358-7f455b3e6d8e-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod 
"e9770e19-27d5-49ff-a358-7f455b3e6d8e" (UID: "e9770e19-27d5-49ff-a358-7f455b3e6d8e"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 14:07:57 crc kubenswrapper[4919]: I0109 14:07:57.126397 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9770e19-27d5-49ff-a358-7f455b3e6d8e-kube-api-access-jk7gh" (OuterVolumeSpecName: "kube-api-access-jk7gh") pod "e9770e19-27d5-49ff-a358-7f455b3e6d8e" (UID: "e9770e19-27d5-49ff-a358-7f455b3e6d8e"). InnerVolumeSpecName "kube-api-access-jk7gh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 14:07:57 crc kubenswrapper[4919]: I0109 14:07:57.151316 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9770e19-27d5-49ff-a358-7f455b3e6d8e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e9770e19-27d5-49ff-a358-7f455b3e6d8e" (UID: "e9770e19-27d5-49ff-a358-7f455b3e6d8e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 14:07:57 crc kubenswrapper[4919]: I0109 14:07:57.151660 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9770e19-27d5-49ff-a358-7f455b3e6d8e-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "e9770e19-27d5-49ff-a358-7f455b3e6d8e" (UID: "e9770e19-27d5-49ff-a358-7f455b3e6d8e"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 14:07:57 crc kubenswrapper[4919]: I0109 14:07:57.151788 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9770e19-27d5-49ff-a358-7f455b3e6d8e-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "e9770e19-27d5-49ff-a358-7f455b3e6d8e" (UID: "e9770e19-27d5-49ff-a358-7f455b3e6d8e"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 14:07:57 crc kubenswrapper[4919]: I0109 14:07:57.153577 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9770e19-27d5-49ff-a358-7f455b3e6d8e-inventory" (OuterVolumeSpecName: "inventory") pod "e9770e19-27d5-49ff-a358-7f455b3e6d8e" (UID: "e9770e19-27d5-49ff-a358-7f455b3e6d8e"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 14:07:57 crc kubenswrapper[4919]: I0109 14:07:57.221715 4919 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9770e19-27d5-49ff-a358-7f455b3e6d8e-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 14:07:57 crc kubenswrapper[4919]: I0109 14:07:57.221755 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jk7gh\" (UniqueName: \"kubernetes.io/projected/e9770e19-27d5-49ff-a358-7f455b3e6d8e-kube-api-access-jk7gh\") on node \"crc\" DevicePath \"\"" Jan 09 14:07:57 crc kubenswrapper[4919]: I0109 14:07:57.221765 4919 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e9770e19-27d5-49ff-a358-7f455b3e6d8e-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 09 14:07:57 crc kubenswrapper[4919]: I0109 14:07:57.221779 4919 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e9770e19-27d5-49ff-a358-7f455b3e6d8e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 09 14:07:57 crc kubenswrapper[4919]: I0109 14:07:57.221789 4919 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e9770e19-27d5-49ff-a358-7f455b3e6d8e-inventory\") on node \"crc\" DevicePath \"\"" Jan 09 14:07:57 crc kubenswrapper[4919]: I0109 14:07:57.221797 4919 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e9770e19-27d5-49ff-a358-7f455b3e6d8e-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 09 14:07:57 crc kubenswrapper[4919]: I0109 14:07:57.589836 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ptlld" event={"ID":"e9770e19-27d5-49ff-a358-7f455b3e6d8e","Type":"ContainerDied","Data":"05466ff2fd885b5c3d5020de55037731d7e423bfdba92f36c5e481b42c821faa"} Jan 09 14:07:57 crc kubenswrapper[4919]: I0109 14:07:57.590139 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ptlld"
Jan 09 14:07:57 crc kubenswrapper[4919]: I0109 14:07:57.590146 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05466ff2fd885b5c3d5020de55037731d7e423bfdba92f36c5e481b42c821faa"
Jan 09 14:07:57 crc kubenswrapper[4919]: I0109 14:07:57.677535 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k82m6"]
Jan 09 14:07:57 crc kubenswrapper[4919]: E0109 14:07:57.678004 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9770e19-27d5-49ff-a358-7f455b3e6d8e" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Jan 09 14:07:57 crc kubenswrapper[4919]: I0109 14:07:57.678022 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9770e19-27d5-49ff-a358-7f455b3e6d8e" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Jan 09 14:07:57 crc kubenswrapper[4919]: I0109 14:07:57.678295 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9770e19-27d5-49ff-a358-7f455b3e6d8e" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Jan 09 14:07:57 crc kubenswrapper[4919]: I0109 14:07:57.678957 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k82m6"
Jan 09 14:07:57 crc kubenswrapper[4919]: I0109 14:07:57.682286 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 09 14:07:57 crc kubenswrapper[4919]: I0109 14:07:57.682449 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 09 14:07:57 crc kubenswrapper[4919]: I0109 14:07:57.682481 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret"
Jan 09 14:07:57 crc kubenswrapper[4919]: I0109 14:07:57.682593 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-69fb8"
Jan 09 14:07:57 crc kubenswrapper[4919]: I0109 14:07:57.682705 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 09 14:07:57 crc kubenswrapper[4919]: I0109 14:07:57.689168 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k82m6"]
Jan 09 14:07:57 crc kubenswrapper[4919]: I0109 14:07:57.832576 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/acecffca-8dfb-4702-851a-f8dfe2659e98-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k82m6\" (UID: \"acecffca-8dfb-4702-851a-f8dfe2659e98\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k82m6"
Jan 09 14:07:57 crc kubenswrapper[4919]: I0109 14:07:57.832704 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/acecffca-8dfb-4702-851a-f8dfe2659e98-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k82m6\" (UID: \"acecffca-8dfb-4702-851a-f8dfe2659e98\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k82m6"
Jan 09 14:07:57 crc kubenswrapper[4919]: I0109 14:07:57.832744 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw226\" (UniqueName: \"kubernetes.io/projected/acecffca-8dfb-4702-851a-f8dfe2659e98-kube-api-access-zw226\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k82m6\" (UID: \"acecffca-8dfb-4702-851a-f8dfe2659e98\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k82m6"
Jan 09 14:07:57 crc kubenswrapper[4919]: I0109 14:07:57.832768 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acecffca-8dfb-4702-851a-f8dfe2659e98-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k82m6\" (UID: \"acecffca-8dfb-4702-851a-f8dfe2659e98\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k82m6"
Jan 09 14:07:57 crc kubenswrapper[4919]: I0109 14:07:57.832919 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/acecffca-8dfb-4702-851a-f8dfe2659e98-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k82m6\" (UID: \"acecffca-8dfb-4702-851a-f8dfe2659e98\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k82m6"
Jan 09 14:07:57 crc kubenswrapper[4919]: I0109 14:07:57.934634 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/acecffca-8dfb-4702-851a-f8dfe2659e98-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k82m6\" (UID: \"acecffca-8dfb-4702-851a-f8dfe2659e98\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k82m6"
Jan 09 14:07:57 crc kubenswrapper[4919]: I0109 14:07:57.934691 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zw226\" (UniqueName: \"kubernetes.io/projected/acecffca-8dfb-4702-851a-f8dfe2659e98-kube-api-access-zw226\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k82m6\" (UID: \"acecffca-8dfb-4702-851a-f8dfe2659e98\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k82m6"
Jan 09 14:07:57 crc kubenswrapper[4919]: I0109 14:07:57.934713 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acecffca-8dfb-4702-851a-f8dfe2659e98-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k82m6\" (UID: \"acecffca-8dfb-4702-851a-f8dfe2659e98\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k82m6"
Jan 09 14:07:57 crc kubenswrapper[4919]: I0109 14:07:57.934795 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/acecffca-8dfb-4702-851a-f8dfe2659e98-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k82m6\" (UID: \"acecffca-8dfb-4702-851a-f8dfe2659e98\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k82m6"
Jan 09 14:07:57 crc kubenswrapper[4919]: I0109 14:07:57.934903 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/acecffca-8dfb-4702-851a-f8dfe2659e98-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k82m6\" (UID: \"acecffca-8dfb-4702-851a-f8dfe2659e98\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k82m6"
volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acecffca-8dfb-4702-851a-f8dfe2659e98-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k82m6\" (UID: \"acecffca-8dfb-4702-851a-f8dfe2659e98\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k82m6" Jan 09 14:07:57 crc kubenswrapper[4919]: I0109 14:07:57.940517 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/acecffca-8dfb-4702-851a-f8dfe2659e98-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k82m6\" (UID: \"acecffca-8dfb-4702-851a-f8dfe2659e98\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k82m6" Jan 09 14:07:57 crc kubenswrapper[4919]: I0109 14:07:57.941227 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/acecffca-8dfb-4702-851a-f8dfe2659e98-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k82m6\" (UID: \"acecffca-8dfb-4702-851a-f8dfe2659e98\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k82m6" Jan 09 14:07:57 crc kubenswrapper[4919]: I0109 14:07:57.942550 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/acecffca-8dfb-4702-851a-f8dfe2659e98-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k82m6\" (UID: \"acecffca-8dfb-4702-851a-f8dfe2659e98\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k82m6" Jan 09 14:07:57 crc kubenswrapper[4919]: I0109 14:07:57.972767 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zw226\" (UniqueName: \"kubernetes.io/projected/acecffca-8dfb-4702-851a-f8dfe2659e98-kube-api-access-zw226\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k82m6\" (UID: \"acecffca-8dfb-4702-851a-f8dfe2659e98\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k82m6" Jan 09 14:07:58 crc kubenswrapper[4919]: I0109 14:07:58.008927 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k82m6" Jan 09 14:07:58 crc kubenswrapper[4919]: I0109 14:07:58.563383 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k82m6"] Jan 09 14:07:58 crc kubenswrapper[4919]: I0109 14:07:58.568401 4919 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 09 14:07:58 crc kubenswrapper[4919]: I0109 14:07:58.609865 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k82m6" event={"ID":"acecffca-8dfb-4702-851a-f8dfe2659e98","Type":"ContainerStarted","Data":"60de3f63468c65092e193cf50a268d13a622219d413b2c672d14900f44e9107e"} Jan 09 14:08:00 crc kubenswrapper[4919]: I0109 14:08:00.627569 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k82m6" event={"ID":"acecffca-8dfb-4702-851a-f8dfe2659e98","Type":"ContainerStarted","Data":"b20a890cd69763d92c9e29902fb01658a18ca7c198bc7bc233ca85a3ee0c6857"} Jan 09 14:08:00 crc kubenswrapper[4919]: I0109 14:08:00.651603 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k82m6" podStartSLOduration=2.854590616 podStartE2EDuration="3.651581124s" podCreationTimestamp="2026-01-09 14:07:57 +0000 UTC" firstStartedPulling="2026-01-09 14:07:58.568182217 +0000 UTC m=+2258.116021667" lastFinishedPulling="2026-01-09 14:07:59.365172725 +0000 UTC m=+2258.913012175" observedRunningTime="2026-01-09 14:08:00.648098457 +0000 UTC m=+2260.195937917" watchObservedRunningTime="2026-01-09 14:08:00.651581124 +0000 UTC m=+2260.199420574" Jan 09 14:08:21 crc kubenswrapper[4919]: I0109 14:08:21.247161 4919 patch_prober.go:28] interesting pod/machine-config-daemon-9m5lv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 14:08:21 crc kubenswrapper[4919]: I0109 14:08:21.247692 4919 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 14:08:21 crc kubenswrapper[4919]: I0109 14:08:21.247739 4919 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" Jan 09 14:08:21 crc kubenswrapper[4919]: I0109 14:08:21.248543 4919 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cb9cc8141ac739eabcc1054e82a3a41d2dac68fb980f94b01a302b9c1cfa7794"} pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 09 14:08:21 crc kubenswrapper[4919]: I0109 14:08:21.248600 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" containerID="cri-o://cb9cc8141ac739eabcc1054e82a3a41d2dac68fb980f94b01a302b9c1cfa7794" gracePeriod=600 Jan 09 
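The machine-config-daemon sequence above is the standard HTTP liveness flow: the kubelet GETs http://127.0.0.1:8798/health, gets connection refused, marks the probe unhealthy, and kills the container with the pod's 600s grace period so it can be restarted. A standalone Go approximation of that probe loop; the URL comes from the log, while the 1s timeout, 10s period, and failureThreshold of 3 are common Kubernetes defaults, assumed here rather than read from this pod's spec:

// probecheck.go - a rough stand-in for the kubelet's HTTP liveness probe
// against the endpoint seen in the log. Not kubelet code; timings and the
// failure threshold are assumed defaults.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 1 * time.Second}
	failures := 0
	for failures < 3 {
		resp, err := client.Get("http://127.0.0.1:8798/health")
		if err != nil {
			failures++ // e.g. "connect: connection refused", as in the log
			fmt.Println("probe failure:", err)
		} else {
			if resp.StatusCode >= 200 && resp.StatusCode < 400 {
				failures = 0 // any 2xx/3xx counts as success
			} else {
				failures++
			}
			resp.Body.Close()
		}
		time.Sleep(10 * time.Second) // periodSeconds, also assumed
	}
	fmt.Println("failureThreshold reached: kubelet would kill and restart the container")
}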
Jan 09 14:08:21 crc kubenswrapper[4919]: I0109 14:08:21.819352 4919 generic.go:334] "Generic (PLEG): container finished" podID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerID="cb9cc8141ac739eabcc1054e82a3a41d2dac68fb980f94b01a302b9c1cfa7794" exitCode=0
Jan 09 14:08:21 crc kubenswrapper[4919]: I0109 14:08:21.819456 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" event={"ID":"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c","Type":"ContainerDied","Data":"cb9cc8141ac739eabcc1054e82a3a41d2dac68fb980f94b01a302b9c1cfa7794"}
Jan 09 14:08:21 crc kubenswrapper[4919]: I0109 14:08:21.819546 4919 scope.go:117] "RemoveContainer" containerID="08b1c11299df27ad98a3ca953b44e2744c53ddd036341f81b00480965189197d"
Jan 09 14:08:21 crc kubenswrapper[4919]: E0109 14:08:21.871799 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c"
Jan 09 14:08:22 crc kubenswrapper[4919]: I0109 14:08:22.830079 4919 scope.go:117] "RemoveContainer" containerID="cb9cc8141ac739eabcc1054e82a3a41d2dac68fb980f94b01a302b9c1cfa7794"
Jan 09 14:08:22 crc kubenswrapper[4919]: E0109 14:08:22.831311 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c"
Jan 09 14:08:33 crc kubenswrapper[4919]: I0109 14:08:33.510066 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xn5qr"]
Jan 09 14:08:33 crc kubenswrapper[4919]: I0109 14:08:33.513717 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xn5qr"
Jan 09 14:08:33 crc kubenswrapper[4919]: I0109 14:08:33.528197 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xn5qr"]
Jan 09 14:08:33 crc kubenswrapper[4919]: I0109 14:08:33.646563 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39ae1425-7567-4310-a419-5a0103747339-utilities\") pod \"redhat-marketplace-xn5qr\" (UID: \"39ae1425-7567-4310-a419-5a0103747339\") " pod="openshift-marketplace/redhat-marketplace-xn5qr"
Jan 09 14:08:33 crc kubenswrapper[4919]: I0109 14:08:33.646611 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39ae1425-7567-4310-a419-5a0103747339-catalog-content\") pod \"redhat-marketplace-xn5qr\" (UID: \"39ae1425-7567-4310-a419-5a0103747339\") " pod="openshift-marketplace/redhat-marketplace-xn5qr"
Jan 09 14:08:33 crc kubenswrapper[4919]: I0109 14:08:33.646961 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cv5s\" (UniqueName: \"kubernetes.io/projected/39ae1425-7567-4310-a419-5a0103747339-kube-api-access-5cv5s\") pod \"redhat-marketplace-xn5qr\" (UID: \"39ae1425-7567-4310-a419-5a0103747339\") " pod="openshift-marketplace/redhat-marketplace-xn5qr"
Jan 09 14:08:33 crc kubenswrapper[4919]: I0109 14:08:33.708029 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rh7vp"]
Jan 09 14:08:33 crc kubenswrapper[4919]: I0109 14:08:33.710626 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rh7vp"
Jan 09 14:08:33 crc kubenswrapper[4919]: I0109 14:08:33.733461 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rh7vp"]
Jan 09 14:08:33 crc kubenswrapper[4919]: I0109 14:08:33.749839 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cv5s\" (UniqueName: \"kubernetes.io/projected/39ae1425-7567-4310-a419-5a0103747339-kube-api-access-5cv5s\") pod \"redhat-marketplace-xn5qr\" (UID: \"39ae1425-7567-4310-a419-5a0103747339\") " pod="openshift-marketplace/redhat-marketplace-xn5qr"
Jan 09 14:08:33 crc kubenswrapper[4919]: I0109 14:08:33.749947 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39ae1425-7567-4310-a419-5a0103747339-utilities\") pod \"redhat-marketplace-xn5qr\" (UID: \"39ae1425-7567-4310-a419-5a0103747339\") " pod="openshift-marketplace/redhat-marketplace-xn5qr"
Jan 09 14:08:33 crc kubenswrapper[4919]: I0109 14:08:33.749988 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39ae1425-7567-4310-a419-5a0103747339-catalog-content\") pod \"redhat-marketplace-xn5qr\" (UID: \"39ae1425-7567-4310-a419-5a0103747339\") " pod="openshift-marketplace/redhat-marketplace-xn5qr"
Jan 09 14:08:33 crc kubenswrapper[4919]: I0109 14:08:33.750693 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39ae1425-7567-4310-a419-5a0103747339-catalog-content\") pod \"redhat-marketplace-xn5qr\" (UID: \"39ae1425-7567-4310-a419-5a0103747339\") " pod="openshift-marketplace/redhat-marketplace-xn5qr"
pod="openshift-marketplace/redhat-marketplace-xn5qr" Jan 09 14:08:33 crc kubenswrapper[4919]: I0109 14:08:33.751011 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39ae1425-7567-4310-a419-5a0103747339-utilities\") pod \"redhat-marketplace-xn5qr\" (UID: \"39ae1425-7567-4310-a419-5a0103747339\") " pod="openshift-marketplace/redhat-marketplace-xn5qr" Jan 09 14:08:33 crc kubenswrapper[4919]: I0109 14:08:33.780383 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cv5s\" (UniqueName: \"kubernetes.io/projected/39ae1425-7567-4310-a419-5a0103747339-kube-api-access-5cv5s\") pod \"redhat-marketplace-xn5qr\" (UID: \"39ae1425-7567-4310-a419-5a0103747339\") " pod="openshift-marketplace/redhat-marketplace-xn5qr" Jan 09 14:08:33 crc kubenswrapper[4919]: I0109 14:08:33.848075 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xn5qr" Jan 09 14:08:33 crc kubenswrapper[4919]: I0109 14:08:33.851945 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s74fm\" (UniqueName: \"kubernetes.io/projected/cce4ea03-b5cd-4a55-93be-08bbc712745f-kube-api-access-s74fm\") pod \"certified-operators-rh7vp\" (UID: \"cce4ea03-b5cd-4a55-93be-08bbc712745f\") " pod="openshift-marketplace/certified-operators-rh7vp" Jan 09 14:08:33 crc kubenswrapper[4919]: I0109 14:08:33.852013 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cce4ea03-b5cd-4a55-93be-08bbc712745f-utilities\") pod \"certified-operators-rh7vp\" (UID: \"cce4ea03-b5cd-4a55-93be-08bbc712745f\") " pod="openshift-marketplace/certified-operators-rh7vp" Jan 09 14:08:33 crc kubenswrapper[4919]: I0109 14:08:33.852037 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cce4ea03-b5cd-4a55-93be-08bbc712745f-catalog-content\") pod \"certified-operators-rh7vp\" (UID: \"cce4ea03-b5cd-4a55-93be-08bbc712745f\") " pod="openshift-marketplace/certified-operators-rh7vp" Jan 09 14:08:33 crc kubenswrapper[4919]: I0109 14:08:33.954246 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s74fm\" (UniqueName: \"kubernetes.io/projected/cce4ea03-b5cd-4a55-93be-08bbc712745f-kube-api-access-s74fm\") pod \"certified-operators-rh7vp\" (UID: \"cce4ea03-b5cd-4a55-93be-08bbc712745f\") " pod="openshift-marketplace/certified-operators-rh7vp" Jan 09 14:08:33 crc kubenswrapper[4919]: I0109 14:08:33.954327 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cce4ea03-b5cd-4a55-93be-08bbc712745f-utilities\") pod \"certified-operators-rh7vp\" (UID: \"cce4ea03-b5cd-4a55-93be-08bbc712745f\") " pod="openshift-marketplace/certified-operators-rh7vp" Jan 09 14:08:33 crc kubenswrapper[4919]: I0109 14:08:33.954350 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cce4ea03-b5cd-4a55-93be-08bbc712745f-catalog-content\") pod \"certified-operators-rh7vp\" (UID: \"cce4ea03-b5cd-4a55-93be-08bbc712745f\") " pod="openshift-marketplace/certified-operators-rh7vp" Jan 09 14:08:33 crc kubenswrapper[4919]: I0109 14:08:33.954859 4919 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cce4ea03-b5cd-4a55-93be-08bbc712745f-catalog-content\") pod \"certified-operators-rh7vp\" (UID: \"cce4ea03-b5cd-4a55-93be-08bbc712745f\") " pod="openshift-marketplace/certified-operators-rh7vp" Jan 09 14:08:33 crc kubenswrapper[4919]: I0109 14:08:33.955352 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cce4ea03-b5cd-4a55-93be-08bbc712745f-utilities\") pod \"certified-operators-rh7vp\" (UID: \"cce4ea03-b5cd-4a55-93be-08bbc712745f\") " pod="openshift-marketplace/certified-operators-rh7vp" Jan 09 14:08:33 crc kubenswrapper[4919]: I0109 14:08:33.990190 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s74fm\" (UniqueName: \"kubernetes.io/projected/cce4ea03-b5cd-4a55-93be-08bbc712745f-kube-api-access-s74fm\") pod \"certified-operators-rh7vp\" (UID: \"cce4ea03-b5cd-4a55-93be-08bbc712745f\") " pod="openshift-marketplace/certified-operators-rh7vp" Jan 09 14:08:34 crc kubenswrapper[4919]: I0109 14:08:34.028369 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rh7vp" Jan 09 14:08:34 crc kubenswrapper[4919]: I0109 14:08:34.459018 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xn5qr"] Jan 09 14:08:34 crc kubenswrapper[4919]: I0109 14:08:34.960807 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xn5qr" event={"ID":"39ae1425-7567-4310-a419-5a0103747339","Type":"ContainerStarted","Data":"ca65f7aaea688e39ef8c1fba85fd978cee3f8e81afc57e0bb879657523cb3039"} Jan 09 14:08:36 crc kubenswrapper[4919]: I0109 14:08:35.751461 4919 scope.go:117] "RemoveContainer" containerID="cb9cc8141ac739eabcc1054e82a3a41d2dac68fb980f94b01a302b9c1cfa7794" Jan 09 14:08:36 crc kubenswrapper[4919]: E0109 14:08:35.751977 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:08:36 crc kubenswrapper[4919]: I0109 14:08:36.018466 4919 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-75f6ff484-ll94k" podUID="488f8708-4c49-429f-9697-a00b8fadd486" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 09 14:08:36 crc kubenswrapper[4919]: I0109 14:08:36.018783 4919 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-75f6ff484-ll94k" podUID="488f8708-4c49-429f-9697-a00b8fadd486" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 09 14:08:36 crc kubenswrapper[4919]: I0109 14:08:36.139899 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rh7vp"] Jan 09 14:08:36 crc kubenswrapper[4919]: I0109 14:08:36.160276 4919 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-marketplace/community-operators-tg8gf"] Jan 09 14:08:36 crc kubenswrapper[4919]: I0109 14:08:36.162672 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tg8gf" Jan 09 14:08:36 crc kubenswrapper[4919]: I0109 14:08:36.196990 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tg8gf"] Jan 09 14:08:36 crc kubenswrapper[4919]: I0109 14:08:36.208112 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a48d6b53-344c-4ca2-bb5a-4cb5fa446eb2-utilities\") pod \"community-operators-tg8gf\" (UID: \"a48d6b53-344c-4ca2-bb5a-4cb5fa446eb2\") " pod="openshift-marketplace/community-operators-tg8gf" Jan 09 14:08:36 crc kubenswrapper[4919]: I0109 14:08:36.208191 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gnc6\" (UniqueName: \"kubernetes.io/projected/a48d6b53-344c-4ca2-bb5a-4cb5fa446eb2-kube-api-access-7gnc6\") pod \"community-operators-tg8gf\" (UID: \"a48d6b53-344c-4ca2-bb5a-4cb5fa446eb2\") " pod="openshift-marketplace/community-operators-tg8gf" Jan 09 14:08:36 crc kubenswrapper[4919]: I0109 14:08:36.208370 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a48d6b53-344c-4ca2-bb5a-4cb5fa446eb2-catalog-content\") pod \"community-operators-tg8gf\" (UID: \"a48d6b53-344c-4ca2-bb5a-4cb5fa446eb2\") " pod="openshift-marketplace/community-operators-tg8gf" Jan 09 14:08:36 crc kubenswrapper[4919]: I0109 14:08:36.310777 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a48d6b53-344c-4ca2-bb5a-4cb5fa446eb2-catalog-content\") pod \"community-operators-tg8gf\" (UID: \"a48d6b53-344c-4ca2-bb5a-4cb5fa446eb2\") " pod="openshift-marketplace/community-operators-tg8gf" Jan 09 14:08:36 crc kubenswrapper[4919]: I0109 14:08:36.310934 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a48d6b53-344c-4ca2-bb5a-4cb5fa446eb2-utilities\") pod \"community-operators-tg8gf\" (UID: \"a48d6b53-344c-4ca2-bb5a-4cb5fa446eb2\") " pod="openshift-marketplace/community-operators-tg8gf" Jan 09 14:08:36 crc kubenswrapper[4919]: I0109 14:08:36.310988 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gnc6\" (UniqueName: \"kubernetes.io/projected/a48d6b53-344c-4ca2-bb5a-4cb5fa446eb2-kube-api-access-7gnc6\") pod \"community-operators-tg8gf\" (UID: \"a48d6b53-344c-4ca2-bb5a-4cb5fa446eb2\") " pod="openshift-marketplace/community-operators-tg8gf" Jan 09 14:08:36 crc kubenswrapper[4919]: I0109 14:08:36.312511 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a48d6b53-344c-4ca2-bb5a-4cb5fa446eb2-catalog-content\") pod \"community-operators-tg8gf\" (UID: \"a48d6b53-344c-4ca2-bb5a-4cb5fa446eb2\") " pod="openshift-marketplace/community-operators-tg8gf" Jan 09 14:08:36 crc kubenswrapper[4919]: I0109 14:08:36.313022 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a48d6b53-344c-4ca2-bb5a-4cb5fa446eb2-utilities\") pod \"community-operators-tg8gf\" (UID: 
\"a48d6b53-344c-4ca2-bb5a-4cb5fa446eb2\") " pod="openshift-marketplace/community-operators-tg8gf" Jan 09 14:08:36 crc kubenswrapper[4919]: I0109 14:08:36.347067 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gnc6\" (UniqueName: \"kubernetes.io/projected/a48d6b53-344c-4ca2-bb5a-4cb5fa446eb2-kube-api-access-7gnc6\") pod \"community-operators-tg8gf\" (UID: \"a48d6b53-344c-4ca2-bb5a-4cb5fa446eb2\") " pod="openshift-marketplace/community-operators-tg8gf" Jan 09 14:08:36 crc kubenswrapper[4919]: I0109 14:08:36.542216 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tg8gf" Jan 09 14:08:36 crc kubenswrapper[4919]: W0109 14:08:36.859471 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda48d6b53_344c_4ca2_bb5a_4cb5fa446eb2.slice/crio-262774f48305f86162c492ef65adf6782e0fe7a30b949ff4e40748e47e602f65 WatchSource:0}: Error finding container 262774f48305f86162c492ef65adf6782e0fe7a30b949ff4e40748e47e602f65: Status 404 returned error can't find the container with id 262774f48305f86162c492ef65adf6782e0fe7a30b949ff4e40748e47e602f65 Jan 09 14:08:36 crc kubenswrapper[4919]: I0109 14:08:36.865880 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tg8gf"] Jan 09 14:08:36 crc kubenswrapper[4919]: I0109 14:08:36.978605 4919 generic.go:334] "Generic (PLEG): container finished" podID="39ae1425-7567-4310-a419-5a0103747339" containerID="55e821d10fd3819eb03e767f03651da7797702d6a709119838e9f17e01dff6a9" exitCode=0 Jan 09 14:08:36 crc kubenswrapper[4919]: I0109 14:08:36.978768 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xn5qr" event={"ID":"39ae1425-7567-4310-a419-5a0103747339","Type":"ContainerDied","Data":"55e821d10fd3819eb03e767f03651da7797702d6a709119838e9f17e01dff6a9"} Jan 09 14:08:36 crc kubenswrapper[4919]: I0109 14:08:36.980111 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tg8gf" event={"ID":"a48d6b53-344c-4ca2-bb5a-4cb5fa446eb2","Type":"ContainerStarted","Data":"262774f48305f86162c492ef65adf6782e0fe7a30b949ff4e40748e47e602f65"} Jan 09 14:08:36 crc kubenswrapper[4919]: I0109 14:08:36.982062 4919 generic.go:334] "Generic (PLEG): container finished" podID="cce4ea03-b5cd-4a55-93be-08bbc712745f" containerID="3931e0537d748dac705e4d88b09d06b9ce8cd2ecc983f78813afddbcc1126ac1" exitCode=0 Jan 09 14:08:36 crc kubenswrapper[4919]: I0109 14:08:36.982087 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rh7vp" event={"ID":"cce4ea03-b5cd-4a55-93be-08bbc712745f","Type":"ContainerDied","Data":"3931e0537d748dac705e4d88b09d06b9ce8cd2ecc983f78813afddbcc1126ac1"} Jan 09 14:08:36 crc kubenswrapper[4919]: I0109 14:08:36.982100 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rh7vp" event={"ID":"cce4ea03-b5cd-4a55-93be-08bbc712745f","Type":"ContainerStarted","Data":"cd247692ea5380b4c83a3fcdea0efe28547fa521216b1a05e667a24a636c0810"} Jan 09 14:08:37 crc kubenswrapper[4919]: I0109 14:08:37.992847 4919 generic.go:334] "Generic (PLEG): container finished" podID="39ae1425-7567-4310-a419-5a0103747339" containerID="86f118e0e775443e669cfc08ec9caa580a39be3e3aa1824dfe393a8475280caa" exitCode=0 Jan 09 14:08:37 crc kubenswrapper[4919]: I0109 14:08:37.992896 4919 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xn5qr" event={"ID":"39ae1425-7567-4310-a419-5a0103747339","Type":"ContainerDied","Data":"86f118e0e775443e669cfc08ec9caa580a39be3e3aa1824dfe393a8475280caa"} Jan 09 14:08:37 crc kubenswrapper[4919]: I0109 14:08:37.995966 4919 generic.go:334] "Generic (PLEG): container finished" podID="a48d6b53-344c-4ca2-bb5a-4cb5fa446eb2" containerID="517e0d479f3f097b1b6ab29a656fec764959496a6f49b7afd88597f7dd2881ec" exitCode=0 Jan 09 14:08:37 crc kubenswrapper[4919]: I0109 14:08:37.996062 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tg8gf" event={"ID":"a48d6b53-344c-4ca2-bb5a-4cb5fa446eb2","Type":"ContainerDied","Data":"517e0d479f3f097b1b6ab29a656fec764959496a6f49b7afd88597f7dd2881ec"} Jan 09 14:08:38 crc kubenswrapper[4919]: I0109 14:08:38.000841 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rh7vp" event={"ID":"cce4ea03-b5cd-4a55-93be-08bbc712745f","Type":"ContainerStarted","Data":"6f219bda0ad58bad9a7b94707c8706d5f34f5c3c16d73299a325319b5af8a93c"} Jan 09 14:08:39 crc kubenswrapper[4919]: I0109 14:08:39.011317 4919 generic.go:334] "Generic (PLEG): container finished" podID="cce4ea03-b5cd-4a55-93be-08bbc712745f" containerID="6f219bda0ad58bad9a7b94707c8706d5f34f5c3c16d73299a325319b5af8a93c" exitCode=0 Jan 09 14:08:39 crc kubenswrapper[4919]: I0109 14:08:39.011614 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rh7vp" event={"ID":"cce4ea03-b5cd-4a55-93be-08bbc712745f","Type":"ContainerDied","Data":"6f219bda0ad58bad9a7b94707c8706d5f34f5c3c16d73299a325319b5af8a93c"} Jan 09 14:08:39 crc kubenswrapper[4919]: I0109 14:08:39.018198 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xn5qr" event={"ID":"39ae1425-7567-4310-a419-5a0103747339","Type":"ContainerStarted","Data":"f13d3e8867e67e617288c021989bf494c9970522d3016799bf55a6097931bb6a"} Jan 09 14:08:39 crc kubenswrapper[4919]: I0109 14:08:39.021317 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tg8gf" event={"ID":"a48d6b53-344c-4ca2-bb5a-4cb5fa446eb2","Type":"ContainerStarted","Data":"6c687deda10ca64b4ae1a81a07ef4b4a5ec5803e3d7817919008fb50243d366b"} Jan 09 14:08:39 crc kubenswrapper[4919]: I0109 14:08:39.062506 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xn5qr" podStartSLOduration=4.607143456 podStartE2EDuration="6.062488311s" podCreationTimestamp="2026-01-09 14:08:33 +0000 UTC" firstStartedPulling="2026-01-09 14:08:36.980615692 +0000 UTC m=+2296.528455142" lastFinishedPulling="2026-01-09 14:08:38.435960547 +0000 UTC m=+2297.983799997" observedRunningTime="2026-01-09 14:08:39.061775734 +0000 UTC m=+2298.609615184" watchObservedRunningTime="2026-01-09 14:08:39.062488311 +0000 UTC m=+2298.610327761" Jan 09 14:08:40 crc kubenswrapper[4919]: I0109 14:08:40.031036 4919 generic.go:334] "Generic (PLEG): container finished" podID="a48d6b53-344c-4ca2-bb5a-4cb5fa446eb2" containerID="6c687deda10ca64b4ae1a81a07ef4b4a5ec5803e3d7817919008fb50243d366b" exitCode=0 Jan 09 14:08:40 crc kubenswrapper[4919]: I0109 14:08:40.031079 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tg8gf" 
event={"ID":"a48d6b53-344c-4ca2-bb5a-4cb5fa446eb2","Type":"ContainerDied","Data":"6c687deda10ca64b4ae1a81a07ef4b4a5ec5803e3d7817919008fb50243d366b"} Jan 09 14:08:40 crc kubenswrapper[4919]: I0109 14:08:40.034285 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rh7vp" event={"ID":"cce4ea03-b5cd-4a55-93be-08bbc712745f","Type":"ContainerStarted","Data":"2af7a722cd60b0b8a96ddf25db72806a62c11dab4bbdac778a4232a58ed10329"} Jan 09 14:08:40 crc kubenswrapper[4919]: I0109 14:08:40.082771 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rh7vp" podStartSLOduration=4.414508679 podStartE2EDuration="7.082738989s" podCreationTimestamp="2026-01-09 14:08:33 +0000 UTC" firstStartedPulling="2026-01-09 14:08:36.983901874 +0000 UTC m=+2296.531741324" lastFinishedPulling="2026-01-09 14:08:39.652132184 +0000 UTC m=+2299.199971634" observedRunningTime="2026-01-09 14:08:40.073760895 +0000 UTC m=+2299.621600345" watchObservedRunningTime="2026-01-09 14:08:40.082738989 +0000 UTC m=+2299.630578429" Jan 09 14:08:42 crc kubenswrapper[4919]: I0109 14:08:42.069134 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tg8gf" event={"ID":"a48d6b53-344c-4ca2-bb5a-4cb5fa446eb2","Type":"ContainerStarted","Data":"452deb3a5c4ceaf757bb34e757d19f7cfcee0a0aa936710f0e8ceaf05023b57c"} Jan 09 14:08:42 crc kubenswrapper[4919]: I0109 14:08:42.100389 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tg8gf" podStartSLOduration=2.961262576 podStartE2EDuration="6.100371006s" podCreationTimestamp="2026-01-09 14:08:36 +0000 UTC" firstStartedPulling="2026-01-09 14:08:37.999591159 +0000 UTC m=+2297.547430609" lastFinishedPulling="2026-01-09 14:08:41.138699589 +0000 UTC m=+2300.686539039" observedRunningTime="2026-01-09 14:08:42.097767561 +0000 UTC m=+2301.645607011" watchObservedRunningTime="2026-01-09 14:08:42.100371006 +0000 UTC m=+2301.648210456" Jan 09 14:08:43 crc kubenswrapper[4919]: I0109 14:08:43.849493 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xn5qr" Jan 09 14:08:43 crc kubenswrapper[4919]: I0109 14:08:43.849857 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xn5qr" Jan 09 14:08:43 crc kubenswrapper[4919]: I0109 14:08:43.895687 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xn5qr" Jan 09 14:08:44 crc kubenswrapper[4919]: I0109 14:08:44.028729 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rh7vp" Jan 09 14:08:44 crc kubenswrapper[4919]: I0109 14:08:44.028873 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rh7vp" Jan 09 14:08:44 crc kubenswrapper[4919]: I0109 14:08:44.074962 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rh7vp" Jan 09 14:08:44 crc kubenswrapper[4919]: I0109 14:08:44.143951 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xn5qr" Jan 09 14:08:44 crc kubenswrapper[4919]: I0109 14:08:44.149595 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-rh7vp" Jan 09 14:08:46 crc kubenswrapper[4919]: I0109 14:08:46.501065 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xn5qr"] Jan 09 14:08:46 crc kubenswrapper[4919]: I0109 14:08:46.501616 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xn5qr" podUID="39ae1425-7567-4310-a419-5a0103747339" containerName="registry-server" containerID="cri-o://f13d3e8867e67e617288c021989bf494c9970522d3016799bf55a6097931bb6a" gracePeriod=2 Jan 09 14:08:46 crc kubenswrapper[4919]: I0109 14:08:46.542394 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tg8gf" Jan 09 14:08:46 crc kubenswrapper[4919]: I0109 14:08:46.542626 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tg8gf" Jan 09 14:08:46 crc kubenswrapper[4919]: I0109 14:08:46.588348 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tg8gf" Jan 09 14:08:46 crc kubenswrapper[4919]: I0109 14:08:46.699517 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rh7vp"] Jan 09 14:08:47 crc kubenswrapper[4919]: I0109 14:08:47.113787 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rh7vp" podUID="cce4ea03-b5cd-4a55-93be-08bbc712745f" containerName="registry-server" containerID="cri-o://2af7a722cd60b0b8a96ddf25db72806a62c11dab4bbdac778a4232a58ed10329" gracePeriod=2 Jan 09 14:08:47 crc kubenswrapper[4919]: I0109 14:08:47.161016 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tg8gf" Jan 09 14:08:47 crc kubenswrapper[4919]: I0109 14:08:47.730329 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rh7vp" Jan 09 14:08:47 crc kubenswrapper[4919]: I0109 14:08:47.742465 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cce4ea03-b5cd-4a55-93be-08bbc712745f-utilities\") pod \"cce4ea03-b5cd-4a55-93be-08bbc712745f\" (UID: \"cce4ea03-b5cd-4a55-93be-08bbc712745f\") " Jan 09 14:08:47 crc kubenswrapper[4919]: I0109 14:08:47.742772 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cce4ea03-b5cd-4a55-93be-08bbc712745f-catalog-content\") pod \"cce4ea03-b5cd-4a55-93be-08bbc712745f\" (UID: \"cce4ea03-b5cd-4a55-93be-08bbc712745f\") " Jan 09 14:08:47 crc kubenswrapper[4919]: I0109 14:08:47.742804 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s74fm\" (UniqueName: \"kubernetes.io/projected/cce4ea03-b5cd-4a55-93be-08bbc712745f-kube-api-access-s74fm\") pod \"cce4ea03-b5cd-4a55-93be-08bbc712745f\" (UID: \"cce4ea03-b5cd-4a55-93be-08bbc712745f\") " Jan 09 14:08:47 crc kubenswrapper[4919]: I0109 14:08:47.743272 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cce4ea03-b5cd-4a55-93be-08bbc712745f-utilities" (OuterVolumeSpecName: "utilities") pod "cce4ea03-b5cd-4a55-93be-08bbc712745f" (UID: "cce4ea03-b5cd-4a55-93be-08bbc712745f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 14:08:47 crc kubenswrapper[4919]: I0109 14:08:47.743381 4919 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cce4ea03-b5cd-4a55-93be-08bbc712745f-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 14:08:47 crc kubenswrapper[4919]: I0109 14:08:47.749634 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cce4ea03-b5cd-4a55-93be-08bbc712745f-kube-api-access-s74fm" (OuterVolumeSpecName: "kube-api-access-s74fm") pod "cce4ea03-b5cd-4a55-93be-08bbc712745f" (UID: "cce4ea03-b5cd-4a55-93be-08bbc712745f"). InnerVolumeSpecName "kube-api-access-s74fm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 14:08:47 crc kubenswrapper[4919]: I0109 14:08:47.805112 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cce4ea03-b5cd-4a55-93be-08bbc712745f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cce4ea03-b5cd-4a55-93be-08bbc712745f" (UID: "cce4ea03-b5cd-4a55-93be-08bbc712745f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 14:08:47 crc kubenswrapper[4919]: I0109 14:08:47.845117 4919 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cce4ea03-b5cd-4a55-93be-08bbc712745f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 14:08:47 crc kubenswrapper[4919]: I0109 14:08:47.845482 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s74fm\" (UniqueName: \"kubernetes.io/projected/cce4ea03-b5cd-4a55-93be-08bbc712745f-kube-api-access-s74fm\") on node \"crc\" DevicePath \"\"" Jan 09 14:08:48 crc kubenswrapper[4919]: I0109 14:08:48.088717 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xn5qr" Jan 09 14:08:48 crc kubenswrapper[4919]: I0109 14:08:48.128435 4919 generic.go:334] "Generic (PLEG): container finished" podID="cce4ea03-b5cd-4a55-93be-08bbc712745f" containerID="2af7a722cd60b0b8a96ddf25db72806a62c11dab4bbdac778a4232a58ed10329" exitCode=0 Jan 09 14:08:48 crc kubenswrapper[4919]: I0109 14:08:48.128503 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rh7vp" event={"ID":"cce4ea03-b5cd-4a55-93be-08bbc712745f","Type":"ContainerDied","Data":"2af7a722cd60b0b8a96ddf25db72806a62c11dab4bbdac778a4232a58ed10329"} Jan 09 14:08:48 crc kubenswrapper[4919]: I0109 14:08:48.128536 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rh7vp" event={"ID":"cce4ea03-b5cd-4a55-93be-08bbc712745f","Type":"ContainerDied","Data":"cd247692ea5380b4c83a3fcdea0efe28547fa521216b1a05e667a24a636c0810"} Jan 09 14:08:48 crc kubenswrapper[4919]: I0109 14:08:48.128556 4919 scope.go:117] "RemoveContainer" containerID="2af7a722cd60b0b8a96ddf25db72806a62c11dab4bbdac778a4232a58ed10329" Jan 09 14:08:48 crc kubenswrapper[4919]: I0109 14:08:48.128737 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rh7vp" Jan 09 14:08:48 crc kubenswrapper[4919]: I0109 14:08:48.132856 4919 generic.go:334] "Generic (PLEG): container finished" podID="39ae1425-7567-4310-a419-5a0103747339" containerID="f13d3e8867e67e617288c021989bf494c9970522d3016799bf55a6097931bb6a" exitCode=0 Jan 09 14:08:48 crc kubenswrapper[4919]: I0109 14:08:48.133444 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xn5qr" Jan 09 14:08:48 crc kubenswrapper[4919]: I0109 14:08:48.133712 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xn5qr" event={"ID":"39ae1425-7567-4310-a419-5a0103747339","Type":"ContainerDied","Data":"f13d3e8867e67e617288c021989bf494c9970522d3016799bf55a6097931bb6a"} Jan 09 14:08:48 crc kubenswrapper[4919]: I0109 14:08:48.133745 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xn5qr" event={"ID":"39ae1425-7567-4310-a419-5a0103747339","Type":"ContainerDied","Data":"ca65f7aaea688e39ef8c1fba85fd978cee3f8e81afc57e0bb879657523cb3039"} Jan 09 14:08:48 crc kubenswrapper[4919]: I0109 14:08:48.149572 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39ae1425-7567-4310-a419-5a0103747339-utilities\") pod \"39ae1425-7567-4310-a419-5a0103747339\" (UID: \"39ae1425-7567-4310-a419-5a0103747339\") " Jan 09 14:08:48 crc kubenswrapper[4919]: I0109 14:08:48.149768 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39ae1425-7567-4310-a419-5a0103747339-catalog-content\") pod \"39ae1425-7567-4310-a419-5a0103747339\" (UID: \"39ae1425-7567-4310-a419-5a0103747339\") " Jan 09 14:08:48 crc kubenswrapper[4919]: I0109 14:08:48.149865 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cv5s\" (UniqueName: \"kubernetes.io/projected/39ae1425-7567-4310-a419-5a0103747339-kube-api-access-5cv5s\") pod \"39ae1425-7567-4310-a419-5a0103747339\" (UID: \"39ae1425-7567-4310-a419-5a0103747339\") " Jan 09 14:08:48 crc kubenswrapper[4919]: I0109 14:08:48.156918 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39ae1425-7567-4310-a419-5a0103747339-utilities" (OuterVolumeSpecName: "utilities") pod "39ae1425-7567-4310-a419-5a0103747339" (UID: "39ae1425-7567-4310-a419-5a0103747339"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 14:08:48 crc kubenswrapper[4919]: I0109 14:08:48.160396 4919 scope.go:117] "RemoveContainer" containerID="6f219bda0ad58bad9a7b94707c8706d5f34f5c3c16d73299a325319b5af8a93c" Jan 09 14:08:48 crc kubenswrapper[4919]: I0109 14:08:48.160603 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39ae1425-7567-4310-a419-5a0103747339-kube-api-access-5cv5s" (OuterVolumeSpecName: "kube-api-access-5cv5s") pod "39ae1425-7567-4310-a419-5a0103747339" (UID: "39ae1425-7567-4310-a419-5a0103747339"). InnerVolumeSpecName "kube-api-access-5cv5s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 14:08:48 crc kubenswrapper[4919]: I0109 14:08:48.184355 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39ae1425-7567-4310-a419-5a0103747339-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "39ae1425-7567-4310-a419-5a0103747339" (UID: "39ae1425-7567-4310-a419-5a0103747339"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 14:08:48 crc kubenswrapper[4919]: I0109 14:08:48.189354 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rh7vp"] Jan 09 14:08:48 crc kubenswrapper[4919]: I0109 14:08:48.200768 4919 scope.go:117] "RemoveContainer" containerID="3931e0537d748dac705e4d88b09d06b9ce8cd2ecc983f78813afddbcc1126ac1" Jan 09 14:08:48 crc kubenswrapper[4919]: I0109 14:08:48.202710 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rh7vp"] Jan 09 14:08:48 crc kubenswrapper[4919]: I0109 14:08:48.228057 4919 scope.go:117] "RemoveContainer" containerID="2af7a722cd60b0b8a96ddf25db72806a62c11dab4bbdac778a4232a58ed10329" Jan 09 14:08:48 crc kubenswrapper[4919]: E0109 14:08:48.228599 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2af7a722cd60b0b8a96ddf25db72806a62c11dab4bbdac778a4232a58ed10329\": container with ID starting with 2af7a722cd60b0b8a96ddf25db72806a62c11dab4bbdac778a4232a58ed10329 not found: ID does not exist" containerID="2af7a722cd60b0b8a96ddf25db72806a62c11dab4bbdac778a4232a58ed10329" Jan 09 14:08:48 crc kubenswrapper[4919]: I0109 14:08:48.228655 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2af7a722cd60b0b8a96ddf25db72806a62c11dab4bbdac778a4232a58ed10329"} err="failed to get container status \"2af7a722cd60b0b8a96ddf25db72806a62c11dab4bbdac778a4232a58ed10329\": rpc error: code = NotFound desc = could not find container \"2af7a722cd60b0b8a96ddf25db72806a62c11dab4bbdac778a4232a58ed10329\": container with ID starting with 2af7a722cd60b0b8a96ddf25db72806a62c11dab4bbdac778a4232a58ed10329 not found: ID does not exist" Jan 09 14:08:48 crc kubenswrapper[4919]: I0109 14:08:48.228685 4919 scope.go:117] "RemoveContainer" containerID="6f219bda0ad58bad9a7b94707c8706d5f34f5c3c16d73299a325319b5af8a93c" Jan 09 14:08:48 crc kubenswrapper[4919]: E0109 14:08:48.229051 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f219bda0ad58bad9a7b94707c8706d5f34f5c3c16d73299a325319b5af8a93c\": container with ID starting with 6f219bda0ad58bad9a7b94707c8706d5f34f5c3c16d73299a325319b5af8a93c not found: ID does not exist" containerID="6f219bda0ad58bad9a7b94707c8706d5f34f5c3c16d73299a325319b5af8a93c" Jan 09 14:08:48 crc kubenswrapper[4919]: I0109 14:08:48.229089 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f219bda0ad58bad9a7b94707c8706d5f34f5c3c16d73299a325319b5af8a93c"} err="failed to get container status \"6f219bda0ad58bad9a7b94707c8706d5f34f5c3c16d73299a325319b5af8a93c\": rpc error: code = NotFound desc = could not find container \"6f219bda0ad58bad9a7b94707c8706d5f34f5c3c16d73299a325319b5af8a93c\": container with ID starting with 6f219bda0ad58bad9a7b94707c8706d5f34f5c3c16d73299a325319b5af8a93c not found: ID does not exist" Jan 09 14:08:48 crc kubenswrapper[4919]: I0109 
Jan 09 14:08:48 crc kubenswrapper[4919]: I0109 14:08:48.229112 4919 scope.go:117] "RemoveContainer" containerID="3931e0537d748dac705e4d88b09d06b9ce8cd2ecc983f78813afddbcc1126ac1"
Jan 09 14:08:48 crc kubenswrapper[4919]: E0109 14:08:48.229469 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3931e0537d748dac705e4d88b09d06b9ce8cd2ecc983f78813afddbcc1126ac1\": container with ID starting with 3931e0537d748dac705e4d88b09d06b9ce8cd2ecc983f78813afddbcc1126ac1 not found: ID does not exist" containerID="3931e0537d748dac705e4d88b09d06b9ce8cd2ecc983f78813afddbcc1126ac1"
Jan 09 14:08:48 crc kubenswrapper[4919]: I0109 14:08:48.229499 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3931e0537d748dac705e4d88b09d06b9ce8cd2ecc983f78813afddbcc1126ac1"} err="failed to get container status \"3931e0537d748dac705e4d88b09d06b9ce8cd2ecc983f78813afddbcc1126ac1\": rpc error: code = NotFound desc = could not find container \"3931e0537d748dac705e4d88b09d06b9ce8cd2ecc983f78813afddbcc1126ac1\": container with ID starting with 3931e0537d748dac705e4d88b09d06b9ce8cd2ecc983f78813afddbcc1126ac1 not found: ID does not exist"
Jan 09 14:08:48 crc kubenswrapper[4919]: I0109 14:08:48.229516 4919 scope.go:117] "RemoveContainer" containerID="f13d3e8867e67e617288c021989bf494c9970522d3016799bf55a6097931bb6a"
Jan 09 14:08:48 crc kubenswrapper[4919]: I0109 14:08:48.252751 4919 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39ae1425-7567-4310-a419-5a0103747339-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 09 14:08:48 crc kubenswrapper[4919]: I0109 14:08:48.252783 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cv5s\" (UniqueName: \"kubernetes.io/projected/39ae1425-7567-4310-a419-5a0103747339-kube-api-access-5cv5s\") on node \"crc\" DevicePath \"\""
Jan 09 14:08:48 crc kubenswrapper[4919]: I0109 14:08:48.252793 4919 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39ae1425-7567-4310-a419-5a0103747339-utilities\") on node \"crc\" DevicePath \"\""
Jan 09 14:08:48 crc kubenswrapper[4919]: I0109 14:08:48.252869 4919 scope.go:117] "RemoveContainer" containerID="86f118e0e775443e669cfc08ec9caa580a39be3e3aa1824dfe393a8475280caa"
Jan 09 14:08:48 crc kubenswrapper[4919]: I0109 14:08:48.287933 4919 scope.go:117] "RemoveContainer" containerID="55e821d10fd3819eb03e767f03651da7797702d6a709119838e9f17e01dff6a9"
Jan 09 14:08:48 crc kubenswrapper[4919]: I0109 14:08:48.316049 4919 scope.go:117] "RemoveContainer" containerID="f13d3e8867e67e617288c021989bf494c9970522d3016799bf55a6097931bb6a"
Jan 09 14:08:48 crc kubenswrapper[4919]: E0109 14:08:48.316632 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f13d3e8867e67e617288c021989bf494c9970522d3016799bf55a6097931bb6a\": container with ID starting with f13d3e8867e67e617288c021989bf494c9970522d3016799bf55a6097931bb6a not found: ID does not exist" containerID="f13d3e8867e67e617288c021989bf494c9970522d3016799bf55a6097931bb6a"
Jan 09 14:08:48 crc kubenswrapper[4919]: I0109 14:08:48.316682 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f13d3e8867e67e617288c021989bf494c9970522d3016799bf55a6097931bb6a"} err="failed to get container status \"f13d3e8867e67e617288c021989bf494c9970522d3016799bf55a6097931bb6a\": rpc error: code = NotFound desc = could not find container \"f13d3e8867e67e617288c021989bf494c9970522d3016799bf55a6097931bb6a\": container with ID starting with f13d3e8867e67e617288c021989bf494c9970522d3016799bf55a6097931bb6a not found: ID does not exist"
Jan 09 14:08:48 crc kubenswrapper[4919]: I0109 14:08:48.316711 4919 scope.go:117] "RemoveContainer" containerID="86f118e0e775443e669cfc08ec9caa580a39be3e3aa1824dfe393a8475280caa"
Jan 09 14:08:48 crc kubenswrapper[4919]: E0109 14:08:48.317071 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86f118e0e775443e669cfc08ec9caa580a39be3e3aa1824dfe393a8475280caa\": container with ID starting with 86f118e0e775443e669cfc08ec9caa580a39be3e3aa1824dfe393a8475280caa not found: ID does not exist" containerID="86f118e0e775443e669cfc08ec9caa580a39be3e3aa1824dfe393a8475280caa"
Jan 09 14:08:48 crc kubenswrapper[4919]: I0109 14:08:48.317099 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86f118e0e775443e669cfc08ec9caa580a39be3e3aa1824dfe393a8475280caa"} err="failed to get container status \"86f118e0e775443e669cfc08ec9caa580a39be3e3aa1824dfe393a8475280caa\": rpc error: code = NotFound desc = could not find container \"86f118e0e775443e669cfc08ec9caa580a39be3e3aa1824dfe393a8475280caa\": container with ID starting with 86f118e0e775443e669cfc08ec9caa580a39be3e3aa1824dfe393a8475280caa not found: ID does not exist"
Jan 09 14:08:48 crc kubenswrapper[4919]: I0109 14:08:48.317117 4919 scope.go:117] "RemoveContainer" containerID="55e821d10fd3819eb03e767f03651da7797702d6a709119838e9f17e01dff6a9"
Jan 09 14:08:48 crc kubenswrapper[4919]: E0109 14:08:48.317579 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55e821d10fd3819eb03e767f03651da7797702d6a709119838e9f17e01dff6a9\": container with ID starting with 55e821d10fd3819eb03e767f03651da7797702d6a709119838e9f17e01dff6a9 not found: ID does not exist" containerID="55e821d10fd3819eb03e767f03651da7797702d6a709119838e9f17e01dff6a9"
Jan 09 14:08:48 crc kubenswrapper[4919]: I0109 14:08:48.317610 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55e821d10fd3819eb03e767f03651da7797702d6a709119838e9f17e01dff6a9"} err="failed to get container status \"55e821d10fd3819eb03e767f03651da7797702d6a709119838e9f17e01dff6a9\": rpc error: code = NotFound desc = could not find container \"55e821d10fd3819eb03e767f03651da7797702d6a709119838e9f17e01dff6a9\": container with ID starting with 55e821d10fd3819eb03e767f03651da7797702d6a709119838e9f17e01dff6a9 not found: ID does not exist"
Jan 09 14:08:48 crc kubenswrapper[4919]: I0109 14:08:48.469175 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xn5qr"]
Jan 09 14:08:48 crc kubenswrapper[4919]: I0109 14:08:48.479551 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xn5qr"]
Jan 09 14:08:48 crc kubenswrapper[4919]: I0109 14:08:48.763549 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39ae1425-7567-4310-a419-5a0103747339" path="/var/lib/kubelet/pods/39ae1425-7567-4310-a419-5a0103747339/volumes"
Jan 09 14:08:48 crc kubenswrapper[4919]: I0109 14:08:48.764600 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cce4ea03-b5cd-4a55-93be-08bbc712745f" path="/var/lib/kubelet/pods/cce4ea03-b5cd-4a55-93be-08bbc712745f/volumes"
Jan 09 14:08:49 crc kubenswrapper[4919]: I0109 14:08:49.106225 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tg8gf"]
Jan 09 14:08:49 crc kubenswrapper[4919]: I0109 14:08:49.143658 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tg8gf" podUID="a48d6b53-344c-4ca2-bb5a-4cb5fa446eb2" containerName="registry-server" containerID="cri-o://452deb3a5c4ceaf757bb34e757d19f7cfcee0a0aa936710f0e8ceaf05023b57c" gracePeriod=2
Jan 09 14:08:49 crc kubenswrapper[4919]: I0109 14:08:49.607252 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tg8gf"
Jan 09 14:08:49 crc kubenswrapper[4919]: I0109 14:08:49.680144 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a48d6b53-344c-4ca2-bb5a-4cb5fa446eb2-catalog-content\") pod \"a48d6b53-344c-4ca2-bb5a-4cb5fa446eb2\" (UID: \"a48d6b53-344c-4ca2-bb5a-4cb5fa446eb2\") "
Jan 09 14:08:49 crc kubenswrapper[4919]: I0109 14:08:49.680387 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gnc6\" (UniqueName: \"kubernetes.io/projected/a48d6b53-344c-4ca2-bb5a-4cb5fa446eb2-kube-api-access-7gnc6\") pod \"a48d6b53-344c-4ca2-bb5a-4cb5fa446eb2\" (UID: \"a48d6b53-344c-4ca2-bb5a-4cb5fa446eb2\") "
Jan 09 14:08:49 crc kubenswrapper[4919]: I0109 14:08:49.680443 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a48d6b53-344c-4ca2-bb5a-4cb5fa446eb2-utilities\") pod \"a48d6b53-344c-4ca2-bb5a-4cb5fa446eb2\" (UID: \"a48d6b53-344c-4ca2-bb5a-4cb5fa446eb2\") "
Jan 09 14:08:49 crc kubenswrapper[4919]: I0109 14:08:49.681115 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a48d6b53-344c-4ca2-bb5a-4cb5fa446eb2-utilities" (OuterVolumeSpecName: "utilities") pod "a48d6b53-344c-4ca2-bb5a-4cb5fa446eb2" (UID: "a48d6b53-344c-4ca2-bb5a-4cb5fa446eb2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 14:08:49 crc kubenswrapper[4919]: I0109 14:08:49.686715 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a48d6b53-344c-4ca2-bb5a-4cb5fa446eb2-kube-api-access-7gnc6" (OuterVolumeSpecName: "kube-api-access-7gnc6") pod "a48d6b53-344c-4ca2-bb5a-4cb5fa446eb2" (UID: "a48d6b53-344c-4ca2-bb5a-4cb5fa446eb2"). InnerVolumeSpecName "kube-api-access-7gnc6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 14:08:49 crc kubenswrapper[4919]: I0109 14:08:49.733962 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a48d6b53-344c-4ca2-bb5a-4cb5fa446eb2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a48d6b53-344c-4ca2-bb5a-4cb5fa446eb2" (UID: "a48d6b53-344c-4ca2-bb5a-4cb5fa446eb2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 14:08:49 crc kubenswrapper[4919]: I0109 14:08:49.752301 4919 scope.go:117] "RemoveContainer" containerID="cb9cc8141ac739eabcc1054e82a3a41d2dac68fb980f94b01a302b9c1cfa7794"
Jan 09 14:08:49 crc kubenswrapper[4919]: E0109 14:08:49.752689 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c"
Jan 09 14:08:49 crc kubenswrapper[4919]: I0109 14:08:49.781799 4919 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a48d6b53-344c-4ca2-bb5a-4cb5fa446eb2-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 09 14:08:49 crc kubenswrapper[4919]: I0109 14:08:49.781845 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gnc6\" (UniqueName: \"kubernetes.io/projected/a48d6b53-344c-4ca2-bb5a-4cb5fa446eb2-kube-api-access-7gnc6\") on node \"crc\" DevicePath \"\""
Jan 09 14:08:49 crc kubenswrapper[4919]: I0109 14:08:49.781861 4919 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a48d6b53-344c-4ca2-bb5a-4cb5fa446eb2-utilities\") on node \"crc\" DevicePath \"\""
Jan 09 14:08:50 crc kubenswrapper[4919]: I0109 14:08:50.154757 4919 generic.go:334] "Generic (PLEG): container finished" podID="a48d6b53-344c-4ca2-bb5a-4cb5fa446eb2" containerID="452deb3a5c4ceaf757bb34e757d19f7cfcee0a0aa936710f0e8ceaf05023b57c" exitCode=0
Jan 09 14:08:50 crc kubenswrapper[4919]: I0109 14:08:50.154810 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tg8gf" event={"ID":"a48d6b53-344c-4ca2-bb5a-4cb5fa446eb2","Type":"ContainerDied","Data":"452deb3a5c4ceaf757bb34e757d19f7cfcee0a0aa936710f0e8ceaf05023b57c"}
Jan 09 14:08:50 crc kubenswrapper[4919]: I0109 14:08:50.154852 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tg8gf" event={"ID":"a48d6b53-344c-4ca2-bb5a-4cb5fa446eb2","Type":"ContainerDied","Data":"262774f48305f86162c492ef65adf6782e0fe7a30b949ff4e40748e47e602f65"}
Jan 09 14:08:50 crc kubenswrapper[4919]: I0109 14:08:50.154876 4919 scope.go:117] "RemoveContainer" containerID="452deb3a5c4ceaf757bb34e757d19f7cfcee0a0aa936710f0e8ceaf05023b57c"
Jan 09 14:08:50 crc kubenswrapper[4919]: I0109 14:08:50.154872 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tg8gf"
Jan 09 14:08:50 crc kubenswrapper[4919]: I0109 14:08:50.182012 4919 scope.go:117] "RemoveContainer" containerID="6c687deda10ca64b4ae1a81a07ef4b4a5ec5803e3d7817919008fb50243d366b"
Jan 09 14:08:50 crc kubenswrapper[4919]: I0109 14:08:50.194258 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tg8gf"]
Jan 09 14:08:50 crc kubenswrapper[4919]: I0109 14:08:50.204725 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tg8gf"]
Jan 09 14:08:50 crc kubenswrapper[4919]: I0109 14:08:50.216786 4919 scope.go:117] "RemoveContainer" containerID="517e0d479f3f097b1b6ab29a656fec764959496a6f49b7afd88597f7dd2881ec"
Jan 09 14:08:50 crc kubenswrapper[4919]: I0109 14:08:50.245820 4919 scope.go:117] "RemoveContainer" containerID="452deb3a5c4ceaf757bb34e757d19f7cfcee0a0aa936710f0e8ceaf05023b57c"
Jan 09 14:08:50 crc kubenswrapper[4919]: E0109 14:08:50.246694 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"452deb3a5c4ceaf757bb34e757d19f7cfcee0a0aa936710f0e8ceaf05023b57c\": container with ID starting with 452deb3a5c4ceaf757bb34e757d19f7cfcee0a0aa936710f0e8ceaf05023b57c not found: ID does not exist" containerID="452deb3a5c4ceaf757bb34e757d19f7cfcee0a0aa936710f0e8ceaf05023b57c"
Jan 09 14:08:50 crc kubenswrapper[4919]: I0109 14:08:50.246753 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"452deb3a5c4ceaf757bb34e757d19f7cfcee0a0aa936710f0e8ceaf05023b57c"} err="failed to get container status \"452deb3a5c4ceaf757bb34e757d19f7cfcee0a0aa936710f0e8ceaf05023b57c\": rpc error: code = NotFound desc = could not find container \"452deb3a5c4ceaf757bb34e757d19f7cfcee0a0aa936710f0e8ceaf05023b57c\": container with ID starting with 452deb3a5c4ceaf757bb34e757d19f7cfcee0a0aa936710f0e8ceaf05023b57c not found: ID does not exist"
Jan 09 14:08:50 crc kubenswrapper[4919]: I0109 14:08:50.246789 4919 scope.go:117] "RemoveContainer" containerID="6c687deda10ca64b4ae1a81a07ef4b4a5ec5803e3d7817919008fb50243d366b"
Jan 09 14:08:50 crc kubenswrapper[4919]: E0109 14:08:50.247154 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c687deda10ca64b4ae1a81a07ef4b4a5ec5803e3d7817919008fb50243d366b\": container with ID starting with 6c687deda10ca64b4ae1a81a07ef4b4a5ec5803e3d7817919008fb50243d366b not found: ID does not exist" containerID="6c687deda10ca64b4ae1a81a07ef4b4a5ec5803e3d7817919008fb50243d366b"
Jan 09 14:08:50 crc kubenswrapper[4919]: I0109 14:08:50.247192 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c687deda10ca64b4ae1a81a07ef4b4a5ec5803e3d7817919008fb50243d366b"} err="failed to get container status \"6c687deda10ca64b4ae1a81a07ef4b4a5ec5803e3d7817919008fb50243d366b\": rpc error: code = NotFound desc = could not find container \"6c687deda10ca64b4ae1a81a07ef4b4a5ec5803e3d7817919008fb50243d366b\": container with ID starting with 6c687deda10ca64b4ae1a81a07ef4b4a5ec5803e3d7817919008fb50243d366b not found: ID does not exist"
Jan 09 14:08:50 crc kubenswrapper[4919]: I0109 14:08:50.247237 4919 scope.go:117] "RemoveContainer" containerID="517e0d479f3f097b1b6ab29a656fec764959496a6f49b7afd88597f7dd2881ec"
Jan 09 14:08:50 crc kubenswrapper[4919]: E0109 14:08:50.247575 4919 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"517e0d479f3f097b1b6ab29a656fec764959496a6f49b7afd88597f7dd2881ec\": container with ID starting with 517e0d479f3f097b1b6ab29a656fec764959496a6f49b7afd88597f7dd2881ec not found: ID does not exist" containerID="517e0d479f3f097b1b6ab29a656fec764959496a6f49b7afd88597f7dd2881ec" Jan 09 14:08:50 crc kubenswrapper[4919]: I0109 14:08:50.247594 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"517e0d479f3f097b1b6ab29a656fec764959496a6f49b7afd88597f7dd2881ec"} err="failed to get container status \"517e0d479f3f097b1b6ab29a656fec764959496a6f49b7afd88597f7dd2881ec\": rpc error: code = NotFound desc = could not find container \"517e0d479f3f097b1b6ab29a656fec764959496a6f49b7afd88597f7dd2881ec\": container with ID starting with 517e0d479f3f097b1b6ab29a656fec764959496a6f49b7afd88597f7dd2881ec not found: ID does not exist" Jan 09 14:08:50 crc kubenswrapper[4919]: I0109 14:08:50.766493 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a48d6b53-344c-4ca2-bb5a-4cb5fa446eb2" path="/var/lib/kubelet/pods/a48d6b53-344c-4ca2-bb5a-4cb5fa446eb2/volumes" Jan 09 14:09:01 crc kubenswrapper[4919]: I0109 14:09:01.752319 4919 scope.go:117] "RemoveContainer" containerID="cb9cc8141ac739eabcc1054e82a3a41d2dac68fb980f94b01a302b9c1cfa7794" Jan 09 14:09:01 crc kubenswrapper[4919]: E0109 14:09:01.753027 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:09:12 crc kubenswrapper[4919]: I0109 14:09:12.752008 4919 scope.go:117] "RemoveContainer" containerID="cb9cc8141ac739eabcc1054e82a3a41d2dac68fb980f94b01a302b9c1cfa7794" Jan 09 14:09:12 crc kubenswrapper[4919]: E0109 14:09:12.752806 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:09:27 crc kubenswrapper[4919]: I0109 14:09:27.751742 4919 scope.go:117] "RemoveContainer" containerID="cb9cc8141ac739eabcc1054e82a3a41d2dac68fb980f94b01a302b9c1cfa7794" Jan 09 14:09:27 crc kubenswrapper[4919]: E0109 14:09:27.752458 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:09:41 crc kubenswrapper[4919]: I0109 14:09:41.751866 4919 scope.go:117] "RemoveContainer" containerID="cb9cc8141ac739eabcc1054e82a3a41d2dac68fb980f94b01a302b9c1cfa7794" Jan 09 14:09:41 crc kubenswrapper[4919]: E0109 14:09:41.752611 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:09:52 crc kubenswrapper[4919]: I0109 14:09:52.753273 4919 scope.go:117] "RemoveContainer" containerID="cb9cc8141ac739eabcc1054e82a3a41d2dac68fb980f94b01a302b9c1cfa7794" Jan 09 14:09:52 crc kubenswrapper[4919]: E0109 14:09:52.754035 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:10:04 crc kubenswrapper[4919]: I0109 14:10:04.751695 4919 scope.go:117] "RemoveContainer" containerID="cb9cc8141ac739eabcc1054e82a3a41d2dac68fb980f94b01a302b9c1cfa7794" Jan 09 14:10:04 crc kubenswrapper[4919]: E0109 14:10:04.752444 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:10:17 crc kubenswrapper[4919]: I0109 14:10:17.751848 4919 scope.go:117] "RemoveContainer" containerID="cb9cc8141ac739eabcc1054e82a3a41d2dac68fb980f94b01a302b9c1cfa7794" Jan 09 14:10:17 crc kubenswrapper[4919]: E0109 14:10:17.752569 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:10:28 crc kubenswrapper[4919]: I0109 14:10:28.751934 4919 scope.go:117] "RemoveContainer" containerID="cb9cc8141ac739eabcc1054e82a3a41d2dac68fb980f94b01a302b9c1cfa7794" Jan 09 14:10:28 crc kubenswrapper[4919]: E0109 14:10:28.752706 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:10:40 crc kubenswrapper[4919]: I0109 14:10:40.757622 4919 scope.go:117] "RemoveContainer" containerID="cb9cc8141ac739eabcc1054e82a3a41d2dac68fb980f94b01a302b9c1cfa7794" Jan 09 14:10:40 crc kubenswrapper[4919]: E0109 14:10:40.758506 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:10:52 crc kubenswrapper[4919]: I0109 14:10:52.751726 4919 scope.go:117] "RemoveContainer" containerID="cb9cc8141ac739eabcc1054e82a3a41d2dac68fb980f94b01a302b9c1cfa7794" Jan 09 14:10:52 crc kubenswrapper[4919]: E0109 14:10:52.752881 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:11:04 crc kubenswrapper[4919]: I0109 14:11:04.752413 4919 scope.go:117] "RemoveContainer" containerID="cb9cc8141ac739eabcc1054e82a3a41d2dac68fb980f94b01a302b9c1cfa7794" Jan 09 14:11:04 crc kubenswrapper[4919]: E0109 14:11:04.754116 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:11:15 crc kubenswrapper[4919]: I0109 14:11:15.751508 4919 scope.go:117] "RemoveContainer" containerID="cb9cc8141ac739eabcc1054e82a3a41d2dac68fb980f94b01a302b9c1cfa7794" Jan 09 14:11:15 crc kubenswrapper[4919]: E0109 14:11:15.752365 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:11:30 crc kubenswrapper[4919]: I0109 14:11:30.759088 4919 scope.go:117] "RemoveContainer" containerID="cb9cc8141ac739eabcc1054e82a3a41d2dac68fb980f94b01a302b9c1cfa7794" Jan 09 14:11:30 crc kubenswrapper[4919]: E0109 14:11:30.760330 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:11:42 crc kubenswrapper[4919]: I0109 14:11:42.752656 4919 scope.go:117] "RemoveContainer" containerID="cb9cc8141ac739eabcc1054e82a3a41d2dac68fb980f94b01a302b9c1cfa7794" Jan 09 14:11:42 crc kubenswrapper[4919]: E0109 14:11:42.753315 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" 
podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:11:54 crc kubenswrapper[4919]: I0109 14:11:54.752050 4919 scope.go:117] "RemoveContainer" containerID="cb9cc8141ac739eabcc1054e82a3a41d2dac68fb980f94b01a302b9c1cfa7794" Jan 09 14:11:54 crc kubenswrapper[4919]: E0109 14:11:54.752880 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:12:06 crc kubenswrapper[4919]: I0109 14:12:06.753022 4919 scope.go:117] "RemoveContainer" containerID="cb9cc8141ac739eabcc1054e82a3a41d2dac68fb980f94b01a302b9c1cfa7794" Jan 09 14:12:06 crc kubenswrapper[4919]: E0109 14:12:06.753825 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:12:20 crc kubenswrapper[4919]: I0109 14:12:20.774194 4919 scope.go:117] "RemoveContainer" containerID="cb9cc8141ac739eabcc1054e82a3a41d2dac68fb980f94b01a302b9c1cfa7794" Jan 09 14:12:20 crc kubenswrapper[4919]: E0109 14:12:20.775750 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:12:32 crc kubenswrapper[4919]: I0109 14:12:32.759296 4919 scope.go:117] "RemoveContainer" containerID="cb9cc8141ac739eabcc1054e82a3a41d2dac68fb980f94b01a302b9c1cfa7794" Jan 09 14:12:32 crc kubenswrapper[4919]: E0109 14:12:32.760468 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:12:39 crc kubenswrapper[4919]: I0109 14:12:39.475744 4919 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5f95dfdc65-kz6rq" podUID="e09e5f52-5a74-4a7c-bd84-079835a21fec" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Jan 09 14:12:44 crc kubenswrapper[4919]: I0109 14:12:44.360914 4919 generic.go:334] "Generic (PLEG): container finished" podID="acecffca-8dfb-4702-851a-f8dfe2659e98" containerID="b20a890cd69763d92c9e29902fb01658a18ca7c198bc7bc233ca85a3ee0c6857" exitCode=0 Jan 09 14:12:44 crc kubenswrapper[4919]: I0109 14:12:44.361065 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k82m6" 
event={"ID":"acecffca-8dfb-4702-851a-f8dfe2659e98","Type":"ContainerDied","Data":"b20a890cd69763d92c9e29902fb01658a18ca7c198bc7bc233ca85a3ee0c6857"} Jan 09 14:12:45 crc kubenswrapper[4919]: I0109 14:12:45.807029 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k82m6" Jan 09 14:12:45 crc kubenswrapper[4919]: I0109 14:12:45.897404 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zw226\" (UniqueName: \"kubernetes.io/projected/acecffca-8dfb-4702-851a-f8dfe2659e98-kube-api-access-zw226\") pod \"acecffca-8dfb-4702-851a-f8dfe2659e98\" (UID: \"acecffca-8dfb-4702-851a-f8dfe2659e98\") " Jan 09 14:12:45 crc kubenswrapper[4919]: I0109 14:12:45.897616 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/acecffca-8dfb-4702-851a-f8dfe2659e98-inventory\") pod \"acecffca-8dfb-4702-851a-f8dfe2659e98\" (UID: \"acecffca-8dfb-4702-851a-f8dfe2659e98\") " Jan 09 14:12:45 crc kubenswrapper[4919]: I0109 14:12:45.897721 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/acecffca-8dfb-4702-851a-f8dfe2659e98-ssh-key-openstack-edpm-ipam\") pod \"acecffca-8dfb-4702-851a-f8dfe2659e98\" (UID: \"acecffca-8dfb-4702-851a-f8dfe2659e98\") " Jan 09 14:12:45 crc kubenswrapper[4919]: I0109 14:12:45.897776 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acecffca-8dfb-4702-851a-f8dfe2659e98-libvirt-combined-ca-bundle\") pod \"acecffca-8dfb-4702-851a-f8dfe2659e98\" (UID: \"acecffca-8dfb-4702-851a-f8dfe2659e98\") " Jan 09 14:12:45 crc kubenswrapper[4919]: I0109 14:12:45.897834 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/acecffca-8dfb-4702-851a-f8dfe2659e98-libvirt-secret-0\") pod \"acecffca-8dfb-4702-851a-f8dfe2659e98\" (UID: \"acecffca-8dfb-4702-851a-f8dfe2659e98\") " Jan 09 14:12:45 crc kubenswrapper[4919]: I0109 14:12:45.904328 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acecffca-8dfb-4702-851a-f8dfe2659e98-kube-api-access-zw226" (OuterVolumeSpecName: "kube-api-access-zw226") pod "acecffca-8dfb-4702-851a-f8dfe2659e98" (UID: "acecffca-8dfb-4702-851a-f8dfe2659e98"). InnerVolumeSpecName "kube-api-access-zw226". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 14:12:45 crc kubenswrapper[4919]: I0109 14:12:45.906258 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acecffca-8dfb-4702-851a-f8dfe2659e98-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "acecffca-8dfb-4702-851a-f8dfe2659e98" (UID: "acecffca-8dfb-4702-851a-f8dfe2659e98"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 14:12:45 crc kubenswrapper[4919]: I0109 14:12:45.925887 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acecffca-8dfb-4702-851a-f8dfe2659e98-inventory" (OuterVolumeSpecName: "inventory") pod "acecffca-8dfb-4702-851a-f8dfe2659e98" (UID: "acecffca-8dfb-4702-851a-f8dfe2659e98"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 14:12:45 crc kubenswrapper[4919]: I0109 14:12:45.926980 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acecffca-8dfb-4702-851a-f8dfe2659e98-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "acecffca-8dfb-4702-851a-f8dfe2659e98" (UID: "acecffca-8dfb-4702-851a-f8dfe2659e98"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 14:12:45 crc kubenswrapper[4919]: I0109 14:12:45.928984 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acecffca-8dfb-4702-851a-f8dfe2659e98-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "acecffca-8dfb-4702-851a-f8dfe2659e98" (UID: "acecffca-8dfb-4702-851a-f8dfe2659e98"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 14:12:46 crc kubenswrapper[4919]: I0109 14:12:46.001364 4919 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/acecffca-8dfb-4702-851a-f8dfe2659e98-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 09 14:12:46 crc kubenswrapper[4919]: I0109 14:12:46.001626 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zw226\" (UniqueName: \"kubernetes.io/projected/acecffca-8dfb-4702-851a-f8dfe2659e98-kube-api-access-zw226\") on node \"crc\" DevicePath \"\"" Jan 09 14:12:46 crc kubenswrapper[4919]: I0109 14:12:46.001721 4919 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/acecffca-8dfb-4702-851a-f8dfe2659e98-inventory\") on node \"crc\" DevicePath \"\"" Jan 09 14:12:46 crc kubenswrapper[4919]: I0109 14:12:46.001868 4919 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/acecffca-8dfb-4702-851a-f8dfe2659e98-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 09 14:12:46 crc kubenswrapper[4919]: I0109 14:12:46.001927 4919 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acecffca-8dfb-4702-851a-f8dfe2659e98-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 14:12:46 crc kubenswrapper[4919]: I0109 14:12:46.387932 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k82m6" event={"ID":"acecffca-8dfb-4702-851a-f8dfe2659e98","Type":"ContainerDied","Data":"60de3f63468c65092e193cf50a268d13a622219d413b2c672d14900f44e9107e"} Jan 09 14:12:46 crc kubenswrapper[4919]: I0109 14:12:46.388011 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60de3f63468c65092e193cf50a268d13a622219d413b2c672d14900f44e9107e" Jan 09 14:12:46 crc kubenswrapper[4919]: I0109 14:12:46.388012 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k82m6" Jan 09 14:12:46 crc kubenswrapper[4919]: I0109 14:12:46.494952 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-v9kt9"] Jan 09 14:12:46 crc kubenswrapper[4919]: E0109 14:12:46.495740 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cce4ea03-b5cd-4a55-93be-08bbc712745f" containerName="extract-content" Jan 09 14:12:46 crc kubenswrapper[4919]: I0109 14:12:46.495763 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="cce4ea03-b5cd-4a55-93be-08bbc712745f" containerName="extract-content" Jan 09 14:12:46 crc kubenswrapper[4919]: E0109 14:12:46.495780 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a48d6b53-344c-4ca2-bb5a-4cb5fa446eb2" containerName="extract-utilities" Jan 09 14:12:46 crc kubenswrapper[4919]: I0109 14:12:46.495787 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="a48d6b53-344c-4ca2-bb5a-4cb5fa446eb2" containerName="extract-utilities" Jan 09 14:12:46 crc kubenswrapper[4919]: E0109 14:12:46.495803 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a48d6b53-344c-4ca2-bb5a-4cb5fa446eb2" containerName="extract-content" Jan 09 14:12:46 crc kubenswrapper[4919]: I0109 14:12:46.495811 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="a48d6b53-344c-4ca2-bb5a-4cb5fa446eb2" containerName="extract-content" Jan 09 14:12:46 crc kubenswrapper[4919]: E0109 14:12:46.495823 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cce4ea03-b5cd-4a55-93be-08bbc712745f" containerName="extract-utilities" Jan 09 14:12:46 crc kubenswrapper[4919]: I0109 14:12:46.495829 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="cce4ea03-b5cd-4a55-93be-08bbc712745f" containerName="extract-utilities" Jan 09 14:12:46 crc kubenswrapper[4919]: E0109 14:12:46.495840 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39ae1425-7567-4310-a419-5a0103747339" containerName="extract-utilities" Jan 09 14:12:46 crc kubenswrapper[4919]: I0109 14:12:46.495845 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="39ae1425-7567-4310-a419-5a0103747339" containerName="extract-utilities" Jan 09 14:12:46 crc kubenswrapper[4919]: E0109 14:12:46.495858 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cce4ea03-b5cd-4a55-93be-08bbc712745f" containerName="registry-server" Jan 09 14:12:46 crc kubenswrapper[4919]: I0109 14:12:46.495863 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="cce4ea03-b5cd-4a55-93be-08bbc712745f" containerName="registry-server" Jan 09 14:12:46 crc kubenswrapper[4919]: E0109 14:12:46.495883 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39ae1425-7567-4310-a419-5a0103747339" containerName="registry-server" Jan 09 14:12:46 crc kubenswrapper[4919]: I0109 14:12:46.495889 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="39ae1425-7567-4310-a419-5a0103747339" containerName="registry-server" Jan 09 14:12:46 crc kubenswrapper[4919]: E0109 14:12:46.495899 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acecffca-8dfb-4702-851a-f8dfe2659e98" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 09 14:12:46 crc kubenswrapper[4919]: I0109 14:12:46.495907 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="acecffca-8dfb-4702-851a-f8dfe2659e98" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 09 14:12:46 crc kubenswrapper[4919]: 
E0109 14:12:46.495919 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39ae1425-7567-4310-a419-5a0103747339" containerName="extract-content" Jan 09 14:12:46 crc kubenswrapper[4919]: I0109 14:12:46.495925 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="39ae1425-7567-4310-a419-5a0103747339" containerName="extract-content" Jan 09 14:12:46 crc kubenswrapper[4919]: E0109 14:12:46.495938 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a48d6b53-344c-4ca2-bb5a-4cb5fa446eb2" containerName="registry-server" Jan 09 14:12:46 crc kubenswrapper[4919]: I0109 14:12:46.495944 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="a48d6b53-344c-4ca2-bb5a-4cb5fa446eb2" containerName="registry-server" Jan 09 14:12:46 crc kubenswrapper[4919]: I0109 14:12:46.496154 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="39ae1425-7567-4310-a419-5a0103747339" containerName="registry-server" Jan 09 14:12:46 crc kubenswrapper[4919]: I0109 14:12:46.496347 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="cce4ea03-b5cd-4a55-93be-08bbc712745f" containerName="registry-server" Jan 09 14:12:46 crc kubenswrapper[4919]: I0109 14:12:46.496357 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="a48d6b53-344c-4ca2-bb5a-4cb5fa446eb2" containerName="registry-server" Jan 09 14:12:46 crc kubenswrapper[4919]: I0109 14:12:46.496370 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="acecffca-8dfb-4702-851a-f8dfe2659e98" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 09 14:12:46 crc kubenswrapper[4919]: I0109 14:12:46.497040 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v9kt9" Jan 09 14:12:46 crc kubenswrapper[4919]: I0109 14:12:46.499710 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 14:12:46 crc kubenswrapper[4919]: I0109 14:12:46.499910 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 09 14:12:46 crc kubenswrapper[4919]: I0109 14:12:46.500020 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 09 14:12:46 crc kubenswrapper[4919]: I0109 14:12:46.501403 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-69fb8" Jan 09 14:12:46 crc kubenswrapper[4919]: I0109 14:12:46.501624 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 09 14:12:46 crc kubenswrapper[4919]: I0109 14:12:46.501806 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 09 14:12:46 crc kubenswrapper[4919]: I0109 14:12:46.502095 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 09 14:12:46 crc kubenswrapper[4919]: I0109 14:12:46.523807 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-v9kt9"] Jan 09 14:12:46 crc kubenswrapper[4919]: I0109 14:12:46.614622 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v9kt9\" (UID: 
\"cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v9kt9" Jan 09 14:12:46 crc kubenswrapper[4919]: I0109 14:12:46.614797 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v9kt9\" (UID: \"cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v9kt9" Jan 09 14:12:46 crc kubenswrapper[4919]: I0109 14:12:46.614822 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v9kt9\" (UID: \"cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v9kt9" Jan 09 14:12:46 crc kubenswrapper[4919]: I0109 14:12:46.614851 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v9kt9\" (UID: \"cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v9kt9" Jan 09 14:12:46 crc kubenswrapper[4919]: I0109 14:12:46.614914 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v9kt9\" (UID: \"cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v9kt9" Jan 09 14:12:46 crc kubenswrapper[4919]: I0109 14:12:46.614961 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v9kt9\" (UID: \"cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v9kt9" Jan 09 14:12:46 crc kubenswrapper[4919]: I0109 14:12:46.615146 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v9kt9\" (UID: \"cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v9kt9" Jan 09 14:12:46 crc kubenswrapper[4919]: I0109 14:12:46.615199 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99d95\" (UniqueName: \"kubernetes.io/projected/cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1-kube-api-access-99d95\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v9kt9\" (UID: \"cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v9kt9" Jan 09 14:12:46 crc kubenswrapper[4919]: I0109 14:12:46.615278 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: 
\"kubernetes.io/secret/cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v9kt9\" (UID: \"cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v9kt9" Jan 09 14:12:46 crc kubenswrapper[4919]: I0109 14:12:46.716564 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v9kt9\" (UID: \"cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v9kt9" Jan 09 14:12:46 crc kubenswrapper[4919]: I0109 14:12:46.716636 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v9kt9\" (UID: \"cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v9kt9" Jan 09 14:12:46 crc kubenswrapper[4919]: I0109 14:12:46.716663 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v9kt9\" (UID: \"cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v9kt9" Jan 09 14:12:46 crc kubenswrapper[4919]: I0109 14:12:46.716714 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99d95\" (UniqueName: \"kubernetes.io/projected/cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1-kube-api-access-99d95\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v9kt9\" (UID: \"cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v9kt9" Jan 09 14:12:46 crc kubenswrapper[4919]: I0109 14:12:46.716780 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v9kt9\" (UID: \"cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v9kt9" Jan 09 14:12:46 crc kubenswrapper[4919]: I0109 14:12:46.717521 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v9kt9\" (UID: \"cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v9kt9" Jan 09 14:12:46 crc kubenswrapper[4919]: I0109 14:12:46.717594 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v9kt9\" (UID: \"cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v9kt9" Jan 09 14:12:46 crc kubenswrapper[4919]: I0109 14:12:46.717619 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v9kt9\" (UID: \"cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v9kt9" Jan 09 14:12:46 crc kubenswrapper[4919]: I0109 14:12:46.717655 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v9kt9\" (UID: \"cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v9kt9" Jan 09 14:12:46 crc kubenswrapper[4919]: I0109 14:12:46.718489 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v9kt9\" (UID: \"cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v9kt9" Jan 09 14:12:46 crc kubenswrapper[4919]: I0109 14:12:46.721249 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v9kt9\" (UID: \"cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v9kt9" Jan 09 14:12:46 crc kubenswrapper[4919]: I0109 14:12:46.721507 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v9kt9\" (UID: \"cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v9kt9" Jan 09 14:12:46 crc kubenswrapper[4919]: I0109 14:12:46.721568 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v9kt9\" (UID: \"cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v9kt9" Jan 09 14:12:46 crc kubenswrapper[4919]: I0109 14:12:46.722449 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v9kt9\" (UID: \"cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v9kt9" Jan 09 14:12:46 crc kubenswrapper[4919]: I0109 14:12:46.722732 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v9kt9\" (UID: \"cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v9kt9" Jan 09 14:12:46 crc kubenswrapper[4919]: I0109 14:12:46.727451 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1-nova-migration-ssh-key-0\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-v9kt9\" (UID: \"cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v9kt9" Jan 09 14:12:46 crc kubenswrapper[4919]: I0109 14:12:46.730115 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v9kt9\" (UID: \"cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v9kt9" Jan 09 14:12:46 crc kubenswrapper[4919]: I0109 14:12:46.735986 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99d95\" (UniqueName: \"kubernetes.io/projected/cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1-kube-api-access-99d95\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v9kt9\" (UID: \"cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v9kt9" Jan 09 14:12:46 crc kubenswrapper[4919]: I0109 14:12:46.752247 4919 scope.go:117] "RemoveContainer" containerID="cb9cc8141ac739eabcc1054e82a3a41d2dac68fb980f94b01a302b9c1cfa7794" Jan 09 14:12:46 crc kubenswrapper[4919]: E0109 14:12:46.752706 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:12:46 crc kubenswrapper[4919]: I0109 14:12:46.815557 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v9kt9" Jan 09 14:12:47 crc kubenswrapper[4919]: I0109 14:12:47.445542 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-v9kt9"] Jan 09 14:12:48 crc kubenswrapper[4919]: I0109 14:12:48.405314 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v9kt9" event={"ID":"cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1","Type":"ContainerStarted","Data":"d8351a742f98b5b875a5463ee2795951bb67f30551e24465721b3ff3bf73d644"} Jan 09 14:12:48 crc kubenswrapper[4919]: I0109 14:12:48.405815 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v9kt9" event={"ID":"cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1","Type":"ContainerStarted","Data":"06a5837035c66469322a4ba0eb6f79a97dd48288d776c86a34c5be66e03ff383"} Jan 09 14:12:48 crc kubenswrapper[4919]: I0109 14:12:48.432488 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v9kt9" podStartSLOduration=1.866358896 podStartE2EDuration="2.432463829s" podCreationTimestamp="2026-01-09 14:12:46 +0000 UTC" firstStartedPulling="2026-01-09 14:12:47.449642633 +0000 UTC m=+2546.997482083" lastFinishedPulling="2026-01-09 14:12:48.015747566 +0000 UTC m=+2547.563587016" observedRunningTime="2026-01-09 14:12:48.421007777 +0000 UTC m=+2547.968847237" watchObservedRunningTime="2026-01-09 14:12:48.432463829 +0000 UTC m=+2547.980303279" Jan 09 14:13:00 crc kubenswrapper[4919]: I0109 14:13:00.771992 4919 scope.go:117] "RemoveContainer" containerID="cb9cc8141ac739eabcc1054e82a3a41d2dac68fb980f94b01a302b9c1cfa7794" Jan 09 14:13:00 crc kubenswrapper[4919]: E0109 14:13:00.773333 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:13:14 crc kubenswrapper[4919]: I0109 14:13:14.752017 4919 scope.go:117] "RemoveContainer" containerID="cb9cc8141ac739eabcc1054e82a3a41d2dac68fb980f94b01a302b9c1cfa7794" Jan 09 14:13:14 crc kubenswrapper[4919]: E0109 14:13:14.754256 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:13:27 crc kubenswrapper[4919]: I0109 14:13:27.752540 4919 scope.go:117] "RemoveContainer" containerID="cb9cc8141ac739eabcc1054e82a3a41d2dac68fb980f94b01a302b9c1cfa7794" Jan 09 14:13:28 crc kubenswrapper[4919]: I0109 14:13:28.745409 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" event={"ID":"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c","Type":"ContainerStarted","Data":"679d9025ae6777d87901a54436516242183495cc09d48edeeb0c1ab27d036468"} Jan 09 14:15:00 crc kubenswrapper[4919]: I0109 14:15:00.144448 4919 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29466135-2kbq6"] Jan 09 14:15:00 crc kubenswrapper[4919]: I0109 14:15:00.146810 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29466135-2kbq6" Jan 09 14:15:00 crc kubenswrapper[4919]: I0109 14:15:00.149840 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 09 14:15:00 crc kubenswrapper[4919]: I0109 14:15:00.150059 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 09 14:15:00 crc kubenswrapper[4919]: I0109 14:15:00.157798 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29466135-2kbq6"] Jan 09 14:15:00 crc kubenswrapper[4919]: I0109 14:15:00.299960 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4kxm\" (UniqueName: \"kubernetes.io/projected/e8bbed1d-4d6d-448c-88bd-18b6b5c02825-kube-api-access-k4kxm\") pod \"collect-profiles-29466135-2kbq6\" (UID: \"e8bbed1d-4d6d-448c-88bd-18b6b5c02825\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466135-2kbq6" Jan 09 14:15:00 crc kubenswrapper[4919]: I0109 14:15:00.300075 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8bbed1d-4d6d-448c-88bd-18b6b5c02825-config-volume\") pod \"collect-profiles-29466135-2kbq6\" (UID: \"e8bbed1d-4d6d-448c-88bd-18b6b5c02825\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466135-2kbq6" Jan 09 14:15:00 crc kubenswrapper[4919]: I0109 14:15:00.300255 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e8bbed1d-4d6d-448c-88bd-18b6b5c02825-secret-volume\") pod \"collect-profiles-29466135-2kbq6\" (UID: \"e8bbed1d-4d6d-448c-88bd-18b6b5c02825\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466135-2kbq6" Jan 09 14:15:00 crc kubenswrapper[4919]: I0109 14:15:00.401781 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4kxm\" (UniqueName: \"kubernetes.io/projected/e8bbed1d-4d6d-448c-88bd-18b6b5c02825-kube-api-access-k4kxm\") pod \"collect-profiles-29466135-2kbq6\" (UID: \"e8bbed1d-4d6d-448c-88bd-18b6b5c02825\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466135-2kbq6" Jan 09 14:15:00 crc kubenswrapper[4919]: I0109 14:15:00.401871 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8bbed1d-4d6d-448c-88bd-18b6b5c02825-config-volume\") pod \"collect-profiles-29466135-2kbq6\" (UID: \"e8bbed1d-4d6d-448c-88bd-18b6b5c02825\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466135-2kbq6" Jan 09 14:15:00 crc kubenswrapper[4919]: I0109 14:15:00.402017 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e8bbed1d-4d6d-448c-88bd-18b6b5c02825-secret-volume\") pod \"collect-profiles-29466135-2kbq6\" (UID: \"e8bbed1d-4d6d-448c-88bd-18b6b5c02825\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466135-2kbq6" Jan 09 14:15:00 crc kubenswrapper[4919]: I0109 14:15:00.403020 
4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8bbed1d-4d6d-448c-88bd-18b6b5c02825-config-volume\") pod \"collect-profiles-29466135-2kbq6\" (UID: \"e8bbed1d-4d6d-448c-88bd-18b6b5c02825\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466135-2kbq6" Jan 09 14:15:00 crc kubenswrapper[4919]: I0109 14:15:00.415094 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e8bbed1d-4d6d-448c-88bd-18b6b5c02825-secret-volume\") pod \"collect-profiles-29466135-2kbq6\" (UID: \"e8bbed1d-4d6d-448c-88bd-18b6b5c02825\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466135-2kbq6" Jan 09 14:15:00 crc kubenswrapper[4919]: I0109 14:15:00.419691 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4kxm\" (UniqueName: \"kubernetes.io/projected/e8bbed1d-4d6d-448c-88bd-18b6b5c02825-kube-api-access-k4kxm\") pod \"collect-profiles-29466135-2kbq6\" (UID: \"e8bbed1d-4d6d-448c-88bd-18b6b5c02825\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466135-2kbq6" Jan 09 14:15:00 crc kubenswrapper[4919]: I0109 14:15:00.478737 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29466135-2kbq6" Jan 09 14:15:00 crc kubenswrapper[4919]: I0109 14:15:00.923830 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29466135-2kbq6"] Jan 09 14:15:01 crc kubenswrapper[4919]: I0109 14:15:01.550927 4919 generic.go:334] "Generic (PLEG): container finished" podID="e8bbed1d-4d6d-448c-88bd-18b6b5c02825" containerID="bf01434f0f7eb2da53fae539f316edcaabca0943d585d9986450b38ed08125b5" exitCode=0 Jan 09 14:15:01 crc kubenswrapper[4919]: I0109 14:15:01.550952 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29466135-2kbq6" event={"ID":"e8bbed1d-4d6d-448c-88bd-18b6b5c02825","Type":"ContainerDied","Data":"bf01434f0f7eb2da53fae539f316edcaabca0943d585d9986450b38ed08125b5"} Jan 09 14:15:01 crc kubenswrapper[4919]: I0109 14:15:01.552352 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29466135-2kbq6" event={"ID":"e8bbed1d-4d6d-448c-88bd-18b6b5c02825","Type":"ContainerStarted","Data":"424871c02532b0e2997fb8cded94d1bae0c58bef5efdd14dd55a2fe3816805d5"} Jan 09 14:15:02 crc kubenswrapper[4919]: I0109 14:15:02.879432 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29466135-2kbq6" Jan 09 14:15:03 crc kubenswrapper[4919]: I0109 14:15:03.056253 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4kxm\" (UniqueName: \"kubernetes.io/projected/e8bbed1d-4d6d-448c-88bd-18b6b5c02825-kube-api-access-k4kxm\") pod \"e8bbed1d-4d6d-448c-88bd-18b6b5c02825\" (UID: \"e8bbed1d-4d6d-448c-88bd-18b6b5c02825\") " Jan 09 14:15:03 crc kubenswrapper[4919]: I0109 14:15:03.056745 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e8bbed1d-4d6d-448c-88bd-18b6b5c02825-secret-volume\") pod \"e8bbed1d-4d6d-448c-88bd-18b6b5c02825\" (UID: \"e8bbed1d-4d6d-448c-88bd-18b6b5c02825\") " Jan 09 14:15:03 crc kubenswrapper[4919]: I0109 14:15:03.056783 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8bbed1d-4d6d-448c-88bd-18b6b5c02825-config-volume\") pod \"e8bbed1d-4d6d-448c-88bd-18b6b5c02825\" (UID: \"e8bbed1d-4d6d-448c-88bd-18b6b5c02825\") " Jan 09 14:15:03 crc kubenswrapper[4919]: I0109 14:15:03.057489 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8bbed1d-4d6d-448c-88bd-18b6b5c02825-config-volume" (OuterVolumeSpecName: "config-volume") pod "e8bbed1d-4d6d-448c-88bd-18b6b5c02825" (UID: "e8bbed1d-4d6d-448c-88bd-18b6b5c02825"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 14:15:03 crc kubenswrapper[4919]: I0109 14:15:03.057676 4919 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8bbed1d-4d6d-448c-88bd-18b6b5c02825-config-volume\") on node \"crc\" DevicePath \"\"" Jan 09 14:15:03 crc kubenswrapper[4919]: I0109 14:15:03.063066 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8bbed1d-4d6d-448c-88bd-18b6b5c02825-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e8bbed1d-4d6d-448c-88bd-18b6b5c02825" (UID: "e8bbed1d-4d6d-448c-88bd-18b6b5c02825"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 14:15:03 crc kubenswrapper[4919]: I0109 14:15:03.063108 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8bbed1d-4d6d-448c-88bd-18b6b5c02825-kube-api-access-k4kxm" (OuterVolumeSpecName: "kube-api-access-k4kxm") pod "e8bbed1d-4d6d-448c-88bd-18b6b5c02825" (UID: "e8bbed1d-4d6d-448c-88bd-18b6b5c02825"). InnerVolumeSpecName "kube-api-access-k4kxm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 14:15:03 crc kubenswrapper[4919]: I0109 14:15:03.160023 4919 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e8bbed1d-4d6d-448c-88bd-18b6b5c02825-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 09 14:15:03 crc kubenswrapper[4919]: I0109 14:15:03.160160 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4kxm\" (UniqueName: \"kubernetes.io/projected/e8bbed1d-4d6d-448c-88bd-18b6b5c02825-kube-api-access-k4kxm\") on node \"crc\" DevicePath \"\"" Jan 09 14:15:03 crc kubenswrapper[4919]: I0109 14:15:03.574936 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29466135-2kbq6" event={"ID":"e8bbed1d-4d6d-448c-88bd-18b6b5c02825","Type":"ContainerDied","Data":"424871c02532b0e2997fb8cded94d1bae0c58bef5efdd14dd55a2fe3816805d5"} Jan 09 14:15:03 crc kubenswrapper[4919]: I0109 14:15:03.575010 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="424871c02532b0e2997fb8cded94d1bae0c58bef5efdd14dd55a2fe3816805d5" Jan 09 14:15:03 crc kubenswrapper[4919]: I0109 14:15:03.575465 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29466135-2kbq6" Jan 09 14:15:03 crc kubenswrapper[4919]: I0109 14:15:03.969937 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29466090-q6cvw"] Jan 09 14:15:03 crc kubenswrapper[4919]: I0109 14:15:03.978775 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29466090-q6cvw"] Jan 09 14:15:04 crc kubenswrapper[4919]: I0109 14:15:04.768268 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="607f4472-6658-48ef-ba52-4b6b097eaa2e" path="/var/lib/kubelet/pods/607f4472-6658-48ef-ba52-4b6b097eaa2e/volumes" Jan 09 14:15:08 crc kubenswrapper[4919]: I0109 14:15:08.616142 4919 generic.go:334] "Generic (PLEG): container finished" podID="cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1" containerID="d8351a742f98b5b875a5463ee2795951bb67f30551e24465721b3ff3bf73d644" exitCode=0 Jan 09 14:15:08 crc kubenswrapper[4919]: I0109 14:15:08.616253 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v9kt9" event={"ID":"cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1","Type":"ContainerDied","Data":"d8351a742f98b5b875a5463ee2795951bb67f30551e24465721b3ff3bf73d644"} Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.052400 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v9kt9" Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.189706 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1-nova-extra-config-0\") pod \"cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1\" (UID: \"cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1\") " Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.189765 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1-nova-cell1-compute-config-0\") pod \"cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1\" (UID: \"cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1\") " Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.189886 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1-nova-migration-ssh-key-1\") pod \"cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1\" (UID: \"cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1\") " Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.189908 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1-nova-combined-ca-bundle\") pod \"cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1\" (UID: \"cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1\") " Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.189963 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99d95\" (UniqueName: \"kubernetes.io/projected/cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1-kube-api-access-99d95\") pod \"cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1\" (UID: \"cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1\") " Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.190043 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1-nova-cell1-compute-config-1\") pod \"cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1\" (UID: \"cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1\") " Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.190074 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1-ssh-key-openstack-edpm-ipam\") pod \"cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1\" (UID: \"cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1\") " Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.190097 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1-inventory\") pod \"cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1\" (UID: \"cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1\") " Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.190129 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1-nova-migration-ssh-key-0\") pod \"cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1\" (UID: \"cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1\") " Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.196511 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1" (UID: "cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.197064 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1-kube-api-access-99d95" (OuterVolumeSpecName: "kube-api-access-99d95") pod "cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1" (UID: "cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1"). InnerVolumeSpecName "kube-api-access-99d95". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.220336 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1" (UID: "cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.221576 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1" (UID: "cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.222824 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1" (UID: "cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.223957 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1" (UID: "cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.225833 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1-inventory" (OuterVolumeSpecName: "inventory") pod "cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1" (UID: "cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.234121 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1" (UID: "cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.241055 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1" (UID: "cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.292116 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99d95\" (UniqueName: \"kubernetes.io/projected/cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1-kube-api-access-99d95\") on node \"crc\" DevicePath \"\"" Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.292390 4919 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.292479 4919 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.292540 4919 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1-inventory\") on node \"crc\" DevicePath \"\"" Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.292699 4919 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.292761 4919 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.292829 4919 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.292882 4919 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.292940 4919 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.635921 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v9kt9" event={"ID":"cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1","Type":"ContainerDied","Data":"06a5837035c66469322a4ba0eb6f79a97dd48288d776c86a34c5be66e03ff383"} Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.636265 4919 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="06a5837035c66469322a4ba0eb6f79a97dd48288d776c86a34c5be66e03ff383" Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.635983 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v9kt9" Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.719189 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-k6ck6"] Jan 09 14:15:10 crc kubenswrapper[4919]: E0109 14:15:10.719813 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8bbed1d-4d6d-448c-88bd-18b6b5c02825" containerName="collect-profiles" Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.719836 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8bbed1d-4d6d-448c-88bd-18b6b5c02825" containerName="collect-profiles" Jan 09 14:15:10 crc kubenswrapper[4919]: E0109 14:15:10.719893 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.719904 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.720155 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8bbed1d-4d6d-448c-88bd-18b6b5c02825" containerName="collect-profiles" Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.720192 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.721019 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-k6ck6" Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.723951 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.724314 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.724586 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.724656 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.729114 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-k6ck6"] Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.729176 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-69fb8" Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.801338 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/1397ace9-1e0e-4acc-b043-3e1f13244746-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-k6ck6\" (UID: \"1397ace9-1e0e-4acc-b043-3e1f13244746\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-k6ck6" Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.801449 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crtqg\" (UniqueName: \"kubernetes.io/projected/1397ace9-1e0e-4acc-b043-3e1f13244746-kube-api-access-crtqg\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-k6ck6\" (UID: \"1397ace9-1e0e-4acc-b043-3e1f13244746\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-k6ck6" Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.801517 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/1397ace9-1e0e-4acc-b043-3e1f13244746-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-k6ck6\" (UID: \"1397ace9-1e0e-4acc-b043-3e1f13244746\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-k6ck6" Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.801546 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1397ace9-1e0e-4acc-b043-3e1f13244746-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-k6ck6\" (UID: \"1397ace9-1e0e-4acc-b043-3e1f13244746\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-k6ck6" Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.801663 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/1397ace9-1e0e-4acc-b043-3e1f13244746-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-k6ck6\" (UID: \"1397ace9-1e0e-4acc-b043-3e1f13244746\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-k6ck6" Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.801709 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1397ace9-1e0e-4acc-b043-3e1f13244746-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-k6ck6\" (UID: \"1397ace9-1e0e-4acc-b043-3e1f13244746\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-k6ck6" Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.801742 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1397ace9-1e0e-4acc-b043-3e1f13244746-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-k6ck6\" (UID: \"1397ace9-1e0e-4acc-b043-3e1f13244746\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-k6ck6" Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.903448 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1397ace9-1e0e-4acc-b043-3e1f13244746-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-k6ck6\" (UID: \"1397ace9-1e0e-4acc-b043-3e1f13244746\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-k6ck6" Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.903538 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/1397ace9-1e0e-4acc-b043-3e1f13244746-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-k6ck6\" (UID: \"1397ace9-1e0e-4acc-b043-3e1f13244746\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-k6ck6" Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.903578 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crtqg\" (UniqueName: \"kubernetes.io/projected/1397ace9-1e0e-4acc-b043-3e1f13244746-kube-api-access-crtqg\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-k6ck6\" (UID: \"1397ace9-1e0e-4acc-b043-3e1f13244746\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-k6ck6" Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.903616 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/1397ace9-1e0e-4acc-b043-3e1f13244746-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-k6ck6\" (UID: \"1397ace9-1e0e-4acc-b043-3e1f13244746\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-k6ck6" Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.903642 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1397ace9-1e0e-4acc-b043-3e1f13244746-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-k6ck6\" (UID: \"1397ace9-1e0e-4acc-b043-3e1f13244746\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-k6ck6" Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.903715 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: 
\"kubernetes.io/secret/1397ace9-1e0e-4acc-b043-3e1f13244746-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-k6ck6\" (UID: \"1397ace9-1e0e-4acc-b043-3e1f13244746\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-k6ck6" Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.903740 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1397ace9-1e0e-4acc-b043-3e1f13244746-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-k6ck6\" (UID: \"1397ace9-1e0e-4acc-b043-3e1f13244746\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-k6ck6" Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.908606 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1397ace9-1e0e-4acc-b043-3e1f13244746-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-k6ck6\" (UID: \"1397ace9-1e0e-4acc-b043-3e1f13244746\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-k6ck6" Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.908660 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/1397ace9-1e0e-4acc-b043-3e1f13244746-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-k6ck6\" (UID: \"1397ace9-1e0e-4acc-b043-3e1f13244746\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-k6ck6" Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.909709 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1397ace9-1e0e-4acc-b043-3e1f13244746-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-k6ck6\" (UID: \"1397ace9-1e0e-4acc-b043-3e1f13244746\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-k6ck6" Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.910141 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1397ace9-1e0e-4acc-b043-3e1f13244746-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-k6ck6\" (UID: \"1397ace9-1e0e-4acc-b043-3e1f13244746\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-k6ck6" Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.910602 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/1397ace9-1e0e-4acc-b043-3e1f13244746-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-k6ck6\" (UID: \"1397ace9-1e0e-4acc-b043-3e1f13244746\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-k6ck6" Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.911331 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/1397ace9-1e0e-4acc-b043-3e1f13244746-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-k6ck6\" (UID: \"1397ace9-1e0e-4acc-b043-3e1f13244746\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-k6ck6" Jan 09 14:15:10 crc kubenswrapper[4919]: I0109 14:15:10.921339 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crtqg\" 
(UniqueName: \"kubernetes.io/projected/1397ace9-1e0e-4acc-b043-3e1f13244746-kube-api-access-crtqg\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-k6ck6\" (UID: \"1397ace9-1e0e-4acc-b043-3e1f13244746\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-k6ck6" Jan 09 14:15:11 crc kubenswrapper[4919]: I0109 14:15:11.037490 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-k6ck6" Jan 09 14:15:11 crc kubenswrapper[4919]: I0109 14:15:11.663132 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-k6ck6"] Jan 09 14:15:11 crc kubenswrapper[4919]: I0109 14:15:11.670706 4919 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 09 14:15:12 crc kubenswrapper[4919]: I0109 14:15:12.655077 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-k6ck6" event={"ID":"1397ace9-1e0e-4acc-b043-3e1f13244746","Type":"ContainerStarted","Data":"0560a7e89e77e48698cf98e6523120c5a1113397232eb9bb4613e3faca82fcb5"} Jan 09 14:15:13 crc kubenswrapper[4919]: I0109 14:15:13.665673 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-k6ck6" event={"ID":"1397ace9-1e0e-4acc-b043-3e1f13244746","Type":"ContainerStarted","Data":"b1e8295d7a16530f0361fede702219f84cb9a180336ec99a65bcca2aeb94460a"} Jan 09 14:15:13 crc kubenswrapper[4919]: I0109 14:15:13.692000 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-k6ck6" podStartSLOduration=2.836993229 podStartE2EDuration="3.691982972s" podCreationTimestamp="2026-01-09 14:15:10 +0000 UTC" firstStartedPulling="2026-01-09 14:15:11.670392558 +0000 UTC m=+2691.218232008" lastFinishedPulling="2026-01-09 14:15:12.525382301 +0000 UTC m=+2692.073221751" observedRunningTime="2026-01-09 14:15:13.690484754 +0000 UTC m=+2693.238324204" watchObservedRunningTime="2026-01-09 14:15:13.691982972 +0000 UTC m=+2693.239822412" Jan 09 14:15:43 crc kubenswrapper[4919]: I0109 14:15:43.801912 4919 scope.go:117] "RemoveContainer" containerID="24a4cc1664f94dac46c1fdff979b2d16a4d15968cd735a7bd07c70d5deac7ca4" Jan 09 14:15:50 crc kubenswrapper[4919]: I0109 14:15:50.069457 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-njw5c"] Jan 09 14:15:50 crc kubenswrapper[4919]: I0109 14:15:50.072421 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-njw5c" Jan 09 14:15:50 crc kubenswrapper[4919]: I0109 14:15:50.080760 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-njw5c"] Jan 09 14:15:50 crc kubenswrapper[4919]: I0109 14:15:50.123314 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd688222-fc4b-40c8-8f03-78a91a7a671c-catalog-content\") pod \"redhat-operators-njw5c\" (UID: \"dd688222-fc4b-40c8-8f03-78a91a7a671c\") " pod="openshift-marketplace/redhat-operators-njw5c" Jan 09 14:15:50 crc kubenswrapper[4919]: I0109 14:15:50.123395 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd688222-fc4b-40c8-8f03-78a91a7a671c-utilities\") pod \"redhat-operators-njw5c\" (UID: \"dd688222-fc4b-40c8-8f03-78a91a7a671c\") " pod="openshift-marketplace/redhat-operators-njw5c" Jan 09 14:15:50 crc kubenswrapper[4919]: I0109 14:15:50.123725 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc7mn\" (UniqueName: \"kubernetes.io/projected/dd688222-fc4b-40c8-8f03-78a91a7a671c-kube-api-access-gc7mn\") pod \"redhat-operators-njw5c\" (UID: \"dd688222-fc4b-40c8-8f03-78a91a7a671c\") " pod="openshift-marketplace/redhat-operators-njw5c" Jan 09 14:15:50 crc kubenswrapper[4919]: I0109 14:15:50.226053 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd688222-fc4b-40c8-8f03-78a91a7a671c-utilities\") pod \"redhat-operators-njw5c\" (UID: \"dd688222-fc4b-40c8-8f03-78a91a7a671c\") " pod="openshift-marketplace/redhat-operators-njw5c" Jan 09 14:15:50 crc kubenswrapper[4919]: I0109 14:15:50.226296 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gc7mn\" (UniqueName: \"kubernetes.io/projected/dd688222-fc4b-40c8-8f03-78a91a7a671c-kube-api-access-gc7mn\") pod \"redhat-operators-njw5c\" (UID: \"dd688222-fc4b-40c8-8f03-78a91a7a671c\") " pod="openshift-marketplace/redhat-operators-njw5c" Jan 09 14:15:50 crc kubenswrapper[4919]: I0109 14:15:50.226789 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd688222-fc4b-40c8-8f03-78a91a7a671c-catalog-content\") pod \"redhat-operators-njw5c\" (UID: \"dd688222-fc4b-40c8-8f03-78a91a7a671c\") " pod="openshift-marketplace/redhat-operators-njw5c" Jan 09 14:15:50 crc kubenswrapper[4919]: I0109 14:15:50.226822 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd688222-fc4b-40c8-8f03-78a91a7a671c-utilities\") pod \"redhat-operators-njw5c\" (UID: \"dd688222-fc4b-40c8-8f03-78a91a7a671c\") " pod="openshift-marketplace/redhat-operators-njw5c" Jan 09 14:15:50 crc kubenswrapper[4919]: I0109 14:15:50.227430 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd688222-fc4b-40c8-8f03-78a91a7a671c-catalog-content\") pod \"redhat-operators-njw5c\" (UID: \"dd688222-fc4b-40c8-8f03-78a91a7a671c\") " pod="openshift-marketplace/redhat-operators-njw5c" Jan 09 14:15:50 crc kubenswrapper[4919]: I0109 14:15:50.292604 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-gc7mn\" (UniqueName: \"kubernetes.io/projected/dd688222-fc4b-40c8-8f03-78a91a7a671c-kube-api-access-gc7mn\") pod \"redhat-operators-njw5c\" (UID: \"dd688222-fc4b-40c8-8f03-78a91a7a671c\") " pod="openshift-marketplace/redhat-operators-njw5c" Jan 09 14:15:50 crc kubenswrapper[4919]: I0109 14:15:50.394673 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-njw5c" Jan 09 14:15:50 crc kubenswrapper[4919]: I0109 14:15:50.919003 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-njw5c"] Jan 09 14:15:50 crc kubenswrapper[4919]: I0109 14:15:50.984713 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-njw5c" event={"ID":"dd688222-fc4b-40c8-8f03-78a91a7a671c","Type":"ContainerStarted","Data":"18de545e6fc573bd09bb89712a3ef223ca1803826aad99f93b88a8a29a2b50cb"} Jan 09 14:15:51 crc kubenswrapper[4919]: I0109 14:15:51.246775 4919 patch_prober.go:28] interesting pod/machine-config-daemon-9m5lv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 14:15:51 crc kubenswrapper[4919]: I0109 14:15:51.247116 4919 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 14:15:51 crc kubenswrapper[4919]: I0109 14:15:51.996363 4919 generic.go:334] "Generic (PLEG): container finished" podID="dd688222-fc4b-40c8-8f03-78a91a7a671c" containerID="65eb6ff09f978137f0eb3b63059d3d279376035648e415c778a5d577b877c046" exitCode=0 Jan 09 14:15:51 crc kubenswrapper[4919]: I0109 14:15:51.996409 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-njw5c" event={"ID":"dd688222-fc4b-40c8-8f03-78a91a7a671c","Type":"ContainerDied","Data":"65eb6ff09f978137f0eb3b63059d3d279376035648e415c778a5d577b877c046"} Jan 09 14:15:54 crc kubenswrapper[4919]: I0109 14:15:54.018104 4919 generic.go:334] "Generic (PLEG): container finished" podID="dd688222-fc4b-40c8-8f03-78a91a7a671c" containerID="19575d3435957de4b949514bcc91922c6ba8adcf745c28fb43d0dba7057bd72f" exitCode=0 Jan 09 14:15:54 crc kubenswrapper[4919]: I0109 14:15:54.018271 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-njw5c" event={"ID":"dd688222-fc4b-40c8-8f03-78a91a7a671c","Type":"ContainerDied","Data":"19575d3435957de4b949514bcc91922c6ba8adcf745c28fb43d0dba7057bd72f"} Jan 09 14:15:56 crc kubenswrapper[4919]: I0109 14:15:56.038078 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-njw5c" event={"ID":"dd688222-fc4b-40c8-8f03-78a91a7a671c","Type":"ContainerStarted","Data":"33e5cd33cee76e91e505239a74fb9db1c08c72892fabc7324b2cab4d1be58a1d"} Jan 09 14:15:56 crc kubenswrapper[4919]: I0109 14:15:56.060160 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-njw5c" podStartSLOduration=3.215002719 podStartE2EDuration="6.060135736s" podCreationTimestamp="2026-01-09 14:15:50 +0000 UTC" firstStartedPulling="2026-01-09 14:15:51.998445146 +0000 UTC m=+2731.546284606" 
lastFinishedPulling="2026-01-09 14:15:54.843578173 +0000 UTC m=+2734.391417623" observedRunningTime="2026-01-09 14:15:56.05280034 +0000 UTC m=+2735.600639800" watchObservedRunningTime="2026-01-09 14:15:56.060135736 +0000 UTC m=+2735.607975186" Jan 09 14:16:00 crc kubenswrapper[4919]: I0109 14:16:00.395124 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-njw5c" Jan 09 14:16:00 crc kubenswrapper[4919]: I0109 14:16:00.395393 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-njw5c" Jan 09 14:16:00 crc kubenswrapper[4919]: I0109 14:16:00.455539 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-njw5c" Jan 09 14:16:01 crc kubenswrapper[4919]: I0109 14:16:01.196165 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-njw5c" Jan 09 14:16:02 crc kubenswrapper[4919]: I0109 14:16:02.818147 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-njw5c"] Jan 09 14:16:03 crc kubenswrapper[4919]: I0109 14:16:03.098247 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-njw5c" podUID="dd688222-fc4b-40c8-8f03-78a91a7a671c" containerName="registry-server" containerID="cri-o://33e5cd33cee76e91e505239a74fb9db1c08c72892fabc7324b2cab4d1be58a1d" gracePeriod=2 Jan 09 14:16:03 crc kubenswrapper[4919]: I0109 14:16:03.578775 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-njw5c" Jan 09 14:16:03 crc kubenswrapper[4919]: I0109 14:16:03.703320 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gc7mn\" (UniqueName: \"kubernetes.io/projected/dd688222-fc4b-40c8-8f03-78a91a7a671c-kube-api-access-gc7mn\") pod \"dd688222-fc4b-40c8-8f03-78a91a7a671c\" (UID: \"dd688222-fc4b-40c8-8f03-78a91a7a671c\") " Jan 09 14:16:03 crc kubenswrapper[4919]: I0109 14:16:03.703534 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd688222-fc4b-40c8-8f03-78a91a7a671c-catalog-content\") pod \"dd688222-fc4b-40c8-8f03-78a91a7a671c\" (UID: \"dd688222-fc4b-40c8-8f03-78a91a7a671c\") " Jan 09 14:16:03 crc kubenswrapper[4919]: I0109 14:16:03.703770 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd688222-fc4b-40c8-8f03-78a91a7a671c-utilities\") pod \"dd688222-fc4b-40c8-8f03-78a91a7a671c\" (UID: \"dd688222-fc4b-40c8-8f03-78a91a7a671c\") " Jan 09 14:16:03 crc kubenswrapper[4919]: I0109 14:16:03.704536 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd688222-fc4b-40c8-8f03-78a91a7a671c-utilities" (OuterVolumeSpecName: "utilities") pod "dd688222-fc4b-40c8-8f03-78a91a7a671c" (UID: "dd688222-fc4b-40c8-8f03-78a91a7a671c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 14:16:03 crc kubenswrapper[4919]: I0109 14:16:03.710036 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd688222-fc4b-40c8-8f03-78a91a7a671c-kube-api-access-gc7mn" (OuterVolumeSpecName: "kube-api-access-gc7mn") pod "dd688222-fc4b-40c8-8f03-78a91a7a671c" (UID: "dd688222-fc4b-40c8-8f03-78a91a7a671c"). InnerVolumeSpecName "kube-api-access-gc7mn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 14:16:03 crc kubenswrapper[4919]: I0109 14:16:03.806355 4919 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd688222-fc4b-40c8-8f03-78a91a7a671c-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 14:16:03 crc kubenswrapper[4919]: I0109 14:16:03.806408 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gc7mn\" (UniqueName: \"kubernetes.io/projected/dd688222-fc4b-40c8-8f03-78a91a7a671c-kube-api-access-gc7mn\") on node \"crc\" DevicePath \"\"" Jan 09 14:16:03 crc kubenswrapper[4919]: I0109 14:16:03.826151 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd688222-fc4b-40c8-8f03-78a91a7a671c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dd688222-fc4b-40c8-8f03-78a91a7a671c" (UID: "dd688222-fc4b-40c8-8f03-78a91a7a671c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 14:16:03 crc kubenswrapper[4919]: I0109 14:16:03.908445 4919 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd688222-fc4b-40c8-8f03-78a91a7a671c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 14:16:04 crc kubenswrapper[4919]: I0109 14:16:04.111940 4919 generic.go:334] "Generic (PLEG): container finished" podID="dd688222-fc4b-40c8-8f03-78a91a7a671c" containerID="33e5cd33cee76e91e505239a74fb9db1c08c72892fabc7324b2cab4d1be58a1d" exitCode=0 Jan 09 14:16:04 crc kubenswrapper[4919]: I0109 14:16:04.112002 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-njw5c" event={"ID":"dd688222-fc4b-40c8-8f03-78a91a7a671c","Type":"ContainerDied","Data":"33e5cd33cee76e91e505239a74fb9db1c08c72892fabc7324b2cab4d1be58a1d"} Jan 09 14:16:04 crc kubenswrapper[4919]: I0109 14:16:04.112026 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-njw5c" Jan 09 14:16:04 crc kubenswrapper[4919]: I0109 14:16:04.112054 4919 scope.go:117] "RemoveContainer" containerID="33e5cd33cee76e91e505239a74fb9db1c08c72892fabc7324b2cab4d1be58a1d" Jan 09 14:16:04 crc kubenswrapper[4919]: I0109 14:16:04.112040 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-njw5c" event={"ID":"dd688222-fc4b-40c8-8f03-78a91a7a671c","Type":"ContainerDied","Data":"18de545e6fc573bd09bb89712a3ef223ca1803826aad99f93b88a8a29a2b50cb"} Jan 09 14:16:04 crc kubenswrapper[4919]: I0109 14:16:04.133319 4919 scope.go:117] "RemoveContainer" containerID="19575d3435957de4b949514bcc91922c6ba8adcf745c28fb43d0dba7057bd72f" Jan 09 14:16:04 crc kubenswrapper[4919]: I0109 14:16:04.150044 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-njw5c"] Jan 09 14:16:04 crc kubenswrapper[4919]: I0109 14:16:04.158041 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-njw5c"] Jan 09 14:16:04 crc kubenswrapper[4919]: I0109 14:16:04.179976 4919 scope.go:117] "RemoveContainer" containerID="65eb6ff09f978137f0eb3b63059d3d279376035648e415c778a5d577b877c046" Jan 09 14:16:04 crc kubenswrapper[4919]: I0109 14:16:04.202942 4919 scope.go:117] "RemoveContainer" containerID="33e5cd33cee76e91e505239a74fb9db1c08c72892fabc7324b2cab4d1be58a1d" Jan 09 14:16:04 crc kubenswrapper[4919]: E0109 14:16:04.203520 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33e5cd33cee76e91e505239a74fb9db1c08c72892fabc7324b2cab4d1be58a1d\": container with ID starting with 33e5cd33cee76e91e505239a74fb9db1c08c72892fabc7324b2cab4d1be58a1d not found: ID does not exist" containerID="33e5cd33cee76e91e505239a74fb9db1c08c72892fabc7324b2cab4d1be58a1d" Jan 09 14:16:04 crc kubenswrapper[4919]: I0109 14:16:04.203651 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33e5cd33cee76e91e505239a74fb9db1c08c72892fabc7324b2cab4d1be58a1d"} err="failed to get container status \"33e5cd33cee76e91e505239a74fb9db1c08c72892fabc7324b2cab4d1be58a1d\": rpc error: code = NotFound desc = could not find container \"33e5cd33cee76e91e505239a74fb9db1c08c72892fabc7324b2cab4d1be58a1d\": container with ID starting with 33e5cd33cee76e91e505239a74fb9db1c08c72892fabc7324b2cab4d1be58a1d not found: ID does not exist" Jan 09 14:16:04 crc kubenswrapper[4919]: I0109 14:16:04.203759 4919 scope.go:117] "RemoveContainer" containerID="19575d3435957de4b949514bcc91922c6ba8adcf745c28fb43d0dba7057bd72f" Jan 09 14:16:04 crc kubenswrapper[4919]: E0109 14:16:04.204398 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19575d3435957de4b949514bcc91922c6ba8adcf745c28fb43d0dba7057bd72f\": container with ID starting with 19575d3435957de4b949514bcc91922c6ba8adcf745c28fb43d0dba7057bd72f not found: ID does not exist" containerID="19575d3435957de4b949514bcc91922c6ba8adcf745c28fb43d0dba7057bd72f" Jan 09 14:16:04 crc kubenswrapper[4919]: I0109 14:16:04.204435 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19575d3435957de4b949514bcc91922c6ba8adcf745c28fb43d0dba7057bd72f"} err="failed to get container status \"19575d3435957de4b949514bcc91922c6ba8adcf745c28fb43d0dba7057bd72f\": rpc error: code = NotFound desc = could not find container 
\"19575d3435957de4b949514bcc91922c6ba8adcf745c28fb43d0dba7057bd72f\": container with ID starting with 19575d3435957de4b949514bcc91922c6ba8adcf745c28fb43d0dba7057bd72f not found: ID does not exist" Jan 09 14:16:04 crc kubenswrapper[4919]: I0109 14:16:04.204458 4919 scope.go:117] "RemoveContainer" containerID="65eb6ff09f978137f0eb3b63059d3d279376035648e415c778a5d577b877c046" Jan 09 14:16:04 crc kubenswrapper[4919]: E0109 14:16:04.204795 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65eb6ff09f978137f0eb3b63059d3d279376035648e415c778a5d577b877c046\": container with ID starting with 65eb6ff09f978137f0eb3b63059d3d279376035648e415c778a5d577b877c046 not found: ID does not exist" containerID="65eb6ff09f978137f0eb3b63059d3d279376035648e415c778a5d577b877c046" Jan 09 14:16:04 crc kubenswrapper[4919]: I0109 14:16:04.204995 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65eb6ff09f978137f0eb3b63059d3d279376035648e415c778a5d577b877c046"} err="failed to get container status \"65eb6ff09f978137f0eb3b63059d3d279376035648e415c778a5d577b877c046\": rpc error: code = NotFound desc = could not find container \"65eb6ff09f978137f0eb3b63059d3d279376035648e415c778a5d577b877c046\": container with ID starting with 65eb6ff09f978137f0eb3b63059d3d279376035648e415c778a5d577b877c046 not found: ID does not exist" Jan 09 14:16:04 crc kubenswrapper[4919]: I0109 14:16:04.761901 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd688222-fc4b-40c8-8f03-78a91a7a671c" path="/var/lib/kubelet/pods/dd688222-fc4b-40c8-8f03-78a91a7a671c/volumes" Jan 09 14:16:21 crc kubenswrapper[4919]: I0109 14:16:21.246964 4919 patch_prober.go:28] interesting pod/machine-config-daemon-9m5lv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 14:16:21 crc kubenswrapper[4919]: I0109 14:16:21.247529 4919 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 14:16:51 crc kubenswrapper[4919]: I0109 14:16:51.247080 4919 patch_prober.go:28] interesting pod/machine-config-daemon-9m5lv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 14:16:51 crc kubenswrapper[4919]: I0109 14:16:51.247714 4919 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 14:16:51 crc kubenswrapper[4919]: I0109 14:16:51.247804 4919 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" Jan 09 14:16:51 crc kubenswrapper[4919]: I0109 14:16:51.248793 4919 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
Jan 09 14:16:51 crc kubenswrapper[4919]: I0109 14:16:51.248793 4919 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"679d9025ae6777d87901a54436516242183495cc09d48edeeb0c1ab27d036468"} pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 09 14:16:51 crc kubenswrapper[4919]: I0109 14:16:51.248868 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" containerID="cri-o://679d9025ae6777d87901a54436516242183495cc09d48edeeb0c1ab27d036468" gracePeriod=600
Jan 09 14:16:51 crc kubenswrapper[4919]: I0109 14:16:51.545963 4919 generic.go:334] "Generic (PLEG): container finished" podID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerID="679d9025ae6777d87901a54436516242183495cc09d48edeeb0c1ab27d036468" exitCode=0
Jan 09 14:16:51 crc kubenswrapper[4919]: I0109 14:16:51.546033 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" event={"ID":"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c","Type":"ContainerDied","Data":"679d9025ae6777d87901a54436516242183495cc09d48edeeb0c1ab27d036468"}
Jan 09 14:16:51 crc kubenswrapper[4919]: I0109 14:16:51.546335 4919 scope.go:117] "RemoveContainer" containerID="cb9cc8141ac739eabcc1054e82a3a41d2dac68fb980f94b01a302b9c1cfa7794"
Jan 09 14:16:52 crc kubenswrapper[4919]: I0109 14:16:52.558152 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" event={"ID":"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c","Type":"ContainerStarted","Data":"c34e6f5119e50b5ceea7bd407492c9aa47955192912fb6c846845e2b4834c11f"}
Jan 09 14:17:48 crc kubenswrapper[4919]: I0109 14:17:48.047202 4919 generic.go:334] "Generic (PLEG): container finished" podID="1397ace9-1e0e-4acc-b043-3e1f13244746" containerID="b1e8295d7a16530f0361fede702219f84cb9a180336ec99a65bcca2aeb94460a" exitCode=0
Jan 09 14:17:48 crc kubenswrapper[4919]: I0109 14:17:48.047250 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-k6ck6" event={"ID":"1397ace9-1e0e-4acc-b043-3e1f13244746","Type":"ContainerDied","Data":"b1e8295d7a16530f0361fede702219f84cb9a180336ec99a65bcca2aeb94460a"}
Jan 09 14:17:49 crc kubenswrapper[4919]: I0109 14:17:49.431074 4919 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-k6ck6" Jan 09 14:17:49 crc kubenswrapper[4919]: I0109 14:17:49.531720 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/1397ace9-1e0e-4acc-b043-3e1f13244746-ceilometer-compute-config-data-1\") pod \"1397ace9-1e0e-4acc-b043-3e1f13244746\" (UID: \"1397ace9-1e0e-4acc-b043-3e1f13244746\") " Jan 09 14:17:49 crc kubenswrapper[4919]: I0109 14:17:49.532158 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1397ace9-1e0e-4acc-b043-3e1f13244746-inventory\") pod \"1397ace9-1e0e-4acc-b043-3e1f13244746\" (UID: \"1397ace9-1e0e-4acc-b043-3e1f13244746\") " Jan 09 14:17:49 crc kubenswrapper[4919]: I0109 14:17:49.532232 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/1397ace9-1e0e-4acc-b043-3e1f13244746-ceilometer-compute-config-data-0\") pod \"1397ace9-1e0e-4acc-b043-3e1f13244746\" (UID: \"1397ace9-1e0e-4acc-b043-3e1f13244746\") " Jan 09 14:17:49 crc kubenswrapper[4919]: I0109 14:17:49.532285 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crtqg\" (UniqueName: \"kubernetes.io/projected/1397ace9-1e0e-4acc-b043-3e1f13244746-kube-api-access-crtqg\") pod \"1397ace9-1e0e-4acc-b043-3e1f13244746\" (UID: \"1397ace9-1e0e-4acc-b043-3e1f13244746\") " Jan 09 14:17:49 crc kubenswrapper[4919]: I0109 14:17:49.532353 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/1397ace9-1e0e-4acc-b043-3e1f13244746-ceilometer-compute-config-data-2\") pod \"1397ace9-1e0e-4acc-b043-3e1f13244746\" (UID: \"1397ace9-1e0e-4acc-b043-3e1f13244746\") " Jan 09 14:17:49 crc kubenswrapper[4919]: I0109 14:17:49.533199 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1397ace9-1e0e-4acc-b043-3e1f13244746-telemetry-combined-ca-bundle\") pod \"1397ace9-1e0e-4acc-b043-3e1f13244746\" (UID: \"1397ace9-1e0e-4acc-b043-3e1f13244746\") " Jan 09 14:17:49 crc kubenswrapper[4919]: I0109 14:17:49.533255 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1397ace9-1e0e-4acc-b043-3e1f13244746-ssh-key-openstack-edpm-ipam\") pod \"1397ace9-1e0e-4acc-b043-3e1f13244746\" (UID: \"1397ace9-1e0e-4acc-b043-3e1f13244746\") " Jan 09 14:17:49 crc kubenswrapper[4919]: I0109 14:17:49.547197 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1397ace9-1e0e-4acc-b043-3e1f13244746-kube-api-access-crtqg" (OuterVolumeSpecName: "kube-api-access-crtqg") pod "1397ace9-1e0e-4acc-b043-3e1f13244746" (UID: "1397ace9-1e0e-4acc-b043-3e1f13244746"). InnerVolumeSpecName "kube-api-access-crtqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 14:17:49 crc kubenswrapper[4919]: I0109 14:17:49.549875 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1397ace9-1e0e-4acc-b043-3e1f13244746-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "1397ace9-1e0e-4acc-b043-3e1f13244746" (UID: "1397ace9-1e0e-4acc-b043-3e1f13244746"). 
InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 14:17:49 crc kubenswrapper[4919]: I0109 14:17:49.561107 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1397ace9-1e0e-4acc-b043-3e1f13244746-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "1397ace9-1e0e-4acc-b043-3e1f13244746" (UID: "1397ace9-1e0e-4acc-b043-3e1f13244746"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 14:17:49 crc kubenswrapper[4919]: I0109 14:17:49.561841 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1397ace9-1e0e-4acc-b043-3e1f13244746-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "1397ace9-1e0e-4acc-b043-3e1f13244746" (UID: "1397ace9-1e0e-4acc-b043-3e1f13244746"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 14:17:49 crc kubenswrapper[4919]: I0109 14:17:49.565634 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1397ace9-1e0e-4acc-b043-3e1f13244746-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "1397ace9-1e0e-4acc-b043-3e1f13244746" (UID: "1397ace9-1e0e-4acc-b043-3e1f13244746"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 14:17:49 crc kubenswrapper[4919]: I0109 14:17:49.566055 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1397ace9-1e0e-4acc-b043-3e1f13244746-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1397ace9-1e0e-4acc-b043-3e1f13244746" (UID: "1397ace9-1e0e-4acc-b043-3e1f13244746"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 14:17:49 crc kubenswrapper[4919]: I0109 14:17:49.575381 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1397ace9-1e0e-4acc-b043-3e1f13244746-inventory" (OuterVolumeSpecName: "inventory") pod "1397ace9-1e0e-4acc-b043-3e1f13244746" (UID: "1397ace9-1e0e-4acc-b043-3e1f13244746"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 14:17:49 crc kubenswrapper[4919]: I0109 14:17:49.636208 4919 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1397ace9-1e0e-4acc-b043-3e1f13244746-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 14:17:49 crc kubenswrapper[4919]: I0109 14:17:49.636252 4919 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1397ace9-1e0e-4acc-b043-3e1f13244746-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 09 14:17:49 crc kubenswrapper[4919]: I0109 14:17:49.636260 4919 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/1397ace9-1e0e-4acc-b043-3e1f13244746-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 09 14:17:49 crc kubenswrapper[4919]: I0109 14:17:49.636271 4919 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1397ace9-1e0e-4acc-b043-3e1f13244746-inventory\") on node \"crc\" DevicePath \"\"" Jan 09 14:17:49 crc kubenswrapper[4919]: I0109 14:17:49.636280 4919 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/1397ace9-1e0e-4acc-b043-3e1f13244746-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 09 14:17:49 crc kubenswrapper[4919]: I0109 14:17:49.636288 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crtqg\" (UniqueName: \"kubernetes.io/projected/1397ace9-1e0e-4acc-b043-3e1f13244746-kube-api-access-crtqg\") on node \"crc\" DevicePath \"\"" Jan 09 14:17:49 crc kubenswrapper[4919]: I0109 14:17:49.636300 4919 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/1397ace9-1e0e-4acc-b043-3e1f13244746-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 09 14:17:50 crc kubenswrapper[4919]: I0109 14:17:50.068388 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-k6ck6" event={"ID":"1397ace9-1e0e-4acc-b043-3e1f13244746","Type":"ContainerDied","Data":"0560a7e89e77e48698cf98e6523120c5a1113397232eb9bb4613e3faca82fcb5"} Jan 09 14:17:50 crc kubenswrapper[4919]: I0109 14:17:50.068463 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0560a7e89e77e48698cf98e6523120c5a1113397232eb9bb4613e3faca82fcb5" Jan 09 14:17:50 crc kubenswrapper[4919]: I0109 14:17:50.068532 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-k6ck6" Jan 09 14:18:48 crc kubenswrapper[4919]: I0109 14:18:48.851838 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 09 14:18:48 crc kubenswrapper[4919]: E0109 14:18:48.852804 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd688222-fc4b-40c8-8f03-78a91a7a671c" containerName="extract-utilities" Jan 09 14:18:48 crc kubenswrapper[4919]: I0109 14:18:48.852836 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd688222-fc4b-40c8-8f03-78a91a7a671c" containerName="extract-utilities" Jan 09 14:18:48 crc kubenswrapper[4919]: E0109 14:18:48.852852 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd688222-fc4b-40c8-8f03-78a91a7a671c" containerName="registry-server" Jan 09 14:18:48 crc kubenswrapper[4919]: I0109 14:18:48.852857 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd688222-fc4b-40c8-8f03-78a91a7a671c" containerName="registry-server" Jan 09 14:18:48 crc kubenswrapper[4919]: E0109 14:18:48.852877 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1397ace9-1e0e-4acc-b043-3e1f13244746" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 09 14:18:48 crc kubenswrapper[4919]: I0109 14:18:48.852887 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="1397ace9-1e0e-4acc-b043-3e1f13244746" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 09 14:18:48 crc kubenswrapper[4919]: E0109 14:18:48.852918 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd688222-fc4b-40c8-8f03-78a91a7a671c" containerName="extract-content" Jan 09 14:18:48 crc kubenswrapper[4919]: I0109 14:18:48.852925 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd688222-fc4b-40c8-8f03-78a91a7a671c" containerName="extract-content" Jan 09 14:18:48 crc kubenswrapper[4919]: I0109 14:18:48.853185 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="1397ace9-1e0e-4acc-b043-3e1f13244746" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 09 14:18:48 crc kubenswrapper[4919]: I0109 14:18:48.853201 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd688222-fc4b-40c8-8f03-78a91a7a671c" containerName="registry-server" Jan 09 14:18:48 crc kubenswrapper[4919]: I0109 14:18:48.854173 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 09 14:18:48 crc kubenswrapper[4919]: I0109 14:18:48.857307 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 09 14:18:48 crc kubenswrapper[4919]: I0109 14:18:48.857503 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-j6dqk" Jan 09 14:18:48 crc kubenswrapper[4919]: I0109 14:18:48.857314 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 09 14:18:48 crc kubenswrapper[4919]: I0109 14:18:48.860360 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 09 14:18:48 crc kubenswrapper[4919]: I0109 14:18:48.861180 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 09 14:18:48 crc kubenswrapper[4919]: I0109 14:18:48.966737 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/f53c17d7-be4d-4bcf-aea4-2617abf3d9ea-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"f53c17d7-be4d-4bcf-aea4-2617abf3d9ea\") " pod="openstack/tempest-tests-tempest" Jan 09 14:18:48 crc kubenswrapper[4919]: I0109 14:18:48.966783 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f53c17d7-be4d-4bcf-aea4-2617abf3d9ea-config-data\") pod \"tempest-tests-tempest\" (UID: \"f53c17d7-be4d-4bcf-aea4-2617abf3d9ea\") " pod="openstack/tempest-tests-tempest" Jan 09 14:18:48 crc kubenswrapper[4919]: I0109 14:18:48.966812 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/f53c17d7-be4d-4bcf-aea4-2617abf3d9ea-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"f53c17d7-be4d-4bcf-aea4-2617abf3d9ea\") " pod="openstack/tempest-tests-tempest" Jan 09 14:18:48 crc kubenswrapper[4919]: I0109 14:18:48.967560 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f53c17d7-be4d-4bcf-aea4-2617abf3d9ea-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"f53c17d7-be4d-4bcf-aea4-2617abf3d9ea\") " pod="openstack/tempest-tests-tempest" Jan 09 14:18:48 crc kubenswrapper[4919]: I0109 14:18:48.968184 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s9dv\" (UniqueName: \"kubernetes.io/projected/f53c17d7-be4d-4bcf-aea4-2617abf3d9ea-kube-api-access-7s9dv\") pod \"tempest-tests-tempest\" (UID: \"f53c17d7-be4d-4bcf-aea4-2617abf3d9ea\") " pod="openstack/tempest-tests-tempest" Jan 09 14:18:48 crc kubenswrapper[4919]: I0109 14:18:48.968275 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/f53c17d7-be4d-4bcf-aea4-2617abf3d9ea-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"f53c17d7-be4d-4bcf-aea4-2617abf3d9ea\") " pod="openstack/tempest-tests-tempest" Jan 09 14:18:48 crc kubenswrapper[4919]: I0109 14:18:48.968308 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" 
(UniqueName: \"kubernetes.io/secret/f53c17d7-be4d-4bcf-aea4-2617abf3d9ea-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"f53c17d7-be4d-4bcf-aea4-2617abf3d9ea\") " pod="openstack/tempest-tests-tempest" Jan 09 14:18:48 crc kubenswrapper[4919]: I0109 14:18:48.968334 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f53c17d7-be4d-4bcf-aea4-2617abf3d9ea-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"f53c17d7-be4d-4bcf-aea4-2617abf3d9ea\") " pod="openstack/tempest-tests-tempest" Jan 09 14:18:48 crc kubenswrapper[4919]: I0109 14:18:48.968611 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"tempest-tests-tempest\" (UID: \"f53c17d7-be4d-4bcf-aea4-2617abf3d9ea\") " pod="openstack/tempest-tests-tempest" Jan 09 14:18:49 crc kubenswrapper[4919]: I0109 14:18:49.071310 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"tempest-tests-tempest\" (UID: \"f53c17d7-be4d-4bcf-aea4-2617abf3d9ea\") " pod="openstack/tempest-tests-tempest" Jan 09 14:18:49 crc kubenswrapper[4919]: I0109 14:18:49.071388 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/f53c17d7-be4d-4bcf-aea4-2617abf3d9ea-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"f53c17d7-be4d-4bcf-aea4-2617abf3d9ea\") " pod="openstack/tempest-tests-tempest" Jan 09 14:18:49 crc kubenswrapper[4919]: I0109 14:18:49.071414 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f53c17d7-be4d-4bcf-aea4-2617abf3d9ea-config-data\") pod \"tempest-tests-tempest\" (UID: \"f53c17d7-be4d-4bcf-aea4-2617abf3d9ea\") " pod="openstack/tempest-tests-tempest" Jan 09 14:18:49 crc kubenswrapper[4919]: I0109 14:18:49.071443 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/f53c17d7-be4d-4bcf-aea4-2617abf3d9ea-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"f53c17d7-be4d-4bcf-aea4-2617abf3d9ea\") " pod="openstack/tempest-tests-tempest" Jan 09 14:18:49 crc kubenswrapper[4919]: I0109 14:18:49.071487 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f53c17d7-be4d-4bcf-aea4-2617abf3d9ea-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"f53c17d7-be4d-4bcf-aea4-2617abf3d9ea\") " pod="openstack/tempest-tests-tempest" Jan 09 14:18:49 crc kubenswrapper[4919]: I0109 14:18:49.071539 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7s9dv\" (UniqueName: \"kubernetes.io/projected/f53c17d7-be4d-4bcf-aea4-2617abf3d9ea-kube-api-access-7s9dv\") pod \"tempest-tests-tempest\" (UID: \"f53c17d7-be4d-4bcf-aea4-2617abf3d9ea\") " pod="openstack/tempest-tests-tempest" Jan 09 14:18:49 crc kubenswrapper[4919]: I0109 14:18:49.071572 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/f53c17d7-be4d-4bcf-aea4-2617abf3d9ea-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: 
\"f53c17d7-be4d-4bcf-aea4-2617abf3d9ea\") " pod="openstack/tempest-tests-tempest" Jan 09 14:18:49 crc kubenswrapper[4919]: I0109 14:18:49.071596 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f53c17d7-be4d-4bcf-aea4-2617abf3d9ea-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"f53c17d7-be4d-4bcf-aea4-2617abf3d9ea\") " pod="openstack/tempest-tests-tempest" Jan 09 14:18:49 crc kubenswrapper[4919]: I0109 14:18:49.071618 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f53c17d7-be4d-4bcf-aea4-2617abf3d9ea-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"f53c17d7-be4d-4bcf-aea4-2617abf3d9ea\") " pod="openstack/tempest-tests-tempest" Jan 09 14:18:49 crc kubenswrapper[4919]: I0109 14:18:49.071725 4919 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"tempest-tests-tempest\" (UID: \"f53c17d7-be4d-4bcf-aea4-2617abf3d9ea\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/tempest-tests-tempest" Jan 09 14:18:49 crc kubenswrapper[4919]: I0109 14:18:49.073245 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/f53c17d7-be4d-4bcf-aea4-2617abf3d9ea-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"f53c17d7-be4d-4bcf-aea4-2617abf3d9ea\") " pod="openstack/tempest-tests-tempest" Jan 09 14:18:49 crc kubenswrapper[4919]: I0109 14:18:49.073490 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f53c17d7-be4d-4bcf-aea4-2617abf3d9ea-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"f53c17d7-be4d-4bcf-aea4-2617abf3d9ea\") " pod="openstack/tempest-tests-tempest" Jan 09 14:18:49 crc kubenswrapper[4919]: I0109 14:18:49.073644 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/f53c17d7-be4d-4bcf-aea4-2617abf3d9ea-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"f53c17d7-be4d-4bcf-aea4-2617abf3d9ea\") " pod="openstack/tempest-tests-tempest" Jan 09 14:18:49 crc kubenswrapper[4919]: I0109 14:18:49.075602 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f53c17d7-be4d-4bcf-aea4-2617abf3d9ea-config-data\") pod \"tempest-tests-tempest\" (UID: \"f53c17d7-be4d-4bcf-aea4-2617abf3d9ea\") " pod="openstack/tempest-tests-tempest" Jan 09 14:18:49 crc kubenswrapper[4919]: I0109 14:18:49.078112 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f53c17d7-be4d-4bcf-aea4-2617abf3d9ea-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"f53c17d7-be4d-4bcf-aea4-2617abf3d9ea\") " pod="openstack/tempest-tests-tempest" Jan 09 14:18:49 crc kubenswrapper[4919]: I0109 14:18:49.078605 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f53c17d7-be4d-4bcf-aea4-2617abf3d9ea-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"f53c17d7-be4d-4bcf-aea4-2617abf3d9ea\") " pod="openstack/tempest-tests-tempest" Jan 09 14:18:49 crc kubenswrapper[4919]: I0109 14:18:49.082038 
Jan 09 14:18:49 crc kubenswrapper[4919]: I0109 14:18:49.082038 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/f53c17d7-be4d-4bcf-aea4-2617abf3d9ea-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"f53c17d7-be4d-4bcf-aea4-2617abf3d9ea\") " pod="openstack/tempest-tests-tempest"
Jan 09 14:18:49 crc kubenswrapper[4919]: I0109 14:18:49.093018 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s9dv\" (UniqueName: \"kubernetes.io/projected/f53c17d7-be4d-4bcf-aea4-2617abf3d9ea-kube-api-access-7s9dv\") pod \"tempest-tests-tempest\" (UID: \"f53c17d7-be4d-4bcf-aea4-2617abf3d9ea\") " pod="openstack/tempest-tests-tempest"
Jan 09 14:18:49 crc kubenswrapper[4919]: I0109 14:18:49.105453 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"tempest-tests-tempest\" (UID: \"f53c17d7-be4d-4bcf-aea4-2617abf3d9ea\") " pod="openstack/tempest-tests-tempest"
Jan 09 14:18:49 crc kubenswrapper[4919]: I0109 14:18:49.193669 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 09 14:18:49 crc kubenswrapper[4919]: I0109 14:18:49.648440 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"]
Jan 09 14:18:50 crc kubenswrapper[4919]: I0109 14:18:50.581603 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"f53c17d7-be4d-4bcf-aea4-2617abf3d9ea","Type":"ContainerStarted","Data":"be034418b63537d7bb6e696310968d77adb0185560cbe56673a679bb8f205600"}
Jan 09 14:18:51 crc kubenswrapper[4919]: I0109 14:18:51.247182 4919 patch_prober.go:28] interesting pod/machine-config-daemon-9m5lv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 09 14:18:51 crc kubenswrapper[4919]: I0109 14:18:51.247282 4919 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 09 14:19:18 crc kubenswrapper[4919]: I0109 14:19:18.277754 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rxz2p"]
Jan 09 14:19:18 crc kubenswrapper[4919]: I0109 14:19:18.285315 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rxz2p"
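The recurring machine-config-daemon entries are an HTTP liveness probe failing with "connection refused". In miniature, such a probe is an HTTP GET where a 2xx/3xx status counts as healthy and a transport error, like the refused connection here, counts as a failure. A stdlib-only sketch (the URL is taken from the log; the helper is illustrative, not kubelet's prober):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeHTTP performs one liveness-style check: GET the URL and treat any
// 2xx/3xx status as success. A refused connection surfaces as a transport
// error from the client, exactly as in the "Probe failed" entries above.
func probeHTTP(url string, timeout time.Duration) error {
	client := &http.Client{Timeout: timeout}
	resp, err := client.Get(url)
	if err != nil {
		return fmt.Errorf("probe failed: %w", err) // e.g. "connect: connection refused"
	}
	defer resp.Body.Close()
	io.Copy(io.Discard, resp.Body) // drain so the connection can be reused
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		return nil
	}
	return fmt.Errorf("probe failed: HTTP %d", resp.StatusCode)
}

func main() {
	if err := probeHTTP("http://127.0.0.1:8798/health", time.Second); err != nil {
		fmt.Println(err)
	}
}
```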
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rxz2p" Jan 09 14:19:18 crc kubenswrapper[4919]: I0109 14:19:18.289764 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rxz2p"] Jan 09 14:19:18 crc kubenswrapper[4919]: I0109 14:19:18.400553 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92052ac0-1f17-4fb1-9694-cf4c0a600cb4-catalog-content\") pod \"redhat-marketplace-rxz2p\" (UID: \"92052ac0-1f17-4fb1-9694-cf4c0a600cb4\") " pod="openshift-marketplace/redhat-marketplace-rxz2p" Jan 09 14:19:18 crc kubenswrapper[4919]: I0109 14:19:18.400649 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8wq8\" (UniqueName: \"kubernetes.io/projected/92052ac0-1f17-4fb1-9694-cf4c0a600cb4-kube-api-access-h8wq8\") pod \"redhat-marketplace-rxz2p\" (UID: \"92052ac0-1f17-4fb1-9694-cf4c0a600cb4\") " pod="openshift-marketplace/redhat-marketplace-rxz2p" Jan 09 14:19:18 crc kubenswrapper[4919]: I0109 14:19:18.400693 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92052ac0-1f17-4fb1-9694-cf4c0a600cb4-utilities\") pod \"redhat-marketplace-rxz2p\" (UID: \"92052ac0-1f17-4fb1-9694-cf4c0a600cb4\") " pod="openshift-marketplace/redhat-marketplace-rxz2p" Jan 09 14:19:18 crc kubenswrapper[4919]: I0109 14:19:18.502457 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92052ac0-1f17-4fb1-9694-cf4c0a600cb4-catalog-content\") pod \"redhat-marketplace-rxz2p\" (UID: \"92052ac0-1f17-4fb1-9694-cf4c0a600cb4\") " pod="openshift-marketplace/redhat-marketplace-rxz2p" Jan 09 14:19:18 crc kubenswrapper[4919]: I0109 14:19:18.502543 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8wq8\" (UniqueName: \"kubernetes.io/projected/92052ac0-1f17-4fb1-9694-cf4c0a600cb4-kube-api-access-h8wq8\") pod \"redhat-marketplace-rxz2p\" (UID: \"92052ac0-1f17-4fb1-9694-cf4c0a600cb4\") " pod="openshift-marketplace/redhat-marketplace-rxz2p" Jan 09 14:19:18 crc kubenswrapper[4919]: I0109 14:19:18.502569 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92052ac0-1f17-4fb1-9694-cf4c0a600cb4-utilities\") pod \"redhat-marketplace-rxz2p\" (UID: \"92052ac0-1f17-4fb1-9694-cf4c0a600cb4\") " pod="openshift-marketplace/redhat-marketplace-rxz2p" Jan 09 14:19:18 crc kubenswrapper[4919]: I0109 14:19:18.503106 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92052ac0-1f17-4fb1-9694-cf4c0a600cb4-catalog-content\") pod \"redhat-marketplace-rxz2p\" (UID: \"92052ac0-1f17-4fb1-9694-cf4c0a600cb4\") " pod="openshift-marketplace/redhat-marketplace-rxz2p" Jan 09 14:19:18 crc kubenswrapper[4919]: I0109 14:19:18.503190 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92052ac0-1f17-4fb1-9694-cf4c0a600cb4-utilities\") pod \"redhat-marketplace-rxz2p\" (UID: \"92052ac0-1f17-4fb1-9694-cf4c0a600cb4\") " pod="openshift-marketplace/redhat-marketplace-rxz2p" Jan 09 14:19:18 crc kubenswrapper[4919]: I0109 14:19:18.527807 4919 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-h8wq8\" (UniqueName: \"kubernetes.io/projected/92052ac0-1f17-4fb1-9694-cf4c0a600cb4-kube-api-access-h8wq8\") pod \"redhat-marketplace-rxz2p\" (UID: \"92052ac0-1f17-4fb1-9694-cf4c0a600cb4\") " pod="openshift-marketplace/redhat-marketplace-rxz2p" Jan 09 14:19:18 crc kubenswrapper[4919]: I0109 14:19:18.626702 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rxz2p" Jan 09 14:19:20 crc kubenswrapper[4919]: E0109 14:19:20.677004 4919 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Jan 09 14:19:20 crc kubenswrapper[4919]: E0109 14:19:20.677503 4919 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7s9dv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,
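The pull of the tempest image fails with a gRPC Canceled status because the copy's context was cancelled mid-transfer, and the next entries show the resulting ErrImagePull and ImagePullBackOff. The sketch below reproduces how a cancelled context surfaces from a pull-like call (stdlib only; the real CRI PullImage is a gRPC RPC, and the error text here merely mimics the logged one):

```go
package main

import (
	"context"
	"errors"
	"fmt"
)

// pullImage stands in for a CRI ImageService PullImage call. If the caller's
// context is already cancelled, the pull is abandoned and the cancellation is
// reported, which kubelet records as ErrImagePull before backing off.
func pullImage(ctx context.Context, image string) error {
	if err := ctx.Err(); err != nil {
		return fmt.Errorf("PullImage %s failed: rpc error: code = Canceled desc = copying config: %w", image, err)
	}
	return nil // a real implementation would stream layers here
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	cancel() // simulate the pull being abandoned mid-copy, as in the log above
	err := pullImage(ctx, "quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified")
	fmt.Println(errors.Is(err, context.Canceled), err) // true, plus the wrapped message
}
```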
Jan 09 14:19:20 crc kubenswrapper[4919]: E0109 14:19:20.677503 4919 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7s9dv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(f53c17d7-be4d-4bcf-aea4-2617abf3d9ea): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 09 14:19:20 crc kubenswrapper[4919]: E0109 14:19:20.678743 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="f53c17d7-be4d-4bcf-aea4-2617abf3d9ea"
Jan 09 14:19:20 crc kubenswrapper[4919]: I0109 14:19:20.838675 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rxz2p"]
Jan 09 14:19:20 crc kubenswrapper[4919]: I0109 14:19:20.872265 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rxz2p" event={"ID":"92052ac0-1f17-4fb1-9694-cf4c0a600cb4","Type":"ContainerStarted","Data":"5537bf9be4b8d6198d26c57bd3c63990cc8373397f61f5bf58393033cdbad10e"}
Jan 09 14:19:20 crc kubenswrapper[4919]: E0109 14:19:20.878087 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="f53c17d7-be4d-4bcf-aea4-2617abf3d9ea"
Jan 09 14:19:21 crc kubenswrapper[4919]: I0109 14:19:21.246740 4919 patch_prober.go:28] interesting pod/machine-config-daemon-9m5lv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 09 14:19:21 crc kubenswrapper[4919]: I0109 14:19:21.246838 4919 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 09 14:19:21 crc kubenswrapper[4919]: I0109 14:19:21.887715 4919 generic.go:334] "Generic (PLEG): container finished" podID="92052ac0-1f17-4fb1-9694-cf4c0a600cb4" containerID="3c295a65019232aabd660bab5879681044b84d0d97382231a6ff89020aace05e" exitCode=0
Jan 09 14:19:21 crc kubenswrapper[4919]: I0109 14:19:21.887877 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rxz2p" event={"ID":"92052ac0-1f17-4fb1-9694-cf4c0a600cb4","Type":"ContainerDied","Data":"3c295a65019232aabd660bab5879681044b84d0d97382231a6ff89020aace05e"}
Jan 09 14:19:22 crc kubenswrapper[4919]: I0109 14:19:22.898368 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rxz2p" event={"ID":"92052ac0-1f17-4fb1-9694-cf4c0a600cb4","Type":"ContainerStarted","Data":"94699e68a84057e0bfe05f7b58925cb86cac7a4b6450e822347ec93d1b1aa9eb"}
Jan 09 14:19:23 crc kubenswrapper[4919]: I0109 14:19:23.909176 4919 generic.go:334] "Generic (PLEG): container finished" podID="92052ac0-1f17-4fb1-9694-cf4c0a600cb4" containerID="94699e68a84057e0bfe05f7b58925cb86cac7a4b6450e822347ec93d1b1aa9eb" exitCode=0
Jan 09 14:19:23 crc kubenswrapper[4919]: I0109 14:19:23.909262 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rxz2p" event={"ID":"92052ac0-1f17-4fb1-9694-cf4c0a600cb4","Type":"ContainerDied","Data":"94699e68a84057e0bfe05f7b58925cb86cac7a4b6450e822347ec93d1b1aa9eb"}
Jan 09 14:19:24 crc kubenswrapper[4919]: I0109 14:19:24.920626 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rxz2p" event={"ID":"92052ac0-1f17-4fb1-9694-cf4c0a600cb4","Type":"ContainerStarted","Data":"cd7ddc23e9f05dd837b98a78331e34de3b5467715e57a863b66a37eea9465acc"}
Jan 09 14:19:24 crc kubenswrapper[4919]: I0109 14:19:24.948018 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rxz2p" podStartSLOduration=4.147997543 podStartE2EDuration="6.947987382s" podCreationTimestamp="2026-01-09 14:19:18 +0000 UTC" firstStartedPulling="2026-01-09 14:19:21.891036675 +0000 UTC m=+2941.438876125" lastFinishedPulling="2026-01-09 14:19:24.691026514 +0000 UTC m=+2944.238865964" observedRunningTime="2026-01-09 14:19:24.9404406 +0000 UTC m=+2944.488280060" watchObservedRunningTime="2026-01-09 14:19:24.947987382 +0000 UTC m=+2944.495826852"
Jan 09 14:19:28 crc kubenswrapper[4919]: I0109 14:19:28.627304 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rxz2p"
Jan 09 14:19:28 crc kubenswrapper[4919]: I0109 14:19:28.627786 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rxz2p"
Jan 09 14:19:28 crc kubenswrapper[4919]: I0109 14:19:28.673013 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rxz2p"
Jan 09 14:19:35 crc kubenswrapper[4919]: I0109 14:19:35.382644 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0"
Jan 09 14:19:37 crc kubenswrapper[4919]: I0109 14:19:37.026544 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"f53c17d7-be4d-4bcf-aea4-2617abf3d9ea","Type":"ContainerStarted","Data":"a1f043d3924c4d664d9d1b19bf6f24f0efe9ffcaebe14d6c0e010352697070eb"}
Jan 09 14:19:38 crc kubenswrapper[4919]: I0109 14:19:38.675582 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rxz2p"
Jan 09 14:19:38 crc kubenswrapper[4919]: I0109 14:19:38.698207 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=5.971369949 podStartE2EDuration="51.698187605s" podCreationTimestamp="2026-01-09 14:18:47 +0000 UTC" firstStartedPulling="2026-01-09 14:18:49.652857507 +0000 UTC m=+2909.200696957" lastFinishedPulling="2026-01-09 14:19:35.379675163 +0000 UTC m=+2954.927514613" observedRunningTime="2026-01-09 14:19:37.045401024 +0000 UTC m=+2956.593240524" watchObservedRunningTime="2026-01-09 14:19:38.698187605 +0000 UTC m=+2958.246027055"
Jan 09 14:19:38 crc kubenswrapper[4919]: I0109 14:19:38.727700 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rxz2p"]
Jan 09 14:19:39 crc kubenswrapper[4919]: I0109 14:19:39.044596 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rxz2p" podUID="92052ac0-1f17-4fb1-9694-cf4c0a600cb4" containerName="registry-server" containerID="cri-o://cd7ddc23e9f05dd837b98a78331e34de3b5467715e57a863b66a37eea9465acc" gracePeriod=2
Jan 09 14:19:39 crc kubenswrapper[4919]: I0109 14:19:39.499547 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rxz2p"
Jan 09 14:19:39 crc kubenswrapper[4919]: I0109 14:19:39.615958 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8wq8\" (UniqueName: \"kubernetes.io/projected/92052ac0-1f17-4fb1-9694-cf4c0a600cb4-kube-api-access-h8wq8\") pod \"92052ac0-1f17-4fb1-9694-cf4c0a600cb4\" (UID: \"92052ac0-1f17-4fb1-9694-cf4c0a600cb4\") "
Jan 09 14:19:39 crc kubenswrapper[4919]: I0109 14:19:39.616243 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92052ac0-1f17-4fb1-9694-cf4c0a600cb4-utilities\") pod \"92052ac0-1f17-4fb1-9694-cf4c0a600cb4\" (UID: \"92052ac0-1f17-4fb1-9694-cf4c0a600cb4\") "
Jan 09 14:19:39 crc kubenswrapper[4919]: I0109 14:19:39.616440 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92052ac0-1f17-4fb1-9694-cf4c0a600cb4-catalog-content\") pod \"92052ac0-1f17-4fb1-9694-cf4c0a600cb4\" (UID: \"92052ac0-1f17-4fb1-9694-cf4c0a600cb4\") "
Jan 09 14:19:39 crc kubenswrapper[4919]: I0109 14:19:39.617289 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92052ac0-1f17-4fb1-9694-cf4c0a600cb4-utilities" (OuterVolumeSpecName: "utilities") pod "92052ac0-1f17-4fb1-9694-cf4c0a600cb4" (UID: "92052ac0-1f17-4fb1-9694-cf4c0a600cb4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 14:19:39 crc kubenswrapper[4919]: I0109 14:19:39.622916 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92052ac0-1f17-4fb1-9694-cf4c0a600cb4-kube-api-access-h8wq8" (OuterVolumeSpecName: "kube-api-access-h8wq8") pod "92052ac0-1f17-4fb1-9694-cf4c0a600cb4" (UID: "92052ac0-1f17-4fb1-9694-cf4c0a600cb4"). InnerVolumeSpecName "kube-api-access-h8wq8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 14:19:39 crc kubenswrapper[4919]: I0109 14:19:39.642794 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92052ac0-1f17-4fb1-9694-cf4c0a600cb4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "92052ac0-1f17-4fb1-9694-cf4c0a600cb4" (UID: "92052ac0-1f17-4fb1-9694-cf4c0a600cb4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 14:19:39 crc kubenswrapper[4919]: I0109 14:19:39.718889 4919 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92052ac0-1f17-4fb1-9694-cf4c0a600cb4-utilities\") on node \"crc\" DevicePath \"\""
Jan 09 14:19:39 crc kubenswrapper[4919]: I0109 14:19:39.718923 4919 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92052ac0-1f17-4fb1-9694-cf4c0a600cb4-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 09 14:19:39 crc kubenswrapper[4919]: I0109 14:19:39.718934 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8wq8\" (UniqueName: \"kubernetes.io/projected/92052ac0-1f17-4fb1-9694-cf4c0a600cb4-kube-api-access-h8wq8\") on node \"crc\" DevicePath \"\""
Jan 09 14:19:40 crc kubenswrapper[4919]: I0109 14:19:40.056118 4919 generic.go:334] "Generic (PLEG): container finished" podID="92052ac0-1f17-4fb1-9694-cf4c0a600cb4" containerID="cd7ddc23e9f05dd837b98a78331e34de3b5467715e57a863b66a37eea9465acc" exitCode=0
Jan 09 14:19:40 crc kubenswrapper[4919]: I0109 14:19:40.056167 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rxz2p" event={"ID":"92052ac0-1f17-4fb1-9694-cf4c0a600cb4","Type":"ContainerDied","Data":"cd7ddc23e9f05dd837b98a78331e34de3b5467715e57a863b66a37eea9465acc"}
Jan 09 14:19:40 crc kubenswrapper[4919]: I0109 14:19:40.056182 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rxz2p"
Jan 09 14:19:40 crc kubenswrapper[4919]: I0109 14:19:40.056201 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rxz2p" event={"ID":"92052ac0-1f17-4fb1-9694-cf4c0a600cb4","Type":"ContainerDied","Data":"5537bf9be4b8d6198d26c57bd3c63990cc8373397f61f5bf58393033cdbad10e"}
Jan 09 14:19:40 crc kubenswrapper[4919]: I0109 14:19:40.056282 4919 scope.go:117] "RemoveContainer" containerID="cd7ddc23e9f05dd837b98a78331e34de3b5467715e57a863b66a37eea9465acc"
Jan 09 14:19:40 crc kubenswrapper[4919]: I0109 14:19:40.084075 4919 scope.go:117] "RemoveContainer" containerID="94699e68a84057e0bfe05f7b58925cb86cac7a4b6450e822347ec93d1b1aa9eb"
Jan 09 14:19:40 crc kubenswrapper[4919]: I0109 14:19:40.094050 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rxz2p"]
Jan 09 14:19:40 crc kubenswrapper[4919]: I0109 14:19:40.106644 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rxz2p"]
Jan 09 14:19:40 crc kubenswrapper[4919]: I0109 14:19:40.110184 4919 scope.go:117] "RemoveContainer" containerID="3c295a65019232aabd660bab5879681044b84d0d97382231a6ff89020aace05e"
Jan 09 14:19:40 crc kubenswrapper[4919]: I0109 14:19:40.156816 4919 scope.go:117] "RemoveContainer" containerID="cd7ddc23e9f05dd837b98a78331e34de3b5467715e57a863b66a37eea9465acc"
Jan 09 14:19:40 crc kubenswrapper[4919]: E0109 14:19:40.157804 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd7ddc23e9f05dd837b98a78331e34de3b5467715e57a863b66a37eea9465acc\": container with ID starting with cd7ddc23e9f05dd837b98a78331e34de3b5467715e57a863b66a37eea9465acc not found: ID does not exist" containerID="cd7ddc23e9f05dd837b98a78331e34de3b5467715e57a863b66a37eea9465acc"
Jan 09 14:19:40 crc kubenswrapper[4919]: I0109 14:19:40.157848 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd7ddc23e9f05dd837b98a78331e34de3b5467715e57a863b66a37eea9465acc"} err="failed to get container status \"cd7ddc23e9f05dd837b98a78331e34de3b5467715e57a863b66a37eea9465acc\": rpc error: code = NotFound desc = could not find container \"cd7ddc23e9f05dd837b98a78331e34de3b5467715e57a863b66a37eea9465acc\": container with ID starting with cd7ddc23e9f05dd837b98a78331e34de3b5467715e57a863b66a37eea9465acc not found: ID does not exist"
Jan 09 14:19:40 crc kubenswrapper[4919]: I0109 14:19:40.157877 4919 scope.go:117] "RemoveContainer" containerID="94699e68a84057e0bfe05f7b58925cb86cac7a4b6450e822347ec93d1b1aa9eb"
Jan 09 14:19:40 crc kubenswrapper[4919]: E0109 14:19:40.158263 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94699e68a84057e0bfe05f7b58925cb86cac7a4b6450e822347ec93d1b1aa9eb\": container with ID starting with 94699e68a84057e0bfe05f7b58925cb86cac7a4b6450e822347ec93d1b1aa9eb not found: ID does not exist" containerID="94699e68a84057e0bfe05f7b58925cb86cac7a4b6450e822347ec93d1b1aa9eb"
Jan 09 14:19:40 crc kubenswrapper[4919]: I0109 14:19:40.158478 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94699e68a84057e0bfe05f7b58925cb86cac7a4b6450e822347ec93d1b1aa9eb"} err="failed to get container status \"94699e68a84057e0bfe05f7b58925cb86cac7a4b6450e822347ec93d1b1aa9eb\": rpc error: code = NotFound desc = could not find container \"94699e68a84057e0bfe05f7b58925cb86cac7a4b6450e822347ec93d1b1aa9eb\": container with ID starting with 94699e68a84057e0bfe05f7b58925cb86cac7a4b6450e822347ec93d1b1aa9eb not found: ID does not exist"
Jan 09 14:19:40 crc kubenswrapper[4919]: I0109 14:19:40.158573 4919 scope.go:117] "RemoveContainer" containerID="3c295a65019232aabd660bab5879681044b84d0d97382231a6ff89020aace05e"
Jan 09 14:19:40 crc kubenswrapper[4919]: E0109 14:19:40.159291 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c295a65019232aabd660bab5879681044b84d0d97382231a6ff89020aace05e\": container with ID starting with 3c295a65019232aabd660bab5879681044b84d0d97382231a6ff89020aace05e not found: ID does not exist" containerID="3c295a65019232aabd660bab5879681044b84d0d97382231a6ff89020aace05e"
Jan 09 14:19:40 crc kubenswrapper[4919]: I0109 14:19:40.159457 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c295a65019232aabd660bab5879681044b84d0d97382231a6ff89020aace05e"} err="failed to get container status \"3c295a65019232aabd660bab5879681044b84d0d97382231a6ff89020aace05e\": rpc error: code = NotFound desc = could not find container \"3c295a65019232aabd660bab5879681044b84d0d97382231a6ff89020aace05e\": container with ID starting with 3c295a65019232aabd660bab5879681044b84d0d97382231a6ff89020aace05e not found: ID does not exist"
Jan 09 14:19:40 crc kubenswrapper[4919]: I0109 14:19:40.763845 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92052ac0-1f17-4fb1-9694-cf4c0a600cb4" path="/var/lib/kubelet/pods/92052ac0-1f17-4fb1-9694-cf4c0a600cb4/volumes"
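After the registry-server containers are deleted, the follow-up ContainerStatus calls return NotFound; the kubelet logs the error but moves on, because removal is idempotent and "already gone" counts as done. The same tolerate-NotFound pattern in miniature (types invented for illustration; the error text mimics the CRI one):

```go
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("NotFound: ID does not exist")

type fakeRuntime struct{ containers map[string]bool }

func (r *fakeRuntime) containerStatus(id string) error {
	if !r.containers[id] {
		return fmt.Errorf("could not find container %q: %w", id, errNotFound)
	}
	return nil
}

// removeContainer is idempotent: a NotFound status means the container is
// already gone, so removal is treated as success rather than retried.
func (r *fakeRuntime) removeContainer(id string) error {
	if err := r.containerStatus(id); err != nil {
		if errors.Is(err, errNotFound) {
			fmt.Printf("DeleteContainer: %v (already removed, ignoring)\n", err)
			return nil
		}
		return err
	}
	delete(r.containers, id)
	return nil
}

func main() {
	r := &fakeRuntime{containers: map[string]bool{}}
	_ = r.removeContainer("cd7ddc23e9f0…") // ID truncated here for readability
}
```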
Jan 09 14:19:51 crc kubenswrapper[4919]: I0109 14:19:51.247431 4919 patch_prober.go:28] interesting pod/machine-config-daemon-9m5lv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 09 14:19:51 crc kubenswrapper[4919]: I0109 14:19:51.247998 4919 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 09 14:19:51 crc kubenswrapper[4919]: I0109 14:19:51.248046 4919 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv"
Jan 09 14:19:51 crc kubenswrapper[4919]: I0109 14:19:51.248964 4919 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c34e6f5119e50b5ceea7bd407492c9aa47955192912fb6c846845e2b4834c11f"} pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 09 14:19:51 crc kubenswrapper[4919]: I0109 14:19:51.249021 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" containerID="cri-o://c34e6f5119e50b5ceea7bd407492c9aa47955192912fb6c846845e2b4834c11f" gracePeriod=600
Jan 09 14:19:51 crc kubenswrapper[4919]: E0109 14:19:51.371097 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c"
Jan 09 14:19:52 crc kubenswrapper[4919]: I0109 14:19:52.173532 4919 generic.go:334] "Generic (PLEG): container finished" podID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerID="c34e6f5119e50b5ceea7bd407492c9aa47955192912fb6c846845e2b4834c11f" exitCode=0
Jan 09 14:19:52 crc kubenswrapper[4919]: I0109 14:19:52.173630 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" event={"ID":"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c","Type":"ContainerDied","Data":"c34e6f5119e50b5ceea7bd407492c9aa47955192912fb6c846845e2b4834c11f"}
Jan 09 14:19:52 crc kubenswrapper[4919]: I0109 14:19:52.173869 4919 scope.go:117] "RemoveContainer" containerID="679d9025ae6777d87901a54436516242183495cc09d48edeeb0c1ab27d036468"
Jan 09 14:19:52 crc kubenswrapper[4919]: I0109 14:19:52.175019 4919 scope.go:117] "RemoveContainer" containerID="c34e6f5119e50b5ceea7bd407492c9aa47955192912fb6c846845e2b4834c11f"
Jan 09 14:19:52 crc kubenswrapper[4919]: E0109 14:19:52.175352 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c"
Jan 09 14:20:05 crc kubenswrapper[4919]: I0109 14:20:05.751841 4919 scope.go:117] "RemoveContainer" containerID="c34e6f5119e50b5ceea7bd407492c9aa47955192912fb6c846845e2b4834c11f"
Jan 09 14:20:05 crc kubenswrapper[4919]: E0109 14:20:05.752667 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c"
Jan 09 14:20:18 crc kubenswrapper[4919]: I0109 14:20:18.752741 4919 scope.go:117] "RemoveContainer" containerID="c34e6f5119e50b5ceea7bd407492c9aa47955192912fb6c846845e2b4834c11f"
Jan 09 14:20:18 crc kubenswrapper[4919]: E0109 14:20:18.753551 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c"
Jan 09 14:20:30 crc kubenswrapper[4919]: I0109 14:20:30.757707 4919 scope.go:117] "RemoveContainer" containerID="c34e6f5119e50b5ceea7bd407492c9aa47955192912fb6c846845e2b4834c11f"
Jan 09 14:20:30 crc kubenswrapper[4919]: E0109 14:20:30.758494 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c"
Jan 09 14:20:42 crc kubenswrapper[4919]: I0109 14:20:42.752020 4919 scope.go:117] "RemoveContainer" containerID="c34e6f5119e50b5ceea7bd407492c9aa47955192912fb6c846845e2b4834c11f"
Jan 09 14:20:42 crc kubenswrapper[4919]: E0109 14:20:42.753972 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c"
Jan 09 14:20:55 crc kubenswrapper[4919]: I0109 14:20:55.751444 4919 scope.go:117] "RemoveContainer" containerID="c34e6f5119e50b5ceea7bd407492c9aa47955192912fb6c846845e2b4834c11f"
Jan 09 14:20:55 crc kubenswrapper[4919]: E0109 14:20:55.752280 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c"
Jan 09 14:21:07 crc kubenswrapper[4919]: I0109 14:21:07.751977 4919 scope.go:117] "RemoveContainer" containerID="c34e6f5119e50b5ceea7bd407492c9aa47955192912fb6c846845e2b4834c11f"
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:21:18 crc kubenswrapper[4919]: I0109 14:21:18.752076 4919 scope.go:117] "RemoveContainer" containerID="c34e6f5119e50b5ceea7bd407492c9aa47955192912fb6c846845e2b4834c11f" Jan 09 14:21:18 crc kubenswrapper[4919]: E0109 14:21:18.752820 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:21:30 crc kubenswrapper[4919]: I0109 14:21:30.758512 4919 scope.go:117] "RemoveContainer" containerID="c34e6f5119e50b5ceea7bd407492c9aa47955192912fb6c846845e2b4834c11f" Jan 09 14:21:30 crc kubenswrapper[4919]: E0109 14:21:30.759338 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:21:42 crc kubenswrapper[4919]: I0109 14:21:42.752501 4919 scope.go:117] "RemoveContainer" containerID="c34e6f5119e50b5ceea7bd407492c9aa47955192912fb6c846845e2b4834c11f" Jan 09 14:21:42 crc kubenswrapper[4919]: E0109 14:21:42.753289 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:21:57 crc kubenswrapper[4919]: I0109 14:21:57.752362 4919 scope.go:117] "RemoveContainer" containerID="c34e6f5119e50b5ceea7bd407492c9aa47955192912fb6c846845e2b4834c11f" Jan 09 14:21:57 crc kubenswrapper[4919]: E0109 14:21:57.753071 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:22:11 crc kubenswrapper[4919]: I0109 14:22:11.751442 4919 scope.go:117] "RemoveContainer" containerID="c34e6f5119e50b5ceea7bd407492c9aa47955192912fb6c846845e2b4834c11f" Jan 09 14:22:11 crc kubenswrapper[4919]: E0109 14:22:11.752290 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:22:22 crc kubenswrapper[4919]: I0109 14:22:22.752466 4919 scope.go:117] "RemoveContainer" containerID="c34e6f5119e50b5ceea7bd407492c9aa47955192912fb6c846845e2b4834c11f" Jan 09 14:22:22 crc kubenswrapper[4919]: E0109 14:22:22.753288 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:22:34 crc kubenswrapper[4919]: I0109 14:22:34.751137 4919 scope.go:117] "RemoveContainer" containerID="c34e6f5119e50b5ceea7bd407492c9aa47955192912fb6c846845e2b4834c11f" Jan 09 14:22:34 crc kubenswrapper[4919]: E0109 14:22:34.751899 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:22:49 crc kubenswrapper[4919]: I0109 14:22:49.752827 4919 scope.go:117] "RemoveContainer" containerID="c34e6f5119e50b5ceea7bd407492c9aa47955192912fb6c846845e2b4834c11f" Jan 09 14:22:49 crc kubenswrapper[4919]: E0109 14:22:49.753514 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:23:00 crc kubenswrapper[4919]: I0109 14:23:00.757947 4919 scope.go:117] "RemoveContainer" containerID="c34e6f5119e50b5ceea7bd407492c9aa47955192912fb6c846845e2b4834c11f" Jan 09 14:23:00 crc kubenswrapper[4919]: E0109 14:23:00.758565 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:23:12 crc kubenswrapper[4919]: I0109 14:23:12.752402 4919 scope.go:117] "RemoveContainer" containerID="c34e6f5119e50b5ceea7bd407492c9aa47955192912fb6c846845e2b4834c11f" Jan 09 14:23:12 crc kubenswrapper[4919]: E0109 14:23:12.753303 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" 
podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:23:23 crc kubenswrapper[4919]: I0109 14:23:23.752835 4919 scope.go:117] "RemoveContainer" containerID="c34e6f5119e50b5ceea7bd407492c9aa47955192912fb6c846845e2b4834c11f" Jan 09 14:23:23 crc kubenswrapper[4919]: E0109 14:23:23.753899 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:23:37 crc kubenswrapper[4919]: I0109 14:23:37.751710 4919 scope.go:117] "RemoveContainer" containerID="c34e6f5119e50b5ceea7bd407492c9aa47955192912fb6c846845e2b4834c11f" Jan 09 14:23:37 crc kubenswrapper[4919]: E0109 14:23:37.752613 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:23:48 crc kubenswrapper[4919]: I0109 14:23:48.752138 4919 scope.go:117] "RemoveContainer" containerID="c34e6f5119e50b5ceea7bd407492c9aa47955192912fb6c846845e2b4834c11f" Jan 09 14:23:48 crc kubenswrapper[4919]: E0109 14:23:48.752904 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:23:59 crc kubenswrapper[4919]: I0109 14:23:59.751549 4919 scope.go:117] "RemoveContainer" containerID="c34e6f5119e50b5ceea7bd407492c9aa47955192912fb6c846845e2b4834c11f" Jan 09 14:23:59 crc kubenswrapper[4919]: E0109 14:23:59.752459 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:24:14 crc kubenswrapper[4919]: I0109 14:24:14.751544 4919 scope.go:117] "RemoveContainer" containerID="c34e6f5119e50b5ceea7bd407492c9aa47955192912fb6c846845e2b4834c11f" Jan 09 14:24:14 crc kubenswrapper[4919]: E0109 14:24:14.752531 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:24:27 crc kubenswrapper[4919]: I0109 14:24:27.752537 4919 scope.go:117] "RemoveContainer" 
containerID="c34e6f5119e50b5ceea7bd407492c9aa47955192912fb6c846845e2b4834c11f" Jan 09 14:24:27 crc kubenswrapper[4919]: E0109 14:24:27.753503 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:24:36 crc kubenswrapper[4919]: I0109 14:24:36.673558 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-98bwz"] Jan 09 14:24:36 crc kubenswrapper[4919]: E0109 14:24:36.674482 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92052ac0-1f17-4fb1-9694-cf4c0a600cb4" containerName="extract-utilities" Jan 09 14:24:36 crc kubenswrapper[4919]: I0109 14:24:36.674498 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="92052ac0-1f17-4fb1-9694-cf4c0a600cb4" containerName="extract-utilities" Jan 09 14:24:36 crc kubenswrapper[4919]: E0109 14:24:36.674524 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92052ac0-1f17-4fb1-9694-cf4c0a600cb4" containerName="registry-server" Jan 09 14:24:36 crc kubenswrapper[4919]: I0109 14:24:36.674530 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="92052ac0-1f17-4fb1-9694-cf4c0a600cb4" containerName="registry-server" Jan 09 14:24:36 crc kubenswrapper[4919]: E0109 14:24:36.674544 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92052ac0-1f17-4fb1-9694-cf4c0a600cb4" containerName="extract-content" Jan 09 14:24:36 crc kubenswrapper[4919]: I0109 14:24:36.674550 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="92052ac0-1f17-4fb1-9694-cf4c0a600cb4" containerName="extract-content" Jan 09 14:24:36 crc kubenswrapper[4919]: I0109 14:24:36.674765 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="92052ac0-1f17-4fb1-9694-cf4c0a600cb4" containerName="registry-server" Jan 09 14:24:36 crc kubenswrapper[4919]: I0109 14:24:36.677964 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-98bwz" Jan 09 14:24:36 crc kubenswrapper[4919]: I0109 14:24:36.699808 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-98bwz"] Jan 09 14:24:36 crc kubenswrapper[4919]: I0109 14:24:36.791837 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c022834f-9ed0-41e6-9de7-63fffe64bdb8-catalog-content\") pod \"community-operators-98bwz\" (UID: \"c022834f-9ed0-41e6-9de7-63fffe64bdb8\") " pod="openshift-marketplace/community-operators-98bwz" Jan 09 14:24:36 crc kubenswrapper[4919]: I0109 14:24:36.792300 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c022834f-9ed0-41e6-9de7-63fffe64bdb8-utilities\") pod \"community-operators-98bwz\" (UID: \"c022834f-9ed0-41e6-9de7-63fffe64bdb8\") " pod="openshift-marketplace/community-operators-98bwz" Jan 09 14:24:36 crc kubenswrapper[4919]: I0109 14:24:36.792431 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s94j6\" (UniqueName: \"kubernetes.io/projected/c022834f-9ed0-41e6-9de7-63fffe64bdb8-kube-api-access-s94j6\") pod \"community-operators-98bwz\" (UID: \"c022834f-9ed0-41e6-9de7-63fffe64bdb8\") " pod="openshift-marketplace/community-operators-98bwz" Jan 09 14:24:36 crc kubenswrapper[4919]: I0109 14:24:36.894482 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c022834f-9ed0-41e6-9de7-63fffe64bdb8-utilities\") pod \"community-operators-98bwz\" (UID: \"c022834f-9ed0-41e6-9de7-63fffe64bdb8\") " pod="openshift-marketplace/community-operators-98bwz" Jan 09 14:24:36 crc kubenswrapper[4919]: I0109 14:24:36.894960 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s94j6\" (UniqueName: \"kubernetes.io/projected/c022834f-9ed0-41e6-9de7-63fffe64bdb8-kube-api-access-s94j6\") pod \"community-operators-98bwz\" (UID: \"c022834f-9ed0-41e6-9de7-63fffe64bdb8\") " pod="openshift-marketplace/community-operators-98bwz" Jan 09 14:24:36 crc kubenswrapper[4919]: I0109 14:24:36.895179 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c022834f-9ed0-41e6-9de7-63fffe64bdb8-utilities\") pod \"community-operators-98bwz\" (UID: \"c022834f-9ed0-41e6-9de7-63fffe64bdb8\") " pod="openshift-marketplace/community-operators-98bwz" Jan 09 14:24:36 crc kubenswrapper[4919]: I0109 14:24:36.895330 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c022834f-9ed0-41e6-9de7-63fffe64bdb8-catalog-content\") pod \"community-operators-98bwz\" (UID: \"c022834f-9ed0-41e6-9de7-63fffe64bdb8\") " pod="openshift-marketplace/community-operators-98bwz" Jan 09 14:24:36 crc kubenswrapper[4919]: I0109 14:24:36.896106 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c022834f-9ed0-41e6-9de7-63fffe64bdb8-catalog-content\") pod \"community-operators-98bwz\" (UID: \"c022834f-9ed0-41e6-9de7-63fffe64bdb8\") " pod="openshift-marketplace/community-operators-98bwz" Jan 09 14:24:36 crc kubenswrapper[4919]: I0109 14:24:36.915425 4919 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-s94j6\" (UniqueName: \"kubernetes.io/projected/c022834f-9ed0-41e6-9de7-63fffe64bdb8-kube-api-access-s94j6\") pod \"community-operators-98bwz\" (UID: \"c022834f-9ed0-41e6-9de7-63fffe64bdb8\") " pod="openshift-marketplace/community-operators-98bwz" Jan 09 14:24:36 crc kubenswrapper[4919]: I0109 14:24:36.997234 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-98bwz" Jan 09 14:24:37 crc kubenswrapper[4919]: I0109 14:24:37.611693 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-98bwz"] Jan 09 14:24:37 crc kubenswrapper[4919]: I0109 14:24:37.680805 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jvzdk"] Jan 09 14:24:37 crc kubenswrapper[4919]: I0109 14:24:37.683653 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jvzdk" Jan 09 14:24:37 crc kubenswrapper[4919]: I0109 14:24:37.692110 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jvzdk"] Jan 09 14:24:37 crc kubenswrapper[4919]: I0109 14:24:37.821100 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n6bc\" (UniqueName: \"kubernetes.io/projected/3cfabd20-7d08-408f-821d-c99dafa575ac-kube-api-access-5n6bc\") pod \"certified-operators-jvzdk\" (UID: \"3cfabd20-7d08-408f-821d-c99dafa575ac\") " pod="openshift-marketplace/certified-operators-jvzdk" Jan 09 14:24:37 crc kubenswrapper[4919]: I0109 14:24:37.821191 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cfabd20-7d08-408f-821d-c99dafa575ac-catalog-content\") pod \"certified-operators-jvzdk\" (UID: \"3cfabd20-7d08-408f-821d-c99dafa575ac\") " pod="openshift-marketplace/certified-operators-jvzdk" Jan 09 14:24:37 crc kubenswrapper[4919]: I0109 14:24:37.821245 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cfabd20-7d08-408f-821d-c99dafa575ac-utilities\") pod \"certified-operators-jvzdk\" (UID: \"3cfabd20-7d08-408f-821d-c99dafa575ac\") " pod="openshift-marketplace/certified-operators-jvzdk" Jan 09 14:24:37 crc kubenswrapper[4919]: I0109 14:24:37.923315 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cfabd20-7d08-408f-821d-c99dafa575ac-catalog-content\") pod \"certified-operators-jvzdk\" (UID: \"3cfabd20-7d08-408f-821d-c99dafa575ac\") " pod="openshift-marketplace/certified-operators-jvzdk" Jan 09 14:24:37 crc kubenswrapper[4919]: I0109 14:24:37.923669 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cfabd20-7d08-408f-821d-c99dafa575ac-utilities\") pod \"certified-operators-jvzdk\" (UID: \"3cfabd20-7d08-408f-821d-c99dafa575ac\") " pod="openshift-marketplace/certified-operators-jvzdk" Jan 09 14:24:37 crc kubenswrapper[4919]: I0109 14:24:37.923873 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5n6bc\" (UniqueName: \"kubernetes.io/projected/3cfabd20-7d08-408f-821d-c99dafa575ac-kube-api-access-5n6bc\") pod \"certified-operators-jvzdk\" (UID: 
\"3cfabd20-7d08-408f-821d-c99dafa575ac\") " pod="openshift-marketplace/certified-operators-jvzdk" Jan 09 14:24:37 crc kubenswrapper[4919]: I0109 14:24:37.923904 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cfabd20-7d08-408f-821d-c99dafa575ac-catalog-content\") pod \"certified-operators-jvzdk\" (UID: \"3cfabd20-7d08-408f-821d-c99dafa575ac\") " pod="openshift-marketplace/certified-operators-jvzdk" Jan 09 14:24:37 crc kubenswrapper[4919]: I0109 14:24:37.924129 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cfabd20-7d08-408f-821d-c99dafa575ac-utilities\") pod \"certified-operators-jvzdk\" (UID: \"3cfabd20-7d08-408f-821d-c99dafa575ac\") " pod="openshift-marketplace/certified-operators-jvzdk" Jan 09 14:24:37 crc kubenswrapper[4919]: I0109 14:24:37.946056 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5n6bc\" (UniqueName: \"kubernetes.io/projected/3cfabd20-7d08-408f-821d-c99dafa575ac-kube-api-access-5n6bc\") pod \"certified-operators-jvzdk\" (UID: \"3cfabd20-7d08-408f-821d-c99dafa575ac\") " pod="openshift-marketplace/certified-operators-jvzdk" Jan 09 14:24:38 crc kubenswrapper[4919]: I0109 14:24:38.056346 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jvzdk" Jan 09 14:24:38 crc kubenswrapper[4919]: I0109 14:24:38.568930 4919 generic.go:334] "Generic (PLEG): container finished" podID="c022834f-9ed0-41e6-9de7-63fffe64bdb8" containerID="11c0b82ee92d20959ea07743203c713ffae75a57b7a847508ac3ffe1d548cccc" exitCode=0 Jan 09 14:24:38 crc kubenswrapper[4919]: I0109 14:24:38.569047 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-98bwz" event={"ID":"c022834f-9ed0-41e6-9de7-63fffe64bdb8","Type":"ContainerDied","Data":"11c0b82ee92d20959ea07743203c713ffae75a57b7a847508ac3ffe1d548cccc"} Jan 09 14:24:38 crc kubenswrapper[4919]: I0109 14:24:38.569248 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-98bwz" event={"ID":"c022834f-9ed0-41e6-9de7-63fffe64bdb8","Type":"ContainerStarted","Data":"e41da97bfd2b1eed64e255ed0fd967927b7e27e394b349899b3072e7ee42f443"} Jan 09 14:24:38 crc kubenswrapper[4919]: I0109 14:24:38.572101 4919 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 09 14:24:38 crc kubenswrapper[4919]: I0109 14:24:38.586680 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jvzdk"] Jan 09 14:24:38 crc kubenswrapper[4919]: W0109 14:24:38.587349 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3cfabd20_7d08_408f_821d_c99dafa575ac.slice/crio-0e87458fc4c87339f8d78221680063477ac355705bc22b1c25be482359fc080b WatchSource:0}: Error finding container 0e87458fc4c87339f8d78221680063477ac355705bc22b1c25be482359fc080b: Status 404 returned error can't find the container with id 0e87458fc4c87339f8d78221680063477ac355705bc22b1c25be482359fc080b Jan 09 14:24:39 crc kubenswrapper[4919]: I0109 14:24:39.580882 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-98bwz" 
event={"ID":"c022834f-9ed0-41e6-9de7-63fffe64bdb8","Type":"ContainerStarted","Data":"9c6d039b3282533307a22569aef4052869ee886a0e8e54d1db3b332ccadabb53"} Jan 09 14:24:39 crc kubenswrapper[4919]: I0109 14:24:39.583272 4919 generic.go:334] "Generic (PLEG): container finished" podID="3cfabd20-7d08-408f-821d-c99dafa575ac" containerID="486f7e21a6146880e21d0bc23151a831518f0f8b701363361dca57bf2ec7816d" exitCode=0 Jan 09 14:24:39 crc kubenswrapper[4919]: I0109 14:24:39.583307 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jvzdk" event={"ID":"3cfabd20-7d08-408f-821d-c99dafa575ac","Type":"ContainerDied","Data":"486f7e21a6146880e21d0bc23151a831518f0f8b701363361dca57bf2ec7816d"} Jan 09 14:24:39 crc kubenswrapper[4919]: I0109 14:24:39.583326 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jvzdk" event={"ID":"3cfabd20-7d08-408f-821d-c99dafa575ac","Type":"ContainerStarted","Data":"0e87458fc4c87339f8d78221680063477ac355705bc22b1c25be482359fc080b"} Jan 09 14:24:40 crc kubenswrapper[4919]: I0109 14:24:40.595071 4919 generic.go:334] "Generic (PLEG): container finished" podID="c022834f-9ed0-41e6-9de7-63fffe64bdb8" containerID="9c6d039b3282533307a22569aef4052869ee886a0e8e54d1db3b332ccadabb53" exitCode=0 Jan 09 14:24:40 crc kubenswrapper[4919]: I0109 14:24:40.595323 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-98bwz" event={"ID":"c022834f-9ed0-41e6-9de7-63fffe64bdb8","Type":"ContainerDied","Data":"9c6d039b3282533307a22569aef4052869ee886a0e8e54d1db3b332ccadabb53"} Jan 09 14:24:41 crc kubenswrapper[4919]: I0109 14:24:41.606389 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jvzdk" event={"ID":"3cfabd20-7d08-408f-821d-c99dafa575ac","Type":"ContainerStarted","Data":"6b69b89e465e41bdfb2ebf15a07117dfaa37aa0a06f2b1caaaa8f27c1cdf157b"} Jan 09 14:24:42 crc kubenswrapper[4919]: I0109 14:24:42.618026 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-98bwz" event={"ID":"c022834f-9ed0-41e6-9de7-63fffe64bdb8","Type":"ContainerStarted","Data":"4fecd7129dbd7096d8d1e38f3c16b7ab059f10060201e91909ba99db542f330c"} Jan 09 14:24:42 crc kubenswrapper[4919]: I0109 14:24:42.637643 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-98bwz" podStartSLOduration=3.939088673 podStartE2EDuration="6.637618341s" podCreationTimestamp="2026-01-09 14:24:36 +0000 UTC" firstStartedPulling="2026-01-09 14:24:38.571847467 +0000 UTC m=+3258.119686917" lastFinishedPulling="2026-01-09 14:24:41.270377135 +0000 UTC m=+3260.818216585" observedRunningTime="2026-01-09 14:24:42.633639929 +0000 UTC m=+3262.181479379" watchObservedRunningTime="2026-01-09 14:24:42.637618341 +0000 UTC m=+3262.185457791" Jan 09 14:24:42 crc kubenswrapper[4919]: I0109 14:24:42.751660 4919 scope.go:117] "RemoveContainer" containerID="c34e6f5119e50b5ceea7bd407492c9aa47955192912fb6c846845e2b4834c11f" Jan 09 14:24:42 crc kubenswrapper[4919]: E0109 14:24:42.751977 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" 
podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:24:43 crc kubenswrapper[4919]: I0109 14:24:43.629785 4919 generic.go:334] "Generic (PLEG): container finished" podID="3cfabd20-7d08-408f-821d-c99dafa575ac" containerID="6b69b89e465e41bdfb2ebf15a07117dfaa37aa0a06f2b1caaaa8f27c1cdf157b" exitCode=0 Jan 09 14:24:43 crc kubenswrapper[4919]: I0109 14:24:43.630446 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jvzdk" event={"ID":"3cfabd20-7d08-408f-821d-c99dafa575ac","Type":"ContainerDied","Data":"6b69b89e465e41bdfb2ebf15a07117dfaa37aa0a06f2b1caaaa8f27c1cdf157b"} Jan 09 14:24:44 crc kubenswrapper[4919]: I0109 14:24:44.642478 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jvzdk" event={"ID":"3cfabd20-7d08-408f-821d-c99dafa575ac","Type":"ContainerStarted","Data":"9074cfab206fdc46198e7bed43fb7433bd3d64ebd20d75d3f3c8a837937899ed"} Jan 09 14:24:44 crc kubenswrapper[4919]: I0109 14:24:44.695625 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jvzdk" podStartSLOduration=3.230320243 podStartE2EDuration="7.695592742s" podCreationTimestamp="2026-01-09 14:24:37 +0000 UTC" firstStartedPulling="2026-01-09 14:24:39.585019914 +0000 UTC m=+3259.132859364" lastFinishedPulling="2026-01-09 14:24:44.050292413 +0000 UTC m=+3263.598131863" observedRunningTime="2026-01-09 14:24:44.672921355 +0000 UTC m=+3264.220760825" watchObservedRunningTime="2026-01-09 14:24:44.695592742 +0000 UTC m=+3264.243432202" Jan 09 14:24:46 crc kubenswrapper[4919]: I0109 14:24:46.997952 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-98bwz" Jan 09 14:24:46 crc kubenswrapper[4919]: I0109 14:24:46.998544 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-98bwz" Jan 09 14:24:47 crc kubenswrapper[4919]: I0109 14:24:47.056498 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-98bwz" Jan 09 14:24:47 crc kubenswrapper[4919]: I0109 14:24:47.714225 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-98bwz" Jan 09 14:24:48 crc kubenswrapper[4919]: I0109 14:24:48.056654 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jvzdk" Jan 09 14:24:48 crc kubenswrapper[4919]: I0109 14:24:48.057010 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jvzdk" Jan 09 14:24:48 crc kubenswrapper[4919]: I0109 14:24:48.107105 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jvzdk" Jan 09 14:24:48 crc kubenswrapper[4919]: I0109 14:24:48.670919 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-98bwz"] Jan 09 14:24:49 crc kubenswrapper[4919]: I0109 14:24:49.690648 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-98bwz" podUID="c022834f-9ed0-41e6-9de7-63fffe64bdb8" containerName="registry-server" containerID="cri-o://4fecd7129dbd7096d8d1e38f3c16b7ab059f10060201e91909ba99db542f330c" gracePeriod=2 Jan 09 14:24:50 crc kubenswrapper[4919]: I0109 14:24:50.200004 4919 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-98bwz" Jan 09 14:24:50 crc kubenswrapper[4919]: I0109 14:24:50.276877 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c022834f-9ed0-41e6-9de7-63fffe64bdb8-catalog-content\") pod \"c022834f-9ed0-41e6-9de7-63fffe64bdb8\" (UID: \"c022834f-9ed0-41e6-9de7-63fffe64bdb8\") " Jan 09 14:24:50 crc kubenswrapper[4919]: I0109 14:24:50.276948 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c022834f-9ed0-41e6-9de7-63fffe64bdb8-utilities\") pod \"c022834f-9ed0-41e6-9de7-63fffe64bdb8\" (UID: \"c022834f-9ed0-41e6-9de7-63fffe64bdb8\") " Jan 09 14:24:50 crc kubenswrapper[4919]: I0109 14:24:50.276988 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s94j6\" (UniqueName: \"kubernetes.io/projected/c022834f-9ed0-41e6-9de7-63fffe64bdb8-kube-api-access-s94j6\") pod \"c022834f-9ed0-41e6-9de7-63fffe64bdb8\" (UID: \"c022834f-9ed0-41e6-9de7-63fffe64bdb8\") " Jan 09 14:24:50 crc kubenswrapper[4919]: I0109 14:24:50.277733 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c022834f-9ed0-41e6-9de7-63fffe64bdb8-utilities" (OuterVolumeSpecName: "utilities") pod "c022834f-9ed0-41e6-9de7-63fffe64bdb8" (UID: "c022834f-9ed0-41e6-9de7-63fffe64bdb8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 14:24:50 crc kubenswrapper[4919]: I0109 14:24:50.303103 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c022834f-9ed0-41e6-9de7-63fffe64bdb8-kube-api-access-s94j6" (OuterVolumeSpecName: "kube-api-access-s94j6") pod "c022834f-9ed0-41e6-9de7-63fffe64bdb8" (UID: "c022834f-9ed0-41e6-9de7-63fffe64bdb8"). InnerVolumeSpecName "kube-api-access-s94j6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 14:24:50 crc kubenswrapper[4919]: I0109 14:24:50.335192 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c022834f-9ed0-41e6-9de7-63fffe64bdb8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c022834f-9ed0-41e6-9de7-63fffe64bdb8" (UID: "c022834f-9ed0-41e6-9de7-63fffe64bdb8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 14:24:50 crc kubenswrapper[4919]: I0109 14:24:50.380024 4919 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c022834f-9ed0-41e6-9de7-63fffe64bdb8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 14:24:50 crc kubenswrapper[4919]: I0109 14:24:50.380059 4919 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c022834f-9ed0-41e6-9de7-63fffe64bdb8-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 14:24:50 crc kubenswrapper[4919]: I0109 14:24:50.380070 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s94j6\" (UniqueName: \"kubernetes.io/projected/c022834f-9ed0-41e6-9de7-63fffe64bdb8-kube-api-access-s94j6\") on node \"crc\" DevicePath \"\"" Jan 09 14:24:50 crc kubenswrapper[4919]: I0109 14:24:50.701363 4919 generic.go:334] "Generic (PLEG): container finished" podID="c022834f-9ed0-41e6-9de7-63fffe64bdb8" containerID="4fecd7129dbd7096d8d1e38f3c16b7ab059f10060201e91909ba99db542f330c" exitCode=0 Jan 09 14:24:50 crc kubenswrapper[4919]: I0109 14:24:50.701413 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-98bwz" Jan 09 14:24:50 crc kubenswrapper[4919]: I0109 14:24:50.701417 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-98bwz" event={"ID":"c022834f-9ed0-41e6-9de7-63fffe64bdb8","Type":"ContainerDied","Data":"4fecd7129dbd7096d8d1e38f3c16b7ab059f10060201e91909ba99db542f330c"} Jan 09 14:24:50 crc kubenswrapper[4919]: I0109 14:24:50.701448 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-98bwz" event={"ID":"c022834f-9ed0-41e6-9de7-63fffe64bdb8","Type":"ContainerDied","Data":"e41da97bfd2b1eed64e255ed0fd967927b7e27e394b349899b3072e7ee42f443"} Jan 09 14:24:50 crc kubenswrapper[4919]: I0109 14:24:50.701467 4919 scope.go:117] "RemoveContainer" containerID="4fecd7129dbd7096d8d1e38f3c16b7ab059f10060201e91909ba99db542f330c" Jan 09 14:24:50 crc kubenswrapper[4919]: I0109 14:24:50.733133 4919 scope.go:117] "RemoveContainer" containerID="9c6d039b3282533307a22569aef4052869ee886a0e8e54d1db3b332ccadabb53" Jan 09 14:24:50 crc kubenswrapper[4919]: I0109 14:24:50.745941 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-98bwz"] Jan 09 14:24:50 crc kubenswrapper[4919]: I0109 14:24:50.759882 4919 scope.go:117] "RemoveContainer" containerID="11c0b82ee92d20959ea07743203c713ffae75a57b7a847508ac3ffe1d548cccc" Jan 09 14:24:50 crc kubenswrapper[4919]: I0109 14:24:50.764268 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-98bwz"] Jan 09 14:24:50 crc kubenswrapper[4919]: I0109 14:24:50.808160 4919 scope.go:117] "RemoveContainer" containerID="4fecd7129dbd7096d8d1e38f3c16b7ab059f10060201e91909ba99db542f330c" Jan 09 14:24:50 crc kubenswrapper[4919]: E0109 14:24:50.808834 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fecd7129dbd7096d8d1e38f3c16b7ab059f10060201e91909ba99db542f330c\": container with ID starting with 4fecd7129dbd7096d8d1e38f3c16b7ab059f10060201e91909ba99db542f330c not found: ID does not exist" containerID="4fecd7129dbd7096d8d1e38f3c16b7ab059f10060201e91909ba99db542f330c" Jan 09 14:24:50 crc kubenswrapper[4919]: I0109 14:24:50.808865 
4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fecd7129dbd7096d8d1e38f3c16b7ab059f10060201e91909ba99db542f330c"} err="failed to get container status \"4fecd7129dbd7096d8d1e38f3c16b7ab059f10060201e91909ba99db542f330c\": rpc error: code = NotFound desc = could not find container \"4fecd7129dbd7096d8d1e38f3c16b7ab059f10060201e91909ba99db542f330c\": container with ID starting with 4fecd7129dbd7096d8d1e38f3c16b7ab059f10060201e91909ba99db542f330c not found: ID does not exist" Jan 09 14:24:50 crc kubenswrapper[4919]: I0109 14:24:50.808891 4919 scope.go:117] "RemoveContainer" containerID="9c6d039b3282533307a22569aef4052869ee886a0e8e54d1db3b332ccadabb53" Jan 09 14:24:50 crc kubenswrapper[4919]: E0109 14:24:50.809254 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c6d039b3282533307a22569aef4052869ee886a0e8e54d1db3b332ccadabb53\": container with ID starting with 9c6d039b3282533307a22569aef4052869ee886a0e8e54d1db3b332ccadabb53 not found: ID does not exist" containerID="9c6d039b3282533307a22569aef4052869ee886a0e8e54d1db3b332ccadabb53" Jan 09 14:24:50 crc kubenswrapper[4919]: I0109 14:24:50.809279 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c6d039b3282533307a22569aef4052869ee886a0e8e54d1db3b332ccadabb53"} err="failed to get container status \"9c6d039b3282533307a22569aef4052869ee886a0e8e54d1db3b332ccadabb53\": rpc error: code = NotFound desc = could not find container \"9c6d039b3282533307a22569aef4052869ee886a0e8e54d1db3b332ccadabb53\": container with ID starting with 9c6d039b3282533307a22569aef4052869ee886a0e8e54d1db3b332ccadabb53 not found: ID does not exist" Jan 09 14:24:50 crc kubenswrapper[4919]: I0109 14:24:50.809294 4919 scope.go:117] "RemoveContainer" containerID="11c0b82ee92d20959ea07743203c713ffae75a57b7a847508ac3ffe1d548cccc" Jan 09 14:24:50 crc kubenswrapper[4919]: E0109 14:24:50.809785 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11c0b82ee92d20959ea07743203c713ffae75a57b7a847508ac3ffe1d548cccc\": container with ID starting with 11c0b82ee92d20959ea07743203c713ffae75a57b7a847508ac3ffe1d548cccc not found: ID does not exist" containerID="11c0b82ee92d20959ea07743203c713ffae75a57b7a847508ac3ffe1d548cccc" Jan 09 14:24:50 crc kubenswrapper[4919]: I0109 14:24:50.809834 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11c0b82ee92d20959ea07743203c713ffae75a57b7a847508ac3ffe1d548cccc"} err="failed to get container status \"11c0b82ee92d20959ea07743203c713ffae75a57b7a847508ac3ffe1d548cccc\": rpc error: code = NotFound desc = could not find container \"11c0b82ee92d20959ea07743203c713ffae75a57b7a847508ac3ffe1d548cccc\": container with ID starting with 11c0b82ee92d20959ea07743203c713ffae75a57b7a847508ac3ffe1d548cccc not found: ID does not exist" Jan 09 14:24:52 crc kubenswrapper[4919]: I0109 14:24:52.762498 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c022834f-9ed0-41e6-9de7-63fffe64bdb8" path="/var/lib/kubelet/pods/c022834f-9ed0-41e6-9de7-63fffe64bdb8/volumes" Jan 09 14:24:54 crc kubenswrapper[4919]: I0109 14:24:54.752191 4919 scope.go:117] "RemoveContainer" containerID="c34e6f5119e50b5ceea7bd407492c9aa47955192912fb6c846845e2b4834c11f" Jan 09 14:24:55 crc kubenswrapper[4919]: I0109 14:24:55.748154 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" event={"ID":"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c","Type":"ContainerStarted","Data":"683030f1ec48c150eb8d5ec15c7c88597fd2694a114d1658e9cac33a1c47d1d5"} Jan 09 14:24:58 crc kubenswrapper[4919]: I0109 14:24:58.110965 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jvzdk" Jan 09 14:24:58 crc kubenswrapper[4919]: I0109 14:24:58.160630 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jvzdk"] Jan 09 14:24:58 crc kubenswrapper[4919]: I0109 14:24:58.771282 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jvzdk" podUID="3cfabd20-7d08-408f-821d-c99dafa575ac" containerName="registry-server" containerID="cri-o://9074cfab206fdc46198e7bed43fb7433bd3d64ebd20d75d3f3c8a837937899ed" gracePeriod=2 Jan 09 14:24:59 crc kubenswrapper[4919]: I0109 14:24:59.283188 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jvzdk" Jan 09 14:24:59 crc kubenswrapper[4919]: I0109 14:24:59.353315 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cfabd20-7d08-408f-821d-c99dafa575ac-catalog-content\") pod \"3cfabd20-7d08-408f-821d-c99dafa575ac\" (UID: \"3cfabd20-7d08-408f-821d-c99dafa575ac\") " Jan 09 14:24:59 crc kubenswrapper[4919]: I0109 14:24:59.353458 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5n6bc\" (UniqueName: \"kubernetes.io/projected/3cfabd20-7d08-408f-821d-c99dafa575ac-kube-api-access-5n6bc\") pod \"3cfabd20-7d08-408f-821d-c99dafa575ac\" (UID: \"3cfabd20-7d08-408f-821d-c99dafa575ac\") " Jan 09 14:24:59 crc kubenswrapper[4919]: I0109 14:24:59.353587 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cfabd20-7d08-408f-821d-c99dafa575ac-utilities\") pod \"3cfabd20-7d08-408f-821d-c99dafa575ac\" (UID: \"3cfabd20-7d08-408f-821d-c99dafa575ac\") " Jan 09 14:24:59 crc kubenswrapper[4919]: I0109 14:24:59.354785 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cfabd20-7d08-408f-821d-c99dafa575ac-utilities" (OuterVolumeSpecName: "utilities") pod "3cfabd20-7d08-408f-821d-c99dafa575ac" (UID: "3cfabd20-7d08-408f-821d-c99dafa575ac"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 14:24:59 crc kubenswrapper[4919]: I0109 14:24:59.359864 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cfabd20-7d08-408f-821d-c99dafa575ac-kube-api-access-5n6bc" (OuterVolumeSpecName: "kube-api-access-5n6bc") pod "3cfabd20-7d08-408f-821d-c99dafa575ac" (UID: "3cfabd20-7d08-408f-821d-c99dafa575ac"). InnerVolumeSpecName "kube-api-access-5n6bc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 14:24:59 crc kubenswrapper[4919]: I0109 14:24:59.408907 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cfabd20-7d08-408f-821d-c99dafa575ac-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3cfabd20-7d08-408f-821d-c99dafa575ac" (UID: "3cfabd20-7d08-408f-821d-c99dafa575ac"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 14:24:59 crc kubenswrapper[4919]: I0109 14:24:59.456076 4919 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cfabd20-7d08-408f-821d-c99dafa575ac-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 14:24:59 crc kubenswrapper[4919]: I0109 14:24:59.456111 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5n6bc\" (UniqueName: \"kubernetes.io/projected/3cfabd20-7d08-408f-821d-c99dafa575ac-kube-api-access-5n6bc\") on node \"crc\" DevicePath \"\"" Jan 09 14:24:59 crc kubenswrapper[4919]: I0109 14:24:59.456123 4919 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cfabd20-7d08-408f-821d-c99dafa575ac-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 14:24:59 crc kubenswrapper[4919]: I0109 14:24:59.783658 4919 generic.go:334] "Generic (PLEG): container finished" podID="3cfabd20-7d08-408f-821d-c99dafa575ac" containerID="9074cfab206fdc46198e7bed43fb7433bd3d64ebd20d75d3f3c8a837937899ed" exitCode=0 Jan 09 14:24:59 crc kubenswrapper[4919]: I0109 14:24:59.783841 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jvzdk" event={"ID":"3cfabd20-7d08-408f-821d-c99dafa575ac","Type":"ContainerDied","Data":"9074cfab206fdc46198e7bed43fb7433bd3d64ebd20d75d3f3c8a837937899ed"} Jan 09 14:24:59 crc kubenswrapper[4919]: I0109 14:24:59.783983 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jvzdk" event={"ID":"3cfabd20-7d08-408f-821d-c99dafa575ac","Type":"ContainerDied","Data":"0e87458fc4c87339f8d78221680063477ac355705bc22b1c25be482359fc080b"} Jan 09 14:24:59 crc kubenswrapper[4919]: I0109 14:24:59.783921 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jvzdk" Jan 09 14:24:59 crc kubenswrapper[4919]: I0109 14:24:59.784030 4919 scope.go:117] "RemoveContainer" containerID="9074cfab206fdc46198e7bed43fb7433bd3d64ebd20d75d3f3c8a837937899ed" Jan 09 14:24:59 crc kubenswrapper[4919]: I0109 14:24:59.816517 4919 scope.go:117] "RemoveContainer" containerID="6b69b89e465e41bdfb2ebf15a07117dfaa37aa0a06f2b1caaaa8f27c1cdf157b" Jan 09 14:24:59 crc kubenswrapper[4919]: I0109 14:24:59.824157 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jvzdk"] Jan 09 14:24:59 crc kubenswrapper[4919]: I0109 14:24:59.833746 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jvzdk"] Jan 09 14:24:59 crc kubenswrapper[4919]: I0109 14:24:59.843162 4919 scope.go:117] "RemoveContainer" containerID="486f7e21a6146880e21d0bc23151a831518f0f8b701363361dca57bf2ec7816d" Jan 09 14:24:59 crc kubenswrapper[4919]: I0109 14:24:59.893377 4919 scope.go:117] "RemoveContainer" containerID="9074cfab206fdc46198e7bed43fb7433bd3d64ebd20d75d3f3c8a837937899ed" Jan 09 14:24:59 crc kubenswrapper[4919]: E0109 14:24:59.893940 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9074cfab206fdc46198e7bed43fb7433bd3d64ebd20d75d3f3c8a837937899ed\": container with ID starting with 9074cfab206fdc46198e7bed43fb7433bd3d64ebd20d75d3f3c8a837937899ed not found: ID does not exist" containerID="9074cfab206fdc46198e7bed43fb7433bd3d64ebd20d75d3f3c8a837937899ed" Jan 09 14:24:59 crc kubenswrapper[4919]: I0109 14:24:59.893979 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9074cfab206fdc46198e7bed43fb7433bd3d64ebd20d75d3f3c8a837937899ed"} err="failed to get container status \"9074cfab206fdc46198e7bed43fb7433bd3d64ebd20d75d3f3c8a837937899ed\": rpc error: code = NotFound desc = could not find container \"9074cfab206fdc46198e7bed43fb7433bd3d64ebd20d75d3f3c8a837937899ed\": container with ID starting with 9074cfab206fdc46198e7bed43fb7433bd3d64ebd20d75d3f3c8a837937899ed not found: ID does not exist" Jan 09 14:24:59 crc kubenswrapper[4919]: I0109 14:24:59.894009 4919 scope.go:117] "RemoveContainer" containerID="6b69b89e465e41bdfb2ebf15a07117dfaa37aa0a06f2b1caaaa8f27c1cdf157b" Jan 09 14:24:59 crc kubenswrapper[4919]: E0109 14:24:59.895513 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b69b89e465e41bdfb2ebf15a07117dfaa37aa0a06f2b1caaaa8f27c1cdf157b\": container with ID starting with 6b69b89e465e41bdfb2ebf15a07117dfaa37aa0a06f2b1caaaa8f27c1cdf157b not found: ID does not exist" containerID="6b69b89e465e41bdfb2ebf15a07117dfaa37aa0a06f2b1caaaa8f27c1cdf157b" Jan 09 14:24:59 crc kubenswrapper[4919]: I0109 14:24:59.895546 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b69b89e465e41bdfb2ebf15a07117dfaa37aa0a06f2b1caaaa8f27c1cdf157b"} err="failed to get container status \"6b69b89e465e41bdfb2ebf15a07117dfaa37aa0a06f2b1caaaa8f27c1cdf157b\": rpc error: code = NotFound desc = could not find container \"6b69b89e465e41bdfb2ebf15a07117dfaa37aa0a06f2b1caaaa8f27c1cdf157b\": container with ID starting with 6b69b89e465e41bdfb2ebf15a07117dfaa37aa0a06f2b1caaaa8f27c1cdf157b not found: ID does not exist" Jan 09 14:24:59 crc kubenswrapper[4919]: I0109 14:24:59.895565 4919 scope.go:117] "RemoveContainer" 
containerID="486f7e21a6146880e21d0bc23151a831518f0f8b701363361dca57bf2ec7816d" Jan 09 14:24:59 crc kubenswrapper[4919]: E0109 14:24:59.895956 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"486f7e21a6146880e21d0bc23151a831518f0f8b701363361dca57bf2ec7816d\": container with ID starting with 486f7e21a6146880e21d0bc23151a831518f0f8b701363361dca57bf2ec7816d not found: ID does not exist" containerID="486f7e21a6146880e21d0bc23151a831518f0f8b701363361dca57bf2ec7816d" Jan 09 14:24:59 crc kubenswrapper[4919]: I0109 14:24:59.896012 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"486f7e21a6146880e21d0bc23151a831518f0f8b701363361dca57bf2ec7816d"} err="failed to get container status \"486f7e21a6146880e21d0bc23151a831518f0f8b701363361dca57bf2ec7816d\": rpc error: code = NotFound desc = could not find container \"486f7e21a6146880e21d0bc23151a831518f0f8b701363361dca57bf2ec7816d\": container with ID starting with 486f7e21a6146880e21d0bc23151a831518f0f8b701363361dca57bf2ec7816d not found: ID does not exist" Jan 09 14:25:00 crc kubenswrapper[4919]: I0109 14:25:00.761934 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cfabd20-7d08-408f-821d-c99dafa575ac" path="/var/lib/kubelet/pods/3cfabd20-7d08-408f-821d-c99dafa575ac/volumes" Jan 09 14:26:21 crc kubenswrapper[4919]: I0109 14:26:21.264018 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-j2z2b"] Jan 09 14:26:21 crc kubenswrapper[4919]: E0109 14:26:21.265694 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cfabd20-7d08-408f-821d-c99dafa575ac" containerName="extract-utilities" Jan 09 14:26:21 crc kubenswrapper[4919]: I0109 14:26:21.265721 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cfabd20-7d08-408f-821d-c99dafa575ac" containerName="extract-utilities" Jan 09 14:26:21 crc kubenswrapper[4919]: E0109 14:26:21.265748 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c022834f-9ed0-41e6-9de7-63fffe64bdb8" containerName="registry-server" Jan 09 14:26:21 crc kubenswrapper[4919]: I0109 14:26:21.265758 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="c022834f-9ed0-41e6-9de7-63fffe64bdb8" containerName="registry-server" Jan 09 14:26:21 crc kubenswrapper[4919]: E0109 14:26:21.265793 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c022834f-9ed0-41e6-9de7-63fffe64bdb8" containerName="extract-utilities" Jan 09 14:26:21 crc kubenswrapper[4919]: I0109 14:26:21.265802 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="c022834f-9ed0-41e6-9de7-63fffe64bdb8" containerName="extract-utilities" Jan 09 14:26:21 crc kubenswrapper[4919]: E0109 14:26:21.265817 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cfabd20-7d08-408f-821d-c99dafa575ac" containerName="extract-content" Jan 09 14:26:21 crc kubenswrapper[4919]: I0109 14:26:21.265825 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cfabd20-7d08-408f-821d-c99dafa575ac" containerName="extract-content" Jan 09 14:26:21 crc kubenswrapper[4919]: E0109 14:26:21.265855 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cfabd20-7d08-408f-821d-c99dafa575ac" containerName="registry-server" Jan 09 14:26:21 crc kubenswrapper[4919]: I0109 14:26:21.265867 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cfabd20-7d08-408f-821d-c99dafa575ac" containerName="registry-server" Jan 09 
14:26:21 crc kubenswrapper[4919]: E0109 14:26:21.265887 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c022834f-9ed0-41e6-9de7-63fffe64bdb8" containerName="extract-content" Jan 09 14:26:21 crc kubenswrapper[4919]: I0109 14:26:21.265895 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="c022834f-9ed0-41e6-9de7-63fffe64bdb8" containerName="extract-content" Jan 09 14:26:21 crc kubenswrapper[4919]: I0109 14:26:21.266136 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cfabd20-7d08-408f-821d-c99dafa575ac" containerName="registry-server" Jan 09 14:26:21 crc kubenswrapper[4919]: I0109 14:26:21.266164 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="c022834f-9ed0-41e6-9de7-63fffe64bdb8" containerName="registry-server" Jan 09 14:26:21 crc kubenswrapper[4919]: I0109 14:26:21.268168 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j2z2b" Jan 09 14:26:21 crc kubenswrapper[4919]: I0109 14:26:21.274555 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j2z2b"] Jan 09 14:26:21 crc kubenswrapper[4919]: I0109 14:26:21.343720 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d52c923f-728d-4428-994e-57409e6e7f83-catalog-content\") pod \"redhat-operators-j2z2b\" (UID: \"d52c923f-728d-4428-994e-57409e6e7f83\") " pod="openshift-marketplace/redhat-operators-j2z2b" Jan 09 14:26:21 crc kubenswrapper[4919]: I0109 14:26:21.344127 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p5ws\" (UniqueName: \"kubernetes.io/projected/d52c923f-728d-4428-994e-57409e6e7f83-kube-api-access-7p5ws\") pod \"redhat-operators-j2z2b\" (UID: \"d52c923f-728d-4428-994e-57409e6e7f83\") " pod="openshift-marketplace/redhat-operators-j2z2b" Jan 09 14:26:21 crc kubenswrapper[4919]: I0109 14:26:21.344320 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d52c923f-728d-4428-994e-57409e6e7f83-utilities\") pod \"redhat-operators-j2z2b\" (UID: \"d52c923f-728d-4428-994e-57409e6e7f83\") " pod="openshift-marketplace/redhat-operators-j2z2b" Jan 09 14:26:21 crc kubenswrapper[4919]: I0109 14:26:21.446542 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d52c923f-728d-4428-994e-57409e6e7f83-utilities\") pod \"redhat-operators-j2z2b\" (UID: \"d52c923f-728d-4428-994e-57409e6e7f83\") " pod="openshift-marketplace/redhat-operators-j2z2b" Jan 09 14:26:21 crc kubenswrapper[4919]: I0109 14:26:21.446676 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d52c923f-728d-4428-994e-57409e6e7f83-catalog-content\") pod \"redhat-operators-j2z2b\" (UID: \"d52c923f-728d-4428-994e-57409e6e7f83\") " pod="openshift-marketplace/redhat-operators-j2z2b" Jan 09 14:26:21 crc kubenswrapper[4919]: I0109 14:26:21.446809 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p5ws\" (UniqueName: \"kubernetes.io/projected/d52c923f-728d-4428-994e-57409e6e7f83-kube-api-access-7p5ws\") pod \"redhat-operators-j2z2b\" (UID: \"d52c923f-728d-4428-994e-57409e6e7f83\") " 
pod="openshift-marketplace/redhat-operators-j2z2b" Jan 09 14:26:21 crc kubenswrapper[4919]: I0109 14:26:21.447276 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d52c923f-728d-4428-994e-57409e6e7f83-utilities\") pod \"redhat-operators-j2z2b\" (UID: \"d52c923f-728d-4428-994e-57409e6e7f83\") " pod="openshift-marketplace/redhat-operators-j2z2b" Jan 09 14:26:21 crc kubenswrapper[4919]: I0109 14:26:21.447388 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d52c923f-728d-4428-994e-57409e6e7f83-catalog-content\") pod \"redhat-operators-j2z2b\" (UID: \"d52c923f-728d-4428-994e-57409e6e7f83\") " pod="openshift-marketplace/redhat-operators-j2z2b" Jan 09 14:26:21 crc kubenswrapper[4919]: I0109 14:26:21.487468 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p5ws\" (UniqueName: \"kubernetes.io/projected/d52c923f-728d-4428-994e-57409e6e7f83-kube-api-access-7p5ws\") pod \"redhat-operators-j2z2b\" (UID: \"d52c923f-728d-4428-994e-57409e6e7f83\") " pod="openshift-marketplace/redhat-operators-j2z2b" Jan 09 14:26:21 crc kubenswrapper[4919]: I0109 14:26:21.619528 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j2z2b" Jan 09 14:26:22 crc kubenswrapper[4919]: I0109 14:26:22.115641 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j2z2b"] Jan 09 14:26:22 crc kubenswrapper[4919]: I0109 14:26:22.516641 4919 generic.go:334] "Generic (PLEG): container finished" podID="d52c923f-728d-4428-994e-57409e6e7f83" containerID="edd628981592ed06a76280b095ca37f51cb2a39f004e16b473d21ff7a15423cf" exitCode=0 Jan 09 14:26:22 crc kubenswrapper[4919]: I0109 14:26:22.516721 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2z2b" event={"ID":"d52c923f-728d-4428-994e-57409e6e7f83","Type":"ContainerDied","Data":"edd628981592ed06a76280b095ca37f51cb2a39f004e16b473d21ff7a15423cf"} Jan 09 14:26:22 crc kubenswrapper[4919]: I0109 14:26:22.516959 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2z2b" event={"ID":"d52c923f-728d-4428-994e-57409e6e7f83","Type":"ContainerStarted","Data":"c968753d0286a49ef4b1f72514a964e0cca19ab97c6f6a2a102a5b1e1d3b0454"} Jan 09 14:26:23 crc kubenswrapper[4919]: I0109 14:26:23.527517 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2z2b" event={"ID":"d52c923f-728d-4428-994e-57409e6e7f83","Type":"ContainerStarted","Data":"e312e32374e9778700d78e6fca0c1b0a65c04f26adf6689ad068ceb3f34d3bca"} Jan 09 14:26:25 crc kubenswrapper[4919]: I0109 14:26:25.553528 4919 generic.go:334] "Generic (PLEG): container finished" podID="d52c923f-728d-4428-994e-57409e6e7f83" containerID="e312e32374e9778700d78e6fca0c1b0a65c04f26adf6689ad068ceb3f34d3bca" exitCode=0 Jan 09 14:26:25 crc kubenswrapper[4919]: I0109 14:26:25.553624 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2z2b" event={"ID":"d52c923f-728d-4428-994e-57409e6e7f83","Type":"ContainerDied","Data":"e312e32374e9778700d78e6fca0c1b0a65c04f26adf6689ad068ceb3f34d3bca"} Jan 09 14:26:28 crc kubenswrapper[4919]: I0109 14:26:28.582228 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2z2b" 
event={"ID":"d52c923f-728d-4428-994e-57409e6e7f83","Type":"ContainerStarted","Data":"23f3f9d7553363d8640bec3e7474a7e58844e72c2c60ea0d164af9b28fdcf1d6"} Jan 09 14:26:28 crc kubenswrapper[4919]: I0109 14:26:28.604572 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-j2z2b" podStartSLOduration=2.586616722 podStartE2EDuration="7.604549045s" podCreationTimestamp="2026-01-09 14:26:21 +0000 UTC" firstStartedPulling="2026-01-09 14:26:22.518917363 +0000 UTC m=+3362.066756813" lastFinishedPulling="2026-01-09 14:26:27.536849686 +0000 UTC m=+3367.084689136" observedRunningTime="2026-01-09 14:26:28.603630292 +0000 UTC m=+3368.151469752" watchObservedRunningTime="2026-01-09 14:26:28.604549045 +0000 UTC m=+3368.152388495" Jan 09 14:26:31 crc kubenswrapper[4919]: I0109 14:26:31.620123 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-j2z2b" Jan 09 14:26:31 crc kubenswrapper[4919]: I0109 14:26:31.620610 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-j2z2b" Jan 09 14:26:32 crc kubenswrapper[4919]: I0109 14:26:32.669526 4919 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-j2z2b" podUID="d52c923f-728d-4428-994e-57409e6e7f83" containerName="registry-server" probeResult="failure" output=< Jan 09 14:26:32 crc kubenswrapper[4919]: timeout: failed to connect service ":50051" within 1s Jan 09 14:26:32 crc kubenswrapper[4919]: > Jan 09 14:26:41 crc kubenswrapper[4919]: I0109 14:26:41.667869 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-j2z2b" Jan 09 14:26:41 crc kubenswrapper[4919]: I0109 14:26:41.717740 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-j2z2b" Jan 09 14:26:41 crc kubenswrapper[4919]: I0109 14:26:41.905457 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j2z2b"] Jan 09 14:26:42 crc kubenswrapper[4919]: I0109 14:26:42.691309 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-j2z2b" podUID="d52c923f-728d-4428-994e-57409e6e7f83" containerName="registry-server" containerID="cri-o://23f3f9d7553363d8640bec3e7474a7e58844e72c2c60ea0d164af9b28fdcf1d6" gracePeriod=2 Jan 09 14:26:43 crc kubenswrapper[4919]: I0109 14:26:43.283829 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j2z2b"
Jan 09 14:26:43 crc kubenswrapper[4919]: I0109 14:26:43.419476 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d52c923f-728d-4428-994e-57409e6e7f83-utilities\") pod \"d52c923f-728d-4428-994e-57409e6e7f83\" (UID: \"d52c923f-728d-4428-994e-57409e6e7f83\") "
Jan 09 14:26:43 crc kubenswrapper[4919]: I0109 14:26:43.419734 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d52c923f-728d-4428-994e-57409e6e7f83-catalog-content\") pod \"d52c923f-728d-4428-994e-57409e6e7f83\" (UID: \"d52c923f-728d-4428-994e-57409e6e7f83\") "
Jan 09 14:26:43 crc kubenswrapper[4919]: I0109 14:26:43.419875 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7p5ws\" (UniqueName: \"kubernetes.io/projected/d52c923f-728d-4428-994e-57409e6e7f83-kube-api-access-7p5ws\") pod \"d52c923f-728d-4428-994e-57409e6e7f83\" (UID: \"d52c923f-728d-4428-994e-57409e6e7f83\") "
Jan 09 14:26:43 crc kubenswrapper[4919]: I0109 14:26:43.420767 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d52c923f-728d-4428-994e-57409e6e7f83-utilities" (OuterVolumeSpecName: "utilities") pod "d52c923f-728d-4428-994e-57409e6e7f83" (UID: "d52c923f-728d-4428-994e-57409e6e7f83"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 14:26:43 crc kubenswrapper[4919]: I0109 14:26:43.432431 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d52c923f-728d-4428-994e-57409e6e7f83-kube-api-access-7p5ws" (OuterVolumeSpecName: "kube-api-access-7p5ws") pod "d52c923f-728d-4428-994e-57409e6e7f83" (UID: "d52c923f-728d-4428-994e-57409e6e7f83"). InnerVolumeSpecName "kube-api-access-7p5ws". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 14:26:43 crc kubenswrapper[4919]: I0109 14:26:43.522322 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7p5ws\" (UniqueName: \"kubernetes.io/projected/d52c923f-728d-4428-994e-57409e6e7f83-kube-api-access-7p5ws\") on node \"crc\" DevicePath \"\""
Jan 09 14:26:43 crc kubenswrapper[4919]: I0109 14:26:43.522609 4919 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d52c923f-728d-4428-994e-57409e6e7f83-utilities\") on node \"crc\" DevicePath \"\""
Jan 09 14:26:43 crc kubenswrapper[4919]: I0109 14:26:43.534801 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d52c923f-728d-4428-994e-57409e6e7f83-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d52c923f-728d-4428-994e-57409e6e7f83" (UID: "d52c923f-728d-4428-994e-57409e6e7f83"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 14:26:43 crc kubenswrapper[4919]: I0109 14:26:43.624155 4919 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d52c923f-728d-4428-994e-57409e6e7f83-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 09 14:26:43 crc kubenswrapper[4919]: I0109 14:26:43.700147 4919 generic.go:334] "Generic (PLEG): container finished" podID="d52c923f-728d-4428-994e-57409e6e7f83" containerID="23f3f9d7553363d8640bec3e7474a7e58844e72c2c60ea0d164af9b28fdcf1d6" exitCode=0
Jan 09 14:26:43 crc kubenswrapper[4919]: I0109 14:26:43.700224 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j2z2b"
Jan 09 14:26:43 crc kubenswrapper[4919]: I0109 14:26:43.700237 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2z2b" event={"ID":"d52c923f-728d-4428-994e-57409e6e7f83","Type":"ContainerDied","Data":"23f3f9d7553363d8640bec3e7474a7e58844e72c2c60ea0d164af9b28fdcf1d6"}
Jan 09 14:26:43 crc kubenswrapper[4919]: I0109 14:26:43.701426 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2z2b" event={"ID":"d52c923f-728d-4428-994e-57409e6e7f83","Type":"ContainerDied","Data":"c968753d0286a49ef4b1f72514a964e0cca19ab97c6f6a2a102a5b1e1d3b0454"}
Jan 09 14:26:43 crc kubenswrapper[4919]: I0109 14:26:43.701453 4919 scope.go:117] "RemoveContainer" containerID="23f3f9d7553363d8640bec3e7474a7e58844e72c2c60ea0d164af9b28fdcf1d6"
Jan 09 14:26:43 crc kubenswrapper[4919]: I0109 14:26:43.724063 4919 scope.go:117] "RemoveContainer" containerID="e312e32374e9778700d78e6fca0c1b0a65c04f26adf6689ad068ceb3f34d3bca"
Jan 09 14:26:43 crc kubenswrapper[4919]: I0109 14:26:43.734515 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j2z2b"]
Jan 09 14:26:43 crc kubenswrapper[4919]: I0109 14:26:43.745946 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-j2z2b"]
Jan 09 14:26:43 crc kubenswrapper[4919]: I0109 14:26:43.767617 4919 scope.go:117] "RemoveContainer" containerID="edd628981592ed06a76280b095ca37f51cb2a39f004e16b473d21ff7a15423cf"
Jan 09 14:26:43 crc kubenswrapper[4919]: I0109 14:26:43.801589 4919 scope.go:117] "RemoveContainer" containerID="23f3f9d7553363d8640bec3e7474a7e58844e72c2c60ea0d164af9b28fdcf1d6"
Jan 09 14:26:43 crc kubenswrapper[4919]: E0109 14:26:43.802140 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23f3f9d7553363d8640bec3e7474a7e58844e72c2c60ea0d164af9b28fdcf1d6\": container with ID starting with 23f3f9d7553363d8640bec3e7474a7e58844e72c2c60ea0d164af9b28fdcf1d6 not found: ID does not exist" containerID="23f3f9d7553363d8640bec3e7474a7e58844e72c2c60ea0d164af9b28fdcf1d6"
Jan 09 14:26:43 crc kubenswrapper[4919]: I0109 14:26:43.802183 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23f3f9d7553363d8640bec3e7474a7e58844e72c2c60ea0d164af9b28fdcf1d6"} err="failed to get container status \"23f3f9d7553363d8640bec3e7474a7e58844e72c2c60ea0d164af9b28fdcf1d6\": rpc error: code = NotFound desc = could not find container \"23f3f9d7553363d8640bec3e7474a7e58844e72c2c60ea0d164af9b28fdcf1d6\": container with ID starting with 23f3f9d7553363d8640bec3e7474a7e58844e72c2c60ea0d164af9b28fdcf1d6 not found: ID does not exist"
Jan 09 14:26:43 crc kubenswrapper[4919]: I0109 14:26:43.802227 4919 scope.go:117] "RemoveContainer" containerID="e312e32374e9778700d78e6fca0c1b0a65c04f26adf6689ad068ceb3f34d3bca"
Jan 09 14:26:43 crc kubenswrapper[4919]: E0109 14:26:43.802572 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e312e32374e9778700d78e6fca0c1b0a65c04f26adf6689ad068ceb3f34d3bca\": container with ID starting with e312e32374e9778700d78e6fca0c1b0a65c04f26adf6689ad068ceb3f34d3bca not found: ID does not exist" containerID="e312e32374e9778700d78e6fca0c1b0a65c04f26adf6689ad068ceb3f34d3bca"
Jan 09 14:26:43 crc kubenswrapper[4919]: I0109 14:26:43.802607 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e312e32374e9778700d78e6fca0c1b0a65c04f26adf6689ad068ceb3f34d3bca"} err="failed to get container status \"e312e32374e9778700d78e6fca0c1b0a65c04f26adf6689ad068ceb3f34d3bca\": rpc error: code = NotFound desc = could not find container \"e312e32374e9778700d78e6fca0c1b0a65c04f26adf6689ad068ceb3f34d3bca\": container with ID starting with e312e32374e9778700d78e6fca0c1b0a65c04f26adf6689ad068ceb3f34d3bca not found: ID does not exist"
Jan 09 14:26:43 crc kubenswrapper[4919]: I0109 14:26:43.802627 4919 scope.go:117] "RemoveContainer" containerID="edd628981592ed06a76280b095ca37f51cb2a39f004e16b473d21ff7a15423cf"
Jan 09 14:26:43 crc kubenswrapper[4919]: E0109 14:26:43.802938 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edd628981592ed06a76280b095ca37f51cb2a39f004e16b473d21ff7a15423cf\": container with ID starting with edd628981592ed06a76280b095ca37f51cb2a39f004e16b473d21ff7a15423cf not found: ID does not exist" containerID="edd628981592ed06a76280b095ca37f51cb2a39f004e16b473d21ff7a15423cf"
Jan 09 14:26:43 crc kubenswrapper[4919]: I0109 14:26:43.802962 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edd628981592ed06a76280b095ca37f51cb2a39f004e16b473d21ff7a15423cf"} err="failed to get container status \"edd628981592ed06a76280b095ca37f51cb2a39f004e16b473d21ff7a15423cf\": rpc error: code = NotFound desc = could not find container \"edd628981592ed06a76280b095ca37f51cb2a39f004e16b473d21ff7a15423cf\": container with ID starting with edd628981592ed06a76280b095ca37f51cb2a39f004e16b473d21ff7a15423cf not found: ID does not exist"
Jan 09 14:26:44 crc kubenswrapper[4919]: I0109 14:26:44.762607 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d52c923f-728d-4428-994e-57409e6e7f83" path="/var/lib/kubelet/pods/d52c923f-728d-4428-994e-57409e6e7f83/volumes"
Jan 09 14:27:21 crc kubenswrapper[4919]: I0109 14:27:21.246887 4919 patch_prober.go:28] interesting pod/machine-config-daemon-9m5lv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 09 14:27:21 crc kubenswrapper[4919]: I0109 14:27:21.247366 4919 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 09 14:27:51 crc kubenswrapper[4919]: I0109 14:27:51.247128 4919 patch_prober.go:28] interesting pod/machine-config-daemon-9m5lv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 09 14:27:51 crc kubenswrapper[4919]: I0109 14:27:51.247743 4919 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 09 14:28:21 crc kubenswrapper[4919]: I0109 14:28:21.246737 4919 patch_prober.go:28] interesting pod/machine-config-daemon-9m5lv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 09 14:28:21 crc kubenswrapper[4919]: I0109 14:28:21.247271 4919 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 09 14:28:21 crc kubenswrapper[4919]: I0109 14:28:21.247324 4919 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv"
Jan 09 14:28:21 crc kubenswrapper[4919]: I0109 14:28:21.248140 4919 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"683030f1ec48c150eb8d5ec15c7c88597fd2694a114d1658e9cac33a1c47d1d5"} pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 09 14:28:21 crc kubenswrapper[4919]: I0109 14:28:21.248193 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" containerID="cri-o://683030f1ec48c150eb8d5ec15c7c88597fd2694a114d1658e9cac33a1c47d1d5" gracePeriod=600
Jan 09 14:28:21 crc kubenswrapper[4919]: I0109 14:28:21.573400 4919 generic.go:334] "Generic (PLEG): container finished" podID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerID="683030f1ec48c150eb8d5ec15c7c88597fd2694a114d1658e9cac33a1c47d1d5" exitCode=0
Jan 09 14:28:21 crc kubenswrapper[4919]: I0109 14:28:21.573480 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" event={"ID":"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c","Type":"ContainerDied","Data":"683030f1ec48c150eb8d5ec15c7c88597fd2694a114d1658e9cac33a1c47d1d5"}
Jan 09 14:28:21 crc kubenswrapper[4919]: I0109 14:28:21.573733 4919 scope.go:117] "RemoveContainer" containerID="c34e6f5119e50b5ceea7bd407492c9aa47955192912fb6c846845e2b4834c11f"
Jan 09 14:28:22 crc kubenswrapper[4919]: I0109 14:28:22.585643 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" event={"ID":"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c","Type":"ContainerStarted","Data":"b0270c7592076cfaccb37df0ef0dacbcd42484b5f4acbc6f9946e5c994321a3e"}
Jan 09 14:29:24 crc kubenswrapper[4919]: I0109 14:29:24.710687 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5xmmp"]
Jan 09 14:29:24 crc kubenswrapper[4919]: E0109 14:29:24.711698 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d52c923f-728d-4428-994e-57409e6e7f83" containerName="extract-content"
Jan 09 14:29:24 crc kubenswrapper[4919]: I0109 14:29:24.711714 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="d52c923f-728d-4428-994e-57409e6e7f83" containerName="extract-content"
Jan 09 14:29:24 crc kubenswrapper[4919]: E0109 14:29:24.711751 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d52c923f-728d-4428-994e-57409e6e7f83" containerName="registry-server"
Jan 09 14:29:24 crc kubenswrapper[4919]: I0109 14:29:24.711759 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="d52c923f-728d-4428-994e-57409e6e7f83" containerName="registry-server"
Jan 09 14:29:24 crc kubenswrapper[4919]: E0109 14:29:24.711797 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d52c923f-728d-4428-994e-57409e6e7f83" containerName="extract-utilities"
Jan 09 14:29:24 crc kubenswrapper[4919]: I0109 14:29:24.711806 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="d52c923f-728d-4428-994e-57409e6e7f83" containerName="extract-utilities"
Jan 09 14:29:24 crc kubenswrapper[4919]: I0109 14:29:24.712037 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="d52c923f-728d-4428-994e-57409e6e7f83" containerName="registry-server"
Jan 09 14:29:24 crc kubenswrapper[4919]: I0109 14:29:24.713814 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5xmmp"
Jan 09 14:29:24 crc kubenswrapper[4919]: I0109 14:29:24.721305 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5xmmp"]
Jan 09 14:29:24 crc kubenswrapper[4919]: I0109 14:29:24.818487 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/452629fc-d914-4681-8706-b3f021ef25c7-catalog-content\") pod \"redhat-marketplace-5xmmp\" (UID: \"452629fc-d914-4681-8706-b3f021ef25c7\") " pod="openshift-marketplace/redhat-marketplace-5xmmp"
Jan 09 14:29:24 crc kubenswrapper[4919]: I0109 14:29:24.818569 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/452629fc-d914-4681-8706-b3f021ef25c7-utilities\") pod \"redhat-marketplace-5xmmp\" (UID: \"452629fc-d914-4681-8706-b3f021ef25c7\") " pod="openshift-marketplace/redhat-marketplace-5xmmp"
Jan 09 14:29:24 crc kubenswrapper[4919]: I0109 14:29:24.819558 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfpbz\" (UniqueName: \"kubernetes.io/projected/452629fc-d914-4681-8706-b3f021ef25c7-kube-api-access-nfpbz\") pod \"redhat-marketplace-5xmmp\" (UID: \"452629fc-d914-4681-8706-b3f021ef25c7\") " pod="openshift-marketplace/redhat-marketplace-5xmmp"
Jan 09 14:29:24 crc kubenswrapper[4919]: I0109 14:29:24.921563 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfpbz\" (UniqueName: \"kubernetes.io/projected/452629fc-d914-4681-8706-b3f021ef25c7-kube-api-access-nfpbz\") pod \"redhat-marketplace-5xmmp\" (UID: \"452629fc-d914-4681-8706-b3f021ef25c7\") " pod="openshift-marketplace/redhat-marketplace-5xmmp"
Jan 09 14:29:24 crc kubenswrapper[4919]: I0109 14:29:24.921636 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/452629fc-d914-4681-8706-b3f021ef25c7-catalog-content\") pod \"redhat-marketplace-5xmmp\" (UID: \"452629fc-d914-4681-8706-b3f021ef25c7\") " pod="openshift-marketplace/redhat-marketplace-5xmmp"
Jan 09 14:29:24 crc kubenswrapper[4919]: I0109 14:29:24.921680 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/452629fc-d914-4681-8706-b3f021ef25c7-utilities\") pod \"redhat-marketplace-5xmmp\" (UID: \"452629fc-d914-4681-8706-b3f021ef25c7\") " pod="openshift-marketplace/redhat-marketplace-5xmmp"
Jan 09 14:29:24 crc kubenswrapper[4919]: I0109 14:29:24.922358 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/452629fc-d914-4681-8706-b3f021ef25c7-catalog-content\") pod \"redhat-marketplace-5xmmp\" (UID: \"452629fc-d914-4681-8706-b3f021ef25c7\") " pod="openshift-marketplace/redhat-marketplace-5xmmp"
Jan 09 14:29:24 crc kubenswrapper[4919]: I0109 14:29:24.922474 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/452629fc-d914-4681-8706-b3f021ef25c7-utilities\") pod \"redhat-marketplace-5xmmp\" (UID: \"452629fc-d914-4681-8706-b3f021ef25c7\") " pod="openshift-marketplace/redhat-marketplace-5xmmp"
Jan 09 14:29:24 crc kubenswrapper[4919]: I0109 14:29:24.942343 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfpbz\" (UniqueName: \"kubernetes.io/projected/452629fc-d914-4681-8706-b3f021ef25c7-kube-api-access-nfpbz\") pod \"redhat-marketplace-5xmmp\" (UID: \"452629fc-d914-4681-8706-b3f021ef25c7\") " pod="openshift-marketplace/redhat-marketplace-5xmmp"
Jan 09 14:29:25 crc kubenswrapper[4919]: I0109 14:29:25.031645 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5xmmp"
Jan 09 14:29:25 crc kubenswrapper[4919]: I0109 14:29:25.492003 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5xmmp"]
Jan 09 14:29:26 crc kubenswrapper[4919]: I0109 14:29:26.181905 4919 generic.go:334] "Generic (PLEG): container finished" podID="452629fc-d914-4681-8706-b3f021ef25c7" containerID="7ba9b11c4c2dea686d05d67e87595d17002eed1dd644e88528da273b57961b60" exitCode=0
Jan 09 14:29:26 crc kubenswrapper[4919]: I0109 14:29:26.182011 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5xmmp" event={"ID":"452629fc-d914-4681-8706-b3f021ef25c7","Type":"ContainerDied","Data":"7ba9b11c4c2dea686d05d67e87595d17002eed1dd644e88528da273b57961b60"}
Jan 09 14:29:26 crc kubenswrapper[4919]: I0109 14:29:26.182272 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5xmmp" event={"ID":"452629fc-d914-4681-8706-b3f021ef25c7","Type":"ContainerStarted","Data":"d64ec581ceb702efa0e7410e7baa3d937b60843569a598a6d64e939797fa5b66"}
Jan 09 14:29:27 crc kubenswrapper[4919]: I0109 14:29:27.193242 4919 generic.go:334] "Generic (PLEG): container finished" podID="452629fc-d914-4681-8706-b3f021ef25c7" containerID="e81a1fc2db6156a7bfcc7545afc945113cc9e00fe707f93f3f732d18a6d4716e" exitCode=0
Jan 09 14:29:27 crc kubenswrapper[4919]: I0109 14:29:27.193415 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5xmmp" event={"ID":"452629fc-d914-4681-8706-b3f021ef25c7","Type":"ContainerDied","Data":"e81a1fc2db6156a7bfcc7545afc945113cc9e00fe707f93f3f732d18a6d4716e"}
Jan 09 14:29:28 crc kubenswrapper[4919]: I0109 14:29:28.208681 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5xmmp" event={"ID":"452629fc-d914-4681-8706-b3f021ef25c7","Type":"ContainerStarted","Data":"e6a71d9b717722ea96ca26645690c0b4c88211ceb759407dbfbc88151cd3e603"}
Jan 09 14:29:28 crc kubenswrapper[4919]: I0109 14:29:28.235151 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5xmmp" podStartSLOduration=2.733534055 podStartE2EDuration="4.235107479s" podCreationTimestamp="2026-01-09 14:29:24 +0000 UTC" firstStartedPulling="2026-01-09 14:29:26.184660817 +0000 UTC m=+3545.732500267" lastFinishedPulling="2026-01-09 14:29:27.686234241 +0000 UTC m=+3547.234073691" observedRunningTime="2026-01-09 14:29:28.228004841 +0000 UTC m=+3547.775844301" watchObservedRunningTime="2026-01-09 14:29:28.235107479 +0000 UTC m=+3547.782946929"
Jan 09 14:29:35 crc kubenswrapper[4919]: I0109 14:29:35.032014 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5xmmp"
Jan 09 14:29:35 crc kubenswrapper[4919]: I0109 14:29:35.032648 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5xmmp"
Jan 09 14:29:35 crc kubenswrapper[4919]: I0109 14:29:35.081049 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5xmmp"
Jan 09 14:29:35 crc kubenswrapper[4919]: I0109 14:29:35.305783 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5xmmp"
Jan 09 14:29:35 crc kubenswrapper[4919]: I0109 14:29:35.347846 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5xmmp"]
Jan 09 14:29:37 crc kubenswrapper[4919]: I0109 14:29:37.282367 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5xmmp" podUID="452629fc-d914-4681-8706-b3f021ef25c7" containerName="registry-server" containerID="cri-o://e6a71d9b717722ea96ca26645690c0b4c88211ceb759407dbfbc88151cd3e603" gracePeriod=2
Jan 09 14:29:38 crc kubenswrapper[4919]: I0109 14:29:38.266794 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5xmmp"
Jan 09 14:29:38 crc kubenswrapper[4919]: I0109 14:29:38.297027 4919 generic.go:334] "Generic (PLEG): container finished" podID="452629fc-d914-4681-8706-b3f021ef25c7" containerID="e6a71d9b717722ea96ca26645690c0b4c88211ceb759407dbfbc88151cd3e603" exitCode=0
Jan 09 14:29:38 crc kubenswrapper[4919]: I0109 14:29:38.297087 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5xmmp" event={"ID":"452629fc-d914-4681-8706-b3f021ef25c7","Type":"ContainerDied","Data":"e6a71d9b717722ea96ca26645690c0b4c88211ceb759407dbfbc88151cd3e603"}
Jan 09 14:29:38 crc kubenswrapper[4919]: I0109 14:29:38.297112 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5xmmp"
Jan 09 14:29:38 crc kubenswrapper[4919]: I0109 14:29:38.297122 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5xmmp" event={"ID":"452629fc-d914-4681-8706-b3f021ef25c7","Type":"ContainerDied","Data":"d64ec581ceb702efa0e7410e7baa3d937b60843569a598a6d64e939797fa5b66"}
Jan 09 14:29:38 crc kubenswrapper[4919]: I0109 14:29:38.297142 4919 scope.go:117] "RemoveContainer" containerID="e6a71d9b717722ea96ca26645690c0b4c88211ceb759407dbfbc88151cd3e603"
Jan 09 14:29:38 crc kubenswrapper[4919]: I0109 14:29:38.320297 4919 scope.go:117] "RemoveContainer" containerID="e81a1fc2db6156a7bfcc7545afc945113cc9e00fe707f93f3f732d18a6d4716e"
Jan 09 14:29:38 crc kubenswrapper[4919]: I0109 14:29:38.353480 4919 scope.go:117] "RemoveContainer" containerID="7ba9b11c4c2dea686d05d67e87595d17002eed1dd644e88528da273b57961b60"
Jan 09 14:29:38 crc kubenswrapper[4919]: I0109 14:29:38.384582 4919 scope.go:117] "RemoveContainer" containerID="e6a71d9b717722ea96ca26645690c0b4c88211ceb759407dbfbc88151cd3e603"
Jan 09 14:29:38 crc kubenswrapper[4919]: E0109 14:29:38.385116 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6a71d9b717722ea96ca26645690c0b4c88211ceb759407dbfbc88151cd3e603\": container with ID starting with e6a71d9b717722ea96ca26645690c0b4c88211ceb759407dbfbc88151cd3e603 not found: ID does not exist" containerID="e6a71d9b717722ea96ca26645690c0b4c88211ceb759407dbfbc88151cd3e603"
Jan 09 14:29:38 crc kubenswrapper[4919]: I0109 14:29:38.385176 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6a71d9b717722ea96ca26645690c0b4c88211ceb759407dbfbc88151cd3e603"} err="failed to get container status \"e6a71d9b717722ea96ca26645690c0b4c88211ceb759407dbfbc88151cd3e603\": rpc error: code = NotFound desc = could not find container \"e6a71d9b717722ea96ca26645690c0b4c88211ceb759407dbfbc88151cd3e603\": container with ID starting with e6a71d9b717722ea96ca26645690c0b4c88211ceb759407dbfbc88151cd3e603 not found: ID does not exist"
Jan 09 14:29:38 crc kubenswrapper[4919]: I0109 14:29:38.385225 4919 scope.go:117] "RemoveContainer" containerID="e81a1fc2db6156a7bfcc7545afc945113cc9e00fe707f93f3f732d18a6d4716e"
Jan 09 14:29:38 crc kubenswrapper[4919]: E0109 14:29:38.385656 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e81a1fc2db6156a7bfcc7545afc945113cc9e00fe707f93f3f732d18a6d4716e\": container with ID starting with e81a1fc2db6156a7bfcc7545afc945113cc9e00fe707f93f3f732d18a6d4716e not found: ID does not exist" containerID="e81a1fc2db6156a7bfcc7545afc945113cc9e00fe707f93f3f732d18a6d4716e"
Jan 09 14:29:38 crc kubenswrapper[4919]: I0109 14:29:38.385691 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e81a1fc2db6156a7bfcc7545afc945113cc9e00fe707f93f3f732d18a6d4716e"} err="failed to get container status \"e81a1fc2db6156a7bfcc7545afc945113cc9e00fe707f93f3f732d18a6d4716e\": rpc error: code = NotFound desc = could not find container \"e81a1fc2db6156a7bfcc7545afc945113cc9e00fe707f93f3f732d18a6d4716e\": container with ID starting with e81a1fc2db6156a7bfcc7545afc945113cc9e00fe707f93f3f732d18a6d4716e not found: ID does not exist"
Jan 09 14:29:38 crc kubenswrapper[4919]: I0109 14:29:38.385716 4919 scope.go:117] "RemoveContainer" containerID="7ba9b11c4c2dea686d05d67e87595d17002eed1dd644e88528da273b57961b60"
Jan 09 14:29:38 crc kubenswrapper[4919]: E0109 14:29:38.385944 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ba9b11c4c2dea686d05d67e87595d17002eed1dd644e88528da273b57961b60\": container with ID starting with 7ba9b11c4c2dea686d05d67e87595d17002eed1dd644e88528da273b57961b60 not found: ID does not exist" containerID="7ba9b11c4c2dea686d05d67e87595d17002eed1dd644e88528da273b57961b60"
Jan 09 14:29:38 crc kubenswrapper[4919]: I0109 14:29:38.385966 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ba9b11c4c2dea686d05d67e87595d17002eed1dd644e88528da273b57961b60"} err="failed to get container status \"7ba9b11c4c2dea686d05d67e87595d17002eed1dd644e88528da273b57961b60\": rpc error: code = NotFound desc = could not find container \"7ba9b11c4c2dea686d05d67e87595d17002eed1dd644e88528da273b57961b60\": container with ID starting with 7ba9b11c4c2dea686d05d67e87595d17002eed1dd644e88528da273b57961b60 not found: ID does not exist"
Jan 09 14:29:38 crc kubenswrapper[4919]: I0109 14:29:38.455008 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/452629fc-d914-4681-8706-b3f021ef25c7-catalog-content\") pod \"452629fc-d914-4681-8706-b3f021ef25c7\" (UID: \"452629fc-d914-4681-8706-b3f021ef25c7\") "
Jan 09 14:29:38 crc kubenswrapper[4919]: I0109 14:29:38.455051 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/452629fc-d914-4681-8706-b3f021ef25c7-utilities\") pod \"452629fc-d914-4681-8706-b3f021ef25c7\" (UID: \"452629fc-d914-4681-8706-b3f021ef25c7\") "
Jan 09 14:29:38 crc kubenswrapper[4919]: I0109 14:29:38.455192 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfpbz\" (UniqueName: \"kubernetes.io/projected/452629fc-d914-4681-8706-b3f021ef25c7-kube-api-access-nfpbz\") pod \"452629fc-d914-4681-8706-b3f021ef25c7\" (UID: \"452629fc-d914-4681-8706-b3f021ef25c7\") "
Jan 09 14:29:38 crc kubenswrapper[4919]: I0109 14:29:38.457096 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/452629fc-d914-4681-8706-b3f021ef25c7-utilities" (OuterVolumeSpecName: "utilities") pod "452629fc-d914-4681-8706-b3f021ef25c7" (UID: "452629fc-d914-4681-8706-b3f021ef25c7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 14:29:38 crc kubenswrapper[4919]: I0109 14:29:38.461075 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/452629fc-d914-4681-8706-b3f021ef25c7-kube-api-access-nfpbz" (OuterVolumeSpecName: "kube-api-access-nfpbz") pod "452629fc-d914-4681-8706-b3f021ef25c7" (UID: "452629fc-d914-4681-8706-b3f021ef25c7"). InnerVolumeSpecName "kube-api-access-nfpbz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 14:29:38 crc kubenswrapper[4919]: I0109 14:29:38.502046 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/452629fc-d914-4681-8706-b3f021ef25c7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "452629fc-d914-4681-8706-b3f021ef25c7" (UID: "452629fc-d914-4681-8706-b3f021ef25c7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 14:29:38 crc kubenswrapper[4919]: I0109 14:29:38.558264 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfpbz\" (UniqueName: \"kubernetes.io/projected/452629fc-d914-4681-8706-b3f021ef25c7-kube-api-access-nfpbz\") on node \"crc\" DevicePath \"\""
Jan 09 14:29:38 crc kubenswrapper[4919]: I0109 14:29:38.558304 4919 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/452629fc-d914-4681-8706-b3f021ef25c7-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 09 14:29:38 crc kubenswrapper[4919]: I0109 14:29:38.558317 4919 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/452629fc-d914-4681-8706-b3f021ef25c7-utilities\") on node \"crc\" DevicePath \"\""
Jan 09 14:29:38 crc kubenswrapper[4919]: I0109 14:29:38.633241 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5xmmp"]
Jan 09 14:29:38 crc kubenswrapper[4919]: I0109 14:29:38.642046 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5xmmp"]
Jan 09 14:29:38 crc kubenswrapper[4919]: I0109 14:29:38.770089 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="452629fc-d914-4681-8706-b3f021ef25c7" path="/var/lib/kubelet/pods/452629fc-d914-4681-8706-b3f021ef25c7/volumes"
Jan 09 14:30:00 crc kubenswrapper[4919]: I0109 14:30:00.150601 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29466150-9dk5w"]
Jan 09 14:30:00 crc kubenswrapper[4919]: E0109 14:30:00.151648 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="452629fc-d914-4681-8706-b3f021ef25c7" containerName="registry-server"
Jan 09 14:30:00 crc kubenswrapper[4919]: I0109 14:30:00.151664 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="452629fc-d914-4681-8706-b3f021ef25c7" containerName="registry-server"
Jan 09 14:30:00 crc kubenswrapper[4919]: E0109 14:30:00.151676 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="452629fc-d914-4681-8706-b3f021ef25c7" containerName="extract-content"
Jan 09 14:30:00 crc kubenswrapper[4919]: I0109 14:30:00.151685 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="452629fc-d914-4681-8706-b3f021ef25c7" containerName="extract-content"
Jan 09 14:30:00 crc kubenswrapper[4919]: E0109 14:30:00.151701 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="452629fc-d914-4681-8706-b3f021ef25c7" containerName="extract-utilities"
Jan 09 14:30:00 crc kubenswrapper[4919]: I0109 14:30:00.151708 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="452629fc-d914-4681-8706-b3f021ef25c7" containerName="extract-utilities"
Jan 09 14:30:00 crc kubenswrapper[4919]: I0109 14:30:00.152838 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="452629fc-d914-4681-8706-b3f021ef25c7" containerName="registry-server"
Jan 09 14:30:00 crc kubenswrapper[4919]: I0109 14:30:00.153699 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29466150-9dk5w"
Jan 09 14:30:00 crc kubenswrapper[4919]: I0109 14:30:00.155834 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 09 14:30:00 crc kubenswrapper[4919]: I0109 14:30:00.157506 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 09 14:30:00 crc kubenswrapper[4919]: I0109 14:30:00.163978 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29466150-9dk5w"]
Jan 09 14:30:00 crc kubenswrapper[4919]: I0109 14:30:00.323079 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q427t\" (UniqueName: \"kubernetes.io/projected/fb5cf77c-1d7d-4c47-b93f-def89fda6156-kube-api-access-q427t\") pod \"collect-profiles-29466150-9dk5w\" (UID: \"fb5cf77c-1d7d-4c47-b93f-def89fda6156\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466150-9dk5w"
Jan 09 14:30:00 crc kubenswrapper[4919]: I0109 14:30:00.323156 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb5cf77c-1d7d-4c47-b93f-def89fda6156-config-volume\") pod \"collect-profiles-29466150-9dk5w\" (UID: \"fb5cf77c-1d7d-4c47-b93f-def89fda6156\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466150-9dk5w"
Jan 09 14:30:00 crc kubenswrapper[4919]: I0109 14:30:00.323256 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fb5cf77c-1d7d-4c47-b93f-def89fda6156-secret-volume\") pod \"collect-profiles-29466150-9dk5w\" (UID: \"fb5cf77c-1d7d-4c47-b93f-def89fda6156\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466150-9dk5w"
Jan 09 14:30:00 crc kubenswrapper[4919]: I0109 14:30:00.425084 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fb5cf77c-1d7d-4c47-b93f-def89fda6156-secret-volume\") pod \"collect-profiles-29466150-9dk5w\" (UID: \"fb5cf77c-1d7d-4c47-b93f-def89fda6156\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466150-9dk5w"
Jan 09 14:30:00 crc kubenswrapper[4919]: I0109 14:30:00.425982 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q427t\" (UniqueName: \"kubernetes.io/projected/fb5cf77c-1d7d-4c47-b93f-def89fda6156-kube-api-access-q427t\") pod \"collect-profiles-29466150-9dk5w\" (UID: \"fb5cf77c-1d7d-4c47-b93f-def89fda6156\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466150-9dk5w"
Jan 09 14:30:00 crc kubenswrapper[4919]: I0109 14:30:00.426143 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb5cf77c-1d7d-4c47-b93f-def89fda6156-config-volume\") pod \"collect-profiles-29466150-9dk5w\" (UID: \"fb5cf77c-1d7d-4c47-b93f-def89fda6156\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466150-9dk5w"
Jan 09 14:30:00 crc kubenswrapper[4919]: I0109 14:30:00.427377 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb5cf77c-1d7d-4c47-b93f-def89fda6156-config-volume\") pod \"collect-profiles-29466150-9dk5w\" (UID: \"fb5cf77c-1d7d-4c47-b93f-def89fda6156\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466150-9dk5w"
Jan 09 14:30:00 crc kubenswrapper[4919]: I0109 14:30:00.433282 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fb5cf77c-1d7d-4c47-b93f-def89fda6156-secret-volume\") pod \"collect-profiles-29466150-9dk5w\" (UID: \"fb5cf77c-1d7d-4c47-b93f-def89fda6156\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466150-9dk5w"
Jan 09 14:30:00 crc kubenswrapper[4919]: I0109 14:30:00.443059 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q427t\" (UniqueName: \"kubernetes.io/projected/fb5cf77c-1d7d-4c47-b93f-def89fda6156-kube-api-access-q427t\") pod \"collect-profiles-29466150-9dk5w\" (UID: \"fb5cf77c-1d7d-4c47-b93f-def89fda6156\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466150-9dk5w"
Jan 09 14:30:00 crc kubenswrapper[4919]: I0109 14:30:00.487101 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29466150-9dk5w"
Jan 09 14:30:00 crc kubenswrapper[4919]: I0109 14:30:00.917256 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29466150-9dk5w"]
Jan 09 14:30:01 crc kubenswrapper[4919]: I0109 14:30:01.504602 4919 generic.go:334] "Generic (PLEG): container finished" podID="fb5cf77c-1d7d-4c47-b93f-def89fda6156" containerID="a0cc3966ecbc92ac7f95d2c4453b774d12dfcd4c89ea02536cfe9fe46c7e8dbc" exitCode=0
Jan 09 14:30:01 crc kubenswrapper[4919]: I0109 14:30:01.504705 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29466150-9dk5w" event={"ID":"fb5cf77c-1d7d-4c47-b93f-def89fda6156","Type":"ContainerDied","Data":"a0cc3966ecbc92ac7f95d2c4453b774d12dfcd4c89ea02536cfe9fe46c7e8dbc"}
Jan 09 14:30:01 crc kubenswrapper[4919]: I0109 14:30:01.504889 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29466150-9dk5w" event={"ID":"fb5cf77c-1d7d-4c47-b93f-def89fda6156","Type":"ContainerStarted","Data":"a481d7962fa14540a2069e353f4793cac01ad452c228ed4dfebe9666411450f9"}
Jan 09 14:30:02 crc kubenswrapper[4919]: I0109 14:30:02.893849 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29466150-9dk5w"
Jan 09 14:30:03 crc kubenswrapper[4919]: I0109 14:30:03.075699 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb5cf77c-1d7d-4c47-b93f-def89fda6156-config-volume\") pod \"fb5cf77c-1d7d-4c47-b93f-def89fda6156\" (UID: \"fb5cf77c-1d7d-4c47-b93f-def89fda6156\") "
Jan 09 14:30:03 crc kubenswrapper[4919]: I0109 14:30:03.075930 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fb5cf77c-1d7d-4c47-b93f-def89fda6156-secret-volume\") pod \"fb5cf77c-1d7d-4c47-b93f-def89fda6156\" (UID: \"fb5cf77c-1d7d-4c47-b93f-def89fda6156\") "
Jan 09 14:30:03 crc kubenswrapper[4919]: I0109 14:30:03.076028 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q427t\" (UniqueName: \"kubernetes.io/projected/fb5cf77c-1d7d-4c47-b93f-def89fda6156-kube-api-access-q427t\") pod \"fb5cf77c-1d7d-4c47-b93f-def89fda6156\" (UID: \"fb5cf77c-1d7d-4c47-b93f-def89fda6156\") "
Jan 09 14:30:03 crc kubenswrapper[4919]: I0109 14:30:03.076844 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb5cf77c-1d7d-4c47-b93f-def89fda6156-config-volume" (OuterVolumeSpecName: "config-volume") pod "fb5cf77c-1d7d-4c47-b93f-def89fda6156" (UID: "fb5cf77c-1d7d-4c47-b93f-def89fda6156"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 14:30:03 crc kubenswrapper[4919]: I0109 14:30:03.084874 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb5cf77c-1d7d-4c47-b93f-def89fda6156-kube-api-access-q427t" (OuterVolumeSpecName: "kube-api-access-q427t") pod "fb5cf77c-1d7d-4c47-b93f-def89fda6156" (UID: "fb5cf77c-1d7d-4c47-b93f-def89fda6156"). InnerVolumeSpecName "kube-api-access-q427t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 14:30:03 crc kubenswrapper[4919]: I0109 14:30:03.085411 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb5cf77c-1d7d-4c47-b93f-def89fda6156-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "fb5cf77c-1d7d-4c47-b93f-def89fda6156" (UID: "fb5cf77c-1d7d-4c47-b93f-def89fda6156"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 14:30:03 crc kubenswrapper[4919]: I0109 14:30:03.178106 4919 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fb5cf77c-1d7d-4c47-b93f-def89fda6156-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 09 14:30:03 crc kubenswrapper[4919]: I0109 14:30:03.178156 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q427t\" (UniqueName: \"kubernetes.io/projected/fb5cf77c-1d7d-4c47-b93f-def89fda6156-kube-api-access-q427t\") on node \"crc\" DevicePath \"\""
Jan 09 14:30:03 crc kubenswrapper[4919]: I0109 14:30:03.178167 4919 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb5cf77c-1d7d-4c47-b93f-def89fda6156-config-volume\") on node \"crc\" DevicePath \"\""
Jan 09 14:30:03 crc kubenswrapper[4919]: I0109 14:30:03.524563 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29466150-9dk5w" event={"ID":"fb5cf77c-1d7d-4c47-b93f-def89fda6156","Type":"ContainerDied","Data":"a481d7962fa14540a2069e353f4793cac01ad452c228ed4dfebe9666411450f9"}
Jan 09 14:30:03 crc kubenswrapper[4919]: I0109 14:30:03.524627 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a481d7962fa14540a2069e353f4793cac01ad452c228ed4dfebe9666411450f9"
Jan 09 14:30:03 crc kubenswrapper[4919]: I0109 14:30:03.524652 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29466150-9dk5w"
Jan 09 14:30:03 crc kubenswrapper[4919]: I0109 14:30:03.960992 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29466105-fr8d5"]
Jan 09 14:30:03 crc kubenswrapper[4919]: I0109 14:30:03.968413 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29466105-fr8d5"]
Jan 09 14:30:04 crc kubenswrapper[4919]: I0109 14:30:04.764270 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="305e4023-ac44-4a22-ba43-2a2f67441647" path="/var/lib/kubelet/pods/305e4023-ac44-4a22-ba43-2a2f67441647/volumes"
Jan 09 14:30:21 crc kubenswrapper[4919]: I0109 14:30:21.247169 4919 patch_prober.go:28] interesting pod/machine-config-daemon-9m5lv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 09 14:30:21 crc kubenswrapper[4919]: I0109 14:30:21.247685 4919 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 09 14:30:29 crc kubenswrapper[4919]: I0109 14:30:29.743416 4919 generic.go:334] "Generic (PLEG): container finished" podID="f53c17d7-be4d-4bcf-aea4-2617abf3d9ea" containerID="a1f043d3924c4d664d9d1b19bf6f24f0efe9ffcaebe14d6c0e010352697070eb" exitCode=0
Jan 09 14:30:29 crc kubenswrapper[4919]: I0109 14:30:29.743500 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"f53c17d7-be4d-4bcf-aea4-2617abf3d9ea","Type":"ContainerDied","Data":"a1f043d3924c4d664d9d1b19bf6f24f0efe9ffcaebe14d6c0e010352697070eb"}
Jan 09 14:30:31 crc kubenswrapper[4919]: I0109 14:30:31.165333 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 09 14:30:31 crc kubenswrapper[4919]: I0109 14:30:31.253644 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"f53c17d7-be4d-4bcf-aea4-2617abf3d9ea\" (UID: \"f53c17d7-be4d-4bcf-aea4-2617abf3d9ea\") "
Jan 09 14:30:31 crc kubenswrapper[4919]: I0109 14:30:31.253879 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/f53c17d7-be4d-4bcf-aea4-2617abf3d9ea-test-operator-ephemeral-temporary\") pod \"f53c17d7-be4d-4bcf-aea4-2617abf3d9ea\" (UID: \"f53c17d7-be4d-4bcf-aea4-2617abf3d9ea\") "
Jan 09 14:30:31 crc kubenswrapper[4919]: I0109 14:30:31.253927 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f53c17d7-be4d-4bcf-aea4-2617abf3d9ea-config-data\") pod \"f53c17d7-be4d-4bcf-aea4-2617abf3d9ea\" (UID: \"f53c17d7-be4d-4bcf-aea4-2617abf3d9ea\") "
Jan 09 14:30:31 crc kubenswrapper[4919]: I0109 14:30:31.254005 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/f53c17d7-be4d-4bcf-aea4-2617abf3d9ea-test-operator-ephemeral-workdir\") pod \"f53c17d7-be4d-4bcf-aea4-2617abf3d9ea\" (UID: \"f53c17d7-be4d-4bcf-aea4-2617abf3d9ea\") "
Jan 09 14:30:31 crc kubenswrapper[4919]: I0109 14:30:31.254056 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/f53c17d7-be4d-4bcf-aea4-2617abf3d9ea-ca-certs\") pod \"f53c17d7-be4d-4bcf-aea4-2617abf3d9ea\" (UID: \"f53c17d7-be4d-4bcf-aea4-2617abf3d9ea\") "
Jan 09 14:30:31 crc kubenswrapper[4919]: I0109 14:30:31.254082 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f53c17d7-be4d-4bcf-aea4-2617abf3d9ea-openstack-config\") pod \"f53c17d7-be4d-4bcf-aea4-2617abf3d9ea\" (UID: \"f53c17d7-be4d-4bcf-aea4-2617abf3d9ea\") "
Jan 09 14:30:31 crc kubenswrapper[4919]: I0109 14:30:31.254124 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f53c17d7-be4d-4bcf-aea4-2617abf3d9ea-ssh-key\") pod \"f53c17d7-be4d-4bcf-aea4-2617abf3d9ea\" (UID: \"f53c17d7-be4d-4bcf-aea4-2617abf3d9ea\") "
Jan 09 14:30:31 crc kubenswrapper[4919]: I0109 14:30:31.254151 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7s9dv\" (UniqueName: \"kubernetes.io/projected/f53c17d7-be4d-4bcf-aea4-2617abf3d9ea-kube-api-access-7s9dv\") pod \"f53c17d7-be4d-4bcf-aea4-2617abf3d9ea\" (UID: \"f53c17d7-be4d-4bcf-aea4-2617abf3d9ea\") "
Jan 09 14:30:31 crc kubenswrapper[4919]: I0109 14:30:31.254404 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f53c17d7-be4d-4bcf-aea4-2617abf3d9ea-openstack-config-secret\") pod \"f53c17d7-be4d-4bcf-aea4-2617abf3d9ea\" (UID: \"f53c17d7-be4d-4bcf-aea4-2617abf3d9ea\") "
Jan 09 14:30:31 crc kubenswrapper[4919]: I0109 14:30:31.255176 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f53c17d7-be4d-4bcf-aea4-2617abf3d9ea-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "f53c17d7-be4d-4bcf-aea4-2617abf3d9ea" (UID: "f53c17d7-be4d-4bcf-aea4-2617abf3d9ea"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 14:30:31 crc kubenswrapper[4919]: I0109 14:30:31.256652 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f53c17d7-be4d-4bcf-aea4-2617abf3d9ea-config-data" (OuterVolumeSpecName: "config-data") pod "f53c17d7-be4d-4bcf-aea4-2617abf3d9ea" (UID: "f53c17d7-be4d-4bcf-aea4-2617abf3d9ea"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 14:30:31 crc kubenswrapper[4919]: I0109 14:30:31.260977 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f53c17d7-be4d-4bcf-aea4-2617abf3d9ea-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "f53c17d7-be4d-4bcf-aea4-2617abf3d9ea" (UID: "f53c17d7-be4d-4bcf-aea4-2617abf3d9ea"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 14:30:31 crc kubenswrapper[4919]: I0109 14:30:31.263476 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "test-operator-logs") pod "f53c17d7-be4d-4bcf-aea4-2617abf3d9ea" (UID: "f53c17d7-be4d-4bcf-aea4-2617abf3d9ea"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 09 14:30:31 crc kubenswrapper[4919]: I0109 14:30:31.266631 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f53c17d7-be4d-4bcf-aea4-2617abf3d9ea-kube-api-access-7s9dv" (OuterVolumeSpecName: "kube-api-access-7s9dv") pod "f53c17d7-be4d-4bcf-aea4-2617abf3d9ea" (UID: "f53c17d7-be4d-4bcf-aea4-2617abf3d9ea"). InnerVolumeSpecName "kube-api-access-7s9dv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 14:30:31 crc kubenswrapper[4919]: I0109 14:30:31.286343 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f53c17d7-be4d-4bcf-aea4-2617abf3d9ea-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "f53c17d7-be4d-4bcf-aea4-2617abf3d9ea" (UID: "f53c17d7-be4d-4bcf-aea4-2617abf3d9ea"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 14:30:31 crc kubenswrapper[4919]: I0109 14:30:31.286798 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f53c17d7-be4d-4bcf-aea4-2617abf3d9ea-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f53c17d7-be4d-4bcf-aea4-2617abf3d9ea" (UID: "f53c17d7-be4d-4bcf-aea4-2617abf3d9ea"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 14:30:31 crc kubenswrapper[4919]: I0109 14:30:31.302563 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f53c17d7-be4d-4bcf-aea4-2617abf3d9ea-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "f53c17d7-be4d-4bcf-aea4-2617abf3d9ea" (UID: "f53c17d7-be4d-4bcf-aea4-2617abf3d9ea"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 14:30:31 crc kubenswrapper[4919]: I0109 14:30:31.323382 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f53c17d7-be4d-4bcf-aea4-2617abf3d9ea-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "f53c17d7-be4d-4bcf-aea4-2617abf3d9ea" (UID: "f53c17d7-be4d-4bcf-aea4-2617abf3d9ea"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 14:30:31 crc kubenswrapper[4919]: I0109 14:30:31.357669 4919 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" "
Jan 09 14:30:31 crc kubenswrapper[4919]: I0109 14:30:31.357723 4919 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/f53c17d7-be4d-4bcf-aea4-2617abf3d9ea-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\""
Jan 09 14:30:31 crc kubenswrapper[4919]: I0109 14:30:31.357740 4919 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f53c17d7-be4d-4bcf-aea4-2617abf3d9ea-config-data\") on node \"crc\" DevicePath \"\""
Jan 09 14:30:31 crc kubenswrapper[4919]: I0109 14:30:31.357755 4919 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/f53c17d7-be4d-4bcf-aea4-2617abf3d9ea-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\""
Jan 09 14:30:31 crc kubenswrapper[4919]: I0109 14:30:31.357770 4919 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/f53c17d7-be4d-4bcf-aea4-2617abf3d9ea-ca-certs\") on node \"crc\" DevicePath \"\""
Jan 09 14:30:31 crc kubenswrapper[4919]: I0109 14:30:31.357783 4919 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f53c17d7-be4d-4bcf-aea4-2617abf3d9ea-openstack-config\") on node \"crc\" DevicePath \"\""
Jan 09 14:30:31 crc kubenswrapper[4919]: I0109 14:30:31.357796 4919 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f53c17d7-be4d-4bcf-aea4-2617abf3d9ea-ssh-key\") on node \"crc\" DevicePath \"\""
Jan 09 14:30:31 crc kubenswrapper[4919]: I0109 14:30:31.357807 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7s9dv\" (UniqueName: \"kubernetes.io/projected/f53c17d7-be4d-4bcf-aea4-2617abf3d9ea-kube-api-access-7s9dv\") on node \"crc\" DevicePath \"\""
Jan 09 14:30:31 crc kubenswrapper[4919]: I0109 14:30:31.357819 4919 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f53c17d7-be4d-4bcf-aea4-2617abf3d9ea-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Jan 09 14:30:31 crc kubenswrapper[4919]: I0109 14:30:31.383309 4919 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc"
Jan 09 14:30:31 crc kubenswrapper[4919]: I0109 14:30:31.459908 4919 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\""
Jan 09 14:30:31 crc kubenswrapper[4919]: I0109 14:30:31.760344 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"f53c17d7-be4d-4bcf-aea4-2617abf3d9ea","Type":"ContainerDied","Data":"be034418b63537d7bb6e696310968d77adb0185560cbe56673a679bb8f205600"}
Jan 09 14:30:31 crc kubenswrapper[4919]: I0109 14:30:31.760394 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be034418b63537d7bb6e696310968d77adb0185560cbe56673a679bb8f205600"
Jan 09 14:30:31 crc kubenswrapper[4919]: I0109 14:30:31.760408 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 09 14:30:41 crc kubenswrapper[4919]: I0109 14:30:41.237731 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Jan 09 14:30:41 crc kubenswrapper[4919]: E0109 14:30:41.238703 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f53c17d7-be4d-4bcf-aea4-2617abf3d9ea" containerName="tempest-tests-tempest-tests-runner"
Jan 09 14:30:41 crc kubenswrapper[4919]: I0109 14:30:41.238716 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="f53c17d7-be4d-4bcf-aea4-2617abf3d9ea" containerName="tempest-tests-tempest-tests-runner"
Jan 09 14:30:41 crc kubenswrapper[4919]: E0109 14:30:41.238732 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb5cf77c-1d7d-4c47-b93f-def89fda6156" containerName="collect-profiles"
Jan 09 14:30:41 crc kubenswrapper[4919]: I0109 14:30:41.238740 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb5cf77c-1d7d-4c47-b93f-def89fda6156" containerName="collect-profiles"
Jan 09 14:30:41 crc kubenswrapper[4919]: I0109 14:30:41.238928 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb5cf77c-1d7d-4c47-b93f-def89fda6156" containerName="collect-profiles"
Jan 09 14:30:41 crc kubenswrapper[4919]: I0109 14:30:41.238956 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="f53c17d7-be4d-4bcf-aea4-2617abf3d9ea" containerName="tempest-tests-tempest-tests-runner"
Jan 09 14:30:41 crc kubenswrapper[4919]: I0109 14:30:41.240264 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 09 14:30:41 crc kubenswrapper[4919]: I0109 14:30:41.242415 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-j6dqk"
Jan 09 14:30:41 crc kubenswrapper[4919]: I0109 14:30:41.262497 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Jan 09 14:30:41 crc kubenswrapper[4919]: I0109 14:30:41.440680 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thnzm\" (UniqueName: \"kubernetes.io/projected/77aedacd-c1c9-4ee5-836d-69b929d4f842-kube-api-access-thnzm\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"77aedacd-c1c9-4ee5-836d-69b929d4f842\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 09 14:30:41 crc kubenswrapper[4919]: I0109 14:30:41.441311 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"77aedacd-c1c9-4ee5-836d-69b929d4f842\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 09 14:30:41 crc kubenswrapper[4919]: I0109 14:30:41.543607 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"77aedacd-c1c9-4ee5-836d-69b929d4f842\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 09 14:30:41 crc kubenswrapper[4919]: I0109 14:30:41.543864 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thnzm\" (UniqueName: \"kubernetes.io/projected/77aedacd-c1c9-4ee5-836d-69b929d4f842-kube-api-access-thnzm\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"77aedacd-c1c9-4ee5-836d-69b929d4f842\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 09 14:30:41 crc kubenswrapper[4919]: I0109 14:30:41.544161 4919 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"77aedacd-c1c9-4ee5-836d-69b929d4f842\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 09 14:30:41 crc kubenswrapper[4919]: I0109 14:30:41.564823 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thnzm\" (UniqueName: \"kubernetes.io/projected/77aedacd-c1c9-4ee5-836d-69b929d4f842-kube-api-access-thnzm\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"77aedacd-c1c9-4ee5-836d-69b929d4f842\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 09 14:30:41 crc kubenswrapper[4919]: I0109 14:30:41.568440 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"77aedacd-c1c9-4ee5-836d-69b929d4f842\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 09 14:30:41 crc kubenswrapper[4919]: I0109 14:30:41.866415 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 09 14:30:42 crc kubenswrapper[4919]: I0109 14:30:42.374335 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Jan 09 14:30:42 crc kubenswrapper[4919]: I0109 14:30:42.379370 4919 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 09 14:30:42 crc kubenswrapper[4919]: I0109 14:30:42.860800 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"77aedacd-c1c9-4ee5-836d-69b929d4f842","Type":"ContainerStarted","Data":"c9fb8de7b71570a7d9ed88cf29954bebdd96d7adbf11731b9d1dd46307c802b3"}
Jan 09 14:30:43 crc kubenswrapper[4919]: I0109 14:30:43.873343 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"77aedacd-c1c9-4ee5-836d-69b929d4f842","Type":"ContainerStarted","Data":"7ee96901488550bf7dba00a57a14da1d155be3474b2f954f483a3261c53af235"}
Jan 09 14:30:43 crc kubenswrapper[4919]: I0109 14:30:43.889469 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.990043913 podStartE2EDuration="2.889449599s" podCreationTimestamp="2026-01-09 14:30:41 +0000 UTC" firstStartedPulling="2026-01-09 14:30:42.379177058 +0000 UTC m=+3621.927016508" lastFinishedPulling="2026-01-09 14:30:43.278582744 +0000 UTC m=+3622.826422194" observedRunningTime="2026-01-09 14:30:43.886846504 +0000 UTC m=+3623.434685954" watchObservedRunningTime="2026-01-09 14:30:43.889449599 +0000 UTC m=+3623.437289049"
Jan 09 14:30:44 crc kubenswrapper[4919]: I0109 14:30:44.181131 4919 scope.go:117] "RemoveContainer" containerID="6f83f4828d0f79a2651d085732b4b5f0608bf0228c77173f6cac6cf323e4a36e"
Jan 09 14:30:51 crc kubenswrapper[4919]: I0109 14:30:51.247180 4919 patch_prober.go:28] interesting pod/machine-config-daemon-9m5lv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 09 14:30:51 crc kubenswrapper[4919]: I0109 14:30:51.247739 4919 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 09 14:31:21 crc kubenswrapper[4919]: I0109 14:31:21.247529 4919 patch_prober.go:28] interesting pod/machine-config-daemon-9m5lv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 09 14:31:21 crc kubenswrapper[4919]: I0109 14:31:21.248049 4919 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 09 14:31:21 crc kubenswrapper[4919]: I0109 14:31:21.248091 4919 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv"
Jan 09 14:31:21 crc kubenswrapper[4919]: I0109 14:31:21.248901 4919 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b0270c7592076cfaccb37df0ef0dacbcd42484b5f4acbc6f9946e5c994321a3e"} pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 09 14:31:21 crc kubenswrapper[4919]: I0109 14:31:21.248956 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" containerID="cri-o://b0270c7592076cfaccb37df0ef0dacbcd42484b5f4acbc6f9946e5c994321a3e" gracePeriod=600
Jan 09 14:31:21 crc kubenswrapper[4919]: E0109 14:31:21.374415 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c"
Jan 09 14:31:22 crc kubenswrapper[4919]: I0109 14:31:22.244263 4919 generic.go:334] "Generic (PLEG): container finished" podID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerID="b0270c7592076cfaccb37df0ef0dacbcd42484b5f4acbc6f9946e5c994321a3e" exitCode=0
Jan 09 14:31:22 crc kubenswrapper[4919]: I0109 14:31:22.244332 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" event={"ID":"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c","Type":"ContainerDied","Data":"b0270c7592076cfaccb37df0ef0dacbcd42484b5f4acbc6f9946e5c994321a3e"}
Jan 09 14:31:22 crc kubenswrapper[4919]: I0109 14:31:22.244628 4919 scope.go:117] "RemoveContainer" containerID="683030f1ec48c150eb8d5ec15c7c88597fd2694a114d1658e9cac33a1c47d1d5"
Jan 09 14:31:22 crc kubenswrapper[4919]: I0109 14:31:22.245416 4919 scope.go:117] "RemoveContainer" containerID="b0270c7592076cfaccb37df0ef0dacbcd42484b5f4acbc6f9946e5c994321a3e"
Jan 09 14:31:22 crc kubenswrapper[4919]: E0109 14:31:22.245724 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c"
Jan 09 14:31:36 crc kubenswrapper[4919]: I0109 14:31:36.752861 4919 scope.go:117] "RemoveContainer" containerID="b0270c7592076cfaccb37df0ef0dacbcd42484b5f4acbc6f9946e5c994321a3e"
Jan 09 14:31:36 crc kubenswrapper[4919]: E0109 14:31:36.754127 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c"
Jan 09 14:31:48 crc kubenswrapper[4919]: I0109 14:31:48.752365 4919 scope.go:117] "RemoveContainer" containerID="b0270c7592076cfaccb37df0ef0dacbcd42484b5f4acbc6f9946e5c994321a3e"
Jan 09 14:31:48 crc kubenswrapper[4919]: E0109 14:31:48.753166 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c"
Jan 09 14:31:56 crc kubenswrapper[4919]: I0109 14:31:56.345759 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-4cczb/must-gather-q7b49"]
Jan 09 14:31:56 crc kubenswrapper[4919]: I0109 14:31:56.347685 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4cczb/must-gather-q7b49"
Jan 09 14:31:56 crc kubenswrapper[4919]: I0109 14:31:56.350035 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-4cczb"/"openshift-service-ca.crt"
Jan 09 14:31:56 crc kubenswrapper[4919]: I0109 14:31:56.350898 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-4cczb"/"default-dockercfg-wtf5g"
Jan 09 14:31:56 crc kubenswrapper[4919]: I0109 14:31:56.350910 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-4cczb"/"kube-root-ca.crt"
Jan 09 14:31:56 crc kubenswrapper[4919]: I0109 14:31:56.368533 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-4cczb/must-gather-q7b49"]
Jan 09 14:31:56 crc kubenswrapper[4919]: I0109 14:31:56.486299 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm8c6\" (UniqueName: \"kubernetes.io/projected/0004f8c6-daac-4060-9f51-eadc76d135ec-kube-api-access-zm8c6\") pod \"must-gather-q7b49\" (UID: \"0004f8c6-daac-4060-9f51-eadc76d135ec\") " pod="openshift-must-gather-4cczb/must-gather-q7b49"
Jan 09 14:31:56 crc kubenswrapper[4919]: I0109 14:31:56.486377 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0004f8c6-daac-4060-9f51-eadc76d135ec-must-gather-output\") pod \"must-gather-q7b49\" (UID: \"0004f8c6-daac-4060-9f51-eadc76d135ec\") " pod="openshift-must-gather-4cczb/must-gather-q7b49"
Jan 09 14:31:56 crc kubenswrapper[4919]: I0109 14:31:56.588035 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm8c6\" (UniqueName: \"kubernetes.io/projected/0004f8c6-daac-4060-9f51-eadc76d135ec-kube-api-access-zm8c6\") pod \"must-gather-q7b49\" (UID: \"0004f8c6-daac-4060-9f51-eadc76d135ec\") " pod="openshift-must-gather-4cczb/must-gather-q7b49"
Jan 09 14:31:56 crc kubenswrapper[4919]: I0109 14:31:56.588143 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0004f8c6-daac-4060-9f51-eadc76d135ec-must-gather-output\") pod \"must-gather-q7b49\" (UID: \"0004f8c6-daac-4060-9f51-eadc76d135ec\") " pod="openshift-must-gather-4cczb/must-gather-q7b49"
Jan 09 14:31:56 crc kubenswrapper[4919]: I0109
14:31:56.588658 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0004f8c6-daac-4060-9f51-eadc76d135ec-must-gather-output\") pod \"must-gather-q7b49\" (UID: \"0004f8c6-daac-4060-9f51-eadc76d135ec\") " pod="openshift-must-gather-4cczb/must-gather-q7b49" Jan 09 14:31:56 crc kubenswrapper[4919]: I0109 14:31:56.608273 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm8c6\" (UniqueName: \"kubernetes.io/projected/0004f8c6-daac-4060-9f51-eadc76d135ec-kube-api-access-zm8c6\") pod \"must-gather-q7b49\" (UID: \"0004f8c6-daac-4060-9f51-eadc76d135ec\") " pod="openshift-must-gather-4cczb/must-gather-q7b49" Jan 09 14:31:56 crc kubenswrapper[4919]: I0109 14:31:56.669706 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4cczb/must-gather-q7b49" Jan 09 14:31:57 crc kubenswrapper[4919]: I0109 14:31:57.121948 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-4cczb/must-gather-q7b49"] Jan 09 14:31:57 crc kubenswrapper[4919]: I0109 14:31:57.842723 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4cczb/must-gather-q7b49" event={"ID":"0004f8c6-daac-4060-9f51-eadc76d135ec","Type":"ContainerStarted","Data":"0abfc3a99ed53f1e48f52123e14eb5bedd5c6ca6410723c7d8023a14591af295"} Jan 09 14:32:01 crc kubenswrapper[4919]: I0109 14:32:01.751825 4919 scope.go:117] "RemoveContainer" containerID="b0270c7592076cfaccb37df0ef0dacbcd42484b5f4acbc6f9946e5c994321a3e" Jan 09 14:32:01 crc kubenswrapper[4919]: E0109 14:32:01.752718 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:32:06 crc kubenswrapper[4919]: I0109 14:32:06.948876 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4cczb/must-gather-q7b49" event={"ID":"0004f8c6-daac-4060-9f51-eadc76d135ec","Type":"ContainerStarted","Data":"fb19b6caebb2deb6f5270a280ed227eac803965e27afcff1e509ea9c4e153b87"} Jan 09 14:32:06 crc kubenswrapper[4919]: I0109 14:32:06.949398 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4cczb/must-gather-q7b49" event={"ID":"0004f8c6-daac-4060-9f51-eadc76d135ec","Type":"ContainerStarted","Data":"411a1cb5ab5550d42f5f372dcd223cd7a720cdcf51d2b8634aa29bae3b0fe7bc"} Jan 09 14:32:06 crc kubenswrapper[4919]: I0109 14:32:06.970164 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-4cczb/must-gather-q7b49" podStartSLOduration=2.41277492 podStartE2EDuration="10.970137277s" podCreationTimestamp="2026-01-09 14:31:56 +0000 UTC" firstStartedPulling="2026-01-09 14:31:57.132953232 +0000 UTC m=+3696.680792682" lastFinishedPulling="2026-01-09 14:32:05.690315589 +0000 UTC m=+3705.238155039" observedRunningTime="2026-01-09 14:32:06.962809244 +0000 UTC m=+3706.510648704" watchObservedRunningTime="2026-01-09 14:32:06.970137277 +0000 UTC m=+3706.517976727" Jan 09 14:32:10 crc kubenswrapper[4919]: I0109 14:32:10.465846 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-4cczb/crc-debug-q6445"] Jan 09 14:32:10 crc 
kubenswrapper[4919]: I0109 14:32:10.467569 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4cczb/crc-debug-q6445" Jan 09 14:32:10 crc kubenswrapper[4919]: I0109 14:32:10.602535 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tv4j\" (UniqueName: \"kubernetes.io/projected/bd2defc6-d7bf-4131-8a1c-8f27ca135d0e-kube-api-access-8tv4j\") pod \"crc-debug-q6445\" (UID: \"bd2defc6-d7bf-4131-8a1c-8f27ca135d0e\") " pod="openshift-must-gather-4cczb/crc-debug-q6445" Jan 09 14:32:10 crc kubenswrapper[4919]: I0109 14:32:10.602616 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bd2defc6-d7bf-4131-8a1c-8f27ca135d0e-host\") pod \"crc-debug-q6445\" (UID: \"bd2defc6-d7bf-4131-8a1c-8f27ca135d0e\") " pod="openshift-must-gather-4cczb/crc-debug-q6445" Jan 09 14:32:10 crc kubenswrapper[4919]: I0109 14:32:10.704480 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tv4j\" (UniqueName: \"kubernetes.io/projected/bd2defc6-d7bf-4131-8a1c-8f27ca135d0e-kube-api-access-8tv4j\") pod \"crc-debug-q6445\" (UID: \"bd2defc6-d7bf-4131-8a1c-8f27ca135d0e\") " pod="openshift-must-gather-4cczb/crc-debug-q6445" Jan 09 14:32:10 crc kubenswrapper[4919]: I0109 14:32:10.704536 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bd2defc6-d7bf-4131-8a1c-8f27ca135d0e-host\") pod \"crc-debug-q6445\" (UID: \"bd2defc6-d7bf-4131-8a1c-8f27ca135d0e\") " pod="openshift-must-gather-4cczb/crc-debug-q6445" Jan 09 14:32:10 crc kubenswrapper[4919]: I0109 14:32:10.704707 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bd2defc6-d7bf-4131-8a1c-8f27ca135d0e-host\") pod \"crc-debug-q6445\" (UID: \"bd2defc6-d7bf-4131-8a1c-8f27ca135d0e\") " pod="openshift-must-gather-4cczb/crc-debug-q6445" Jan 09 14:32:10 crc kubenswrapper[4919]: I0109 14:32:10.724017 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tv4j\" (UniqueName: \"kubernetes.io/projected/bd2defc6-d7bf-4131-8a1c-8f27ca135d0e-kube-api-access-8tv4j\") pod \"crc-debug-q6445\" (UID: \"bd2defc6-d7bf-4131-8a1c-8f27ca135d0e\") " pod="openshift-must-gather-4cczb/crc-debug-q6445" Jan 09 14:32:10 crc kubenswrapper[4919]: I0109 14:32:10.790304 4919 util.go:30] "No sandbox for pod can be found. 
Jan 09 14:32:10 crc kubenswrapper[4919]: I0109 14:32:10.790304 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4cczb/crc-debug-q6445"
Jan 09 14:32:10 crc kubenswrapper[4919]: W0109 14:32:10.833357 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd2defc6_d7bf_4131_8a1c_8f27ca135d0e.slice/crio-a0d4b035132557e89bd9cc3fcbd6515f0388d241496f0050197e4af0b1ea9cc2 WatchSource:0}: Error finding container a0d4b035132557e89bd9cc3fcbd6515f0388d241496f0050197e4af0b1ea9cc2: Status 404 returned error can't find the container with id a0d4b035132557e89bd9cc3fcbd6515f0388d241496f0050197e4af0b1ea9cc2
Jan 09 14:32:10 crc kubenswrapper[4919]: I0109 14:32:10.986414 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4cczb/crc-debug-q6445" event={"ID":"bd2defc6-d7bf-4131-8a1c-8f27ca135d0e","Type":"ContainerStarted","Data":"a0d4b035132557e89bd9cc3fcbd6515f0388d241496f0050197e4af0b1ea9cc2"}
Jan 09 14:32:16 crc kubenswrapper[4919]: I0109 14:32:16.751699 4919 scope.go:117] "RemoveContainer" containerID="b0270c7592076cfaccb37df0ef0dacbcd42484b5f4acbc6f9946e5c994321a3e"
Jan 09 14:32:16 crc kubenswrapper[4919]: E0109 14:32:16.752557 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c"
Jan 09 14:32:23 crc kubenswrapper[4919]: I0109 14:32:23.102751 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4cczb/crc-debug-q6445" event={"ID":"bd2defc6-d7bf-4131-8a1c-8f27ca135d0e","Type":"ContainerStarted","Data":"7c2653584bcb8239e71f15a2bb45daeb399376fec3a44a637f0e8e6a51677c41"}
Jan 09 14:32:23 crc kubenswrapper[4919]: I0109 14:32:23.129540 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-4cczb/crc-debug-q6445" podStartSLOduration=2.089963827 podStartE2EDuration="13.129518138s" podCreationTimestamp="2026-01-09 14:32:10 +0000 UTC" firstStartedPulling="2026-01-09 14:32:10.835949721 +0000 UTC m=+3710.383789171" lastFinishedPulling="2026-01-09 14:32:21.875504032 +0000 UTC m=+3721.423343482" observedRunningTime="2026-01-09 14:32:23.12640974 +0000 UTC m=+3722.674249210" watchObservedRunningTime="2026-01-09 14:32:23.129518138 +0000 UTC m=+3722.677357588"
Jan 09 14:32:29 crc kubenswrapper[4919]: I0109 14:32:29.751788 4919 scope.go:117] "RemoveContainer" containerID="b0270c7592076cfaccb37df0ef0dacbcd42484b5f4acbc6f9946e5c994321a3e"
Jan 09 14:32:29 crc kubenswrapper[4919]: E0109 14:32:29.752560 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c"
Jan 09 14:32:43 crc kubenswrapper[4919]: I0109 14:32:43.751906 4919 scope.go:117] "RemoveContainer" containerID="b0270c7592076cfaccb37df0ef0dacbcd42484b5f4acbc6f9946e5c994321a3e"
Jan 09 14:32:43 crc kubenswrapper[4919]: E0109 14:32:43.752864 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c"
Jan 09 14:32:54 crc kubenswrapper[4919]: I0109 14:32:54.752170 4919 scope.go:117] "RemoveContainer" containerID="b0270c7592076cfaccb37df0ef0dacbcd42484b5f4acbc6f9946e5c994321a3e"
Jan 09 14:32:54 crc kubenswrapper[4919]: E0109 14:32:54.752991 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c"
Jan 09 14:33:02 crc kubenswrapper[4919]: I0109 14:33:02.471153 4919 generic.go:334] "Generic (PLEG): container finished" podID="bd2defc6-d7bf-4131-8a1c-8f27ca135d0e" containerID="7c2653584bcb8239e71f15a2bb45daeb399376fec3a44a637f0e8e6a51677c41" exitCode=0
Jan 09 14:33:02 crc kubenswrapper[4919]: I0109 14:33:02.473079 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4cczb/crc-debug-q6445" event={"ID":"bd2defc6-d7bf-4131-8a1c-8f27ca135d0e","Type":"ContainerDied","Data":"7c2653584bcb8239e71f15a2bb45daeb399376fec3a44a637f0e8e6a51677c41"}
Jan 09 14:33:03 crc kubenswrapper[4919]: I0109 14:33:03.591896 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4cczb/crc-debug-q6445"
Jan 09 14:33:03 crc kubenswrapper[4919]: I0109 14:33:03.634599 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-4cczb/crc-debug-q6445"]
Jan 09 14:33:03 crc kubenswrapper[4919]: I0109 14:33:03.643292 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-4cczb/crc-debug-q6445"]
Jan 09 14:33:03 crc kubenswrapper[4919]: I0109 14:33:03.691320 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bd2defc6-d7bf-4131-8a1c-8f27ca135d0e-host\") pod \"bd2defc6-d7bf-4131-8a1c-8f27ca135d0e\" (UID: \"bd2defc6-d7bf-4131-8a1c-8f27ca135d0e\") "
Jan 09 14:33:03 crc kubenswrapper[4919]: I0109 14:33:03.691446 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tv4j\" (UniqueName: \"kubernetes.io/projected/bd2defc6-d7bf-4131-8a1c-8f27ca135d0e-kube-api-access-8tv4j\") pod \"bd2defc6-d7bf-4131-8a1c-8f27ca135d0e\" (UID: \"bd2defc6-d7bf-4131-8a1c-8f27ca135d0e\") "
Jan 09 14:33:03 crc kubenswrapper[4919]: I0109 14:33:03.691498 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd2defc6-d7bf-4131-8a1c-8f27ca135d0e-host" (OuterVolumeSpecName: "host") pod "bd2defc6-d7bf-4131-8a1c-8f27ca135d0e" (UID: "bd2defc6-d7bf-4131-8a1c-8f27ca135d0e"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 09 14:33:03 crc kubenswrapper[4919]: I0109 14:33:03.691877 4919 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bd2defc6-d7bf-4131-8a1c-8f27ca135d0e-host\") on node \"crc\" DevicePath \"\""
Jan 09 14:33:03 crc kubenswrapper[4919]: I0109 14:33:03.697099 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd2defc6-d7bf-4131-8a1c-8f27ca135d0e-kube-api-access-8tv4j" (OuterVolumeSpecName: "kube-api-access-8tv4j") pod "bd2defc6-d7bf-4131-8a1c-8f27ca135d0e" (UID: "bd2defc6-d7bf-4131-8a1c-8f27ca135d0e"). InnerVolumeSpecName "kube-api-access-8tv4j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 14:33:03 crc kubenswrapper[4919]: I0109 14:33:03.794639 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tv4j\" (UniqueName: \"kubernetes.io/projected/bd2defc6-d7bf-4131-8a1c-8f27ca135d0e-kube-api-access-8tv4j\") on node \"crc\" DevicePath \"\""
Jan 09 14:33:04 crc kubenswrapper[4919]: I0109 14:33:04.488935 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0d4b035132557e89bd9cc3fcbd6515f0388d241496f0050197e4af0b1ea9cc2"
Jan 09 14:33:04 crc kubenswrapper[4919]: I0109 14:33:04.489020 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4cczb/crc-debug-q6445"
Jan 09 14:33:04 crc kubenswrapper[4919]: I0109 14:33:04.763775 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd2defc6-d7bf-4131-8a1c-8f27ca135d0e" path="/var/lib/kubelet/pods/bd2defc6-d7bf-4131-8a1c-8f27ca135d0e/volumes"
Jan 09 14:33:04 crc kubenswrapper[4919]: I0109 14:33:04.826872 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-4cczb/crc-debug-7n6f2"]
Jan 09 14:33:04 crc kubenswrapper[4919]: E0109 14:33:04.827756 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd2defc6-d7bf-4131-8a1c-8f27ca135d0e" containerName="container-00"
Jan 09 14:33:04 crc kubenswrapper[4919]: I0109 14:33:04.827778 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd2defc6-d7bf-4131-8a1c-8f27ca135d0e" containerName="container-00"
Jan 09 14:33:04 crc kubenswrapper[4919]: I0109 14:33:04.828000 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd2defc6-d7bf-4131-8a1c-8f27ca135d0e" containerName="container-00"
Jan 09 14:33:04 crc kubenswrapper[4919]: I0109 14:33:04.828645 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4cczb/crc-debug-7n6f2"
Jan 09 14:33:04 crc kubenswrapper[4919]: I0109 14:33:04.914004 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjhbh\" (UniqueName: \"kubernetes.io/projected/6bdf7a81-d176-4088-8367-fb233af949b4-kube-api-access-gjhbh\") pod \"crc-debug-7n6f2\" (UID: \"6bdf7a81-d176-4088-8367-fb233af949b4\") " pod="openshift-must-gather-4cczb/crc-debug-7n6f2"
Jan 09 14:33:04 crc kubenswrapper[4919]: I0109 14:33:04.914653 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6bdf7a81-d176-4088-8367-fb233af949b4-host\") pod \"crc-debug-7n6f2\" (UID: \"6bdf7a81-d176-4088-8367-fb233af949b4\") " pod="openshift-must-gather-4cczb/crc-debug-7n6f2"
Jan 09 14:33:05 crc kubenswrapper[4919]: I0109 14:33:05.016371 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjhbh\" (UniqueName: \"kubernetes.io/projected/6bdf7a81-d176-4088-8367-fb233af949b4-kube-api-access-gjhbh\") pod \"crc-debug-7n6f2\" (UID: \"6bdf7a81-d176-4088-8367-fb233af949b4\") " pod="openshift-must-gather-4cczb/crc-debug-7n6f2"
Jan 09 14:33:05 crc kubenswrapper[4919]: I0109 14:33:05.016509 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6bdf7a81-d176-4088-8367-fb233af949b4-host\") pod \"crc-debug-7n6f2\" (UID: \"6bdf7a81-d176-4088-8367-fb233af949b4\") " pod="openshift-must-gather-4cczb/crc-debug-7n6f2"
Jan 09 14:33:05 crc kubenswrapper[4919]: I0109 14:33:05.016661 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6bdf7a81-d176-4088-8367-fb233af949b4-host\") pod \"crc-debug-7n6f2\" (UID: \"6bdf7a81-d176-4088-8367-fb233af949b4\") " pod="openshift-must-gather-4cczb/crc-debug-7n6f2"
Jan 09 14:33:05 crc kubenswrapper[4919]: I0109 14:33:05.058636 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjhbh\" (UniqueName: \"kubernetes.io/projected/6bdf7a81-d176-4088-8367-fb233af949b4-kube-api-access-gjhbh\") pod \"crc-debug-7n6f2\" (UID: \"6bdf7a81-d176-4088-8367-fb233af949b4\") " pod="openshift-must-gather-4cczb/crc-debug-7n6f2"
Jan 09 14:33:05 crc kubenswrapper[4919]: I0109 14:33:05.146324 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4cczb/crc-debug-7n6f2"
Jan 09 14:33:05 crc kubenswrapper[4919]: I0109 14:33:05.514282 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4cczb/crc-debug-7n6f2" event={"ID":"6bdf7a81-d176-4088-8367-fb233af949b4","Type":"ContainerStarted","Data":"00ee8b78ae2f8b7050eae2a9423e4898d98573e78762e6ed18ceb44684d53596"}
Jan 09 14:33:05 crc kubenswrapper[4919]: I0109 14:33:05.514590 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4cczb/crc-debug-7n6f2" event={"ID":"6bdf7a81-d176-4088-8367-fb233af949b4","Type":"ContainerStarted","Data":"f1926fc53888b6175e401f8c165eddd4643487bc1ee207a044fe08759e030c52"}
Jan 09 14:33:05 crc kubenswrapper[4919]: I0109 14:33:05.935823 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-4cczb/crc-debug-7n6f2"]
Jan 09 14:33:05 crc kubenswrapper[4919]: I0109 14:33:05.944075 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-4cczb/crc-debug-7n6f2"]
Jan 09 14:33:06 crc kubenswrapper[4919]: I0109 14:33:06.523726 4919 generic.go:334] "Generic (PLEG): container finished" podID="6bdf7a81-d176-4088-8367-fb233af949b4" containerID="00ee8b78ae2f8b7050eae2a9423e4898d98573e78762e6ed18ceb44684d53596" exitCode=0
Jan 09 14:33:06 crc kubenswrapper[4919]: I0109 14:33:06.631441 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4cczb/crc-debug-7n6f2"
Jan 09 14:33:06 crc kubenswrapper[4919]: I0109 14:33:06.745051 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjhbh\" (UniqueName: \"kubernetes.io/projected/6bdf7a81-d176-4088-8367-fb233af949b4-kube-api-access-gjhbh\") pod \"6bdf7a81-d176-4088-8367-fb233af949b4\" (UID: \"6bdf7a81-d176-4088-8367-fb233af949b4\") "
Jan 09 14:33:06 crc kubenswrapper[4919]: I0109 14:33:06.745229 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6bdf7a81-d176-4088-8367-fb233af949b4-host\") pod \"6bdf7a81-d176-4088-8367-fb233af949b4\" (UID: \"6bdf7a81-d176-4088-8367-fb233af949b4\") "
Jan 09 14:33:06 crc kubenswrapper[4919]: I0109 14:33:06.745387 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bdf7a81-d176-4088-8367-fb233af949b4-host" (OuterVolumeSpecName: "host") pod "6bdf7a81-d176-4088-8367-fb233af949b4" (UID: "6bdf7a81-d176-4088-8367-fb233af949b4"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 09 14:33:06 crc kubenswrapper[4919]: I0109 14:33:06.745702 4919 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6bdf7a81-d176-4088-8367-fb233af949b4-host\") on node \"crc\" DevicePath \"\""
Jan 09 14:33:06 crc kubenswrapper[4919]: I0109 14:33:06.751791 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bdf7a81-d176-4088-8367-fb233af949b4-kube-api-access-gjhbh" (OuterVolumeSpecName: "kube-api-access-gjhbh") pod "6bdf7a81-d176-4088-8367-fb233af949b4" (UID: "6bdf7a81-d176-4088-8367-fb233af949b4"). InnerVolumeSpecName "kube-api-access-gjhbh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 14:33:06 crc kubenswrapper[4919]: I0109 14:33:06.766451 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bdf7a81-d176-4088-8367-fb233af949b4" path="/var/lib/kubelet/pods/6bdf7a81-d176-4088-8367-fb233af949b4/volumes"
Jan 09 14:33:06 crc kubenswrapper[4919]: I0109 14:33:06.847729 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjhbh\" (UniqueName: \"kubernetes.io/projected/6bdf7a81-d176-4088-8367-fb233af949b4-kube-api-access-gjhbh\") on node \"crc\" DevicePath \"\""
Jan 09 14:33:07 crc kubenswrapper[4919]: I0109 14:33:07.136641 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-4cczb/crc-debug-2qxvw"]
Jan 09 14:33:07 crc kubenswrapper[4919]: E0109 14:33:07.137065 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bdf7a81-d176-4088-8367-fb233af949b4" containerName="container-00"
Jan 09 14:33:07 crc kubenswrapper[4919]: I0109 14:33:07.137080 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bdf7a81-d176-4088-8367-fb233af949b4" containerName="container-00"
Jan 09 14:33:07 crc kubenswrapper[4919]: I0109 14:33:07.137290 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bdf7a81-d176-4088-8367-fb233af949b4" containerName="container-00"
Jan 09 14:33:07 crc kubenswrapper[4919]: I0109 14:33:07.138068 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4cczb/crc-debug-2qxvw"
Jan 09 14:33:07 crc kubenswrapper[4919]: I0109 14:33:07.254464 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-689n6\" (UniqueName: \"kubernetes.io/projected/401ec682-23bc-4eb9-b0d4-0a9e59a8c28e-kube-api-access-689n6\") pod \"crc-debug-2qxvw\" (UID: \"401ec682-23bc-4eb9-b0d4-0a9e59a8c28e\") " pod="openshift-must-gather-4cczb/crc-debug-2qxvw"
Jan 09 14:33:07 crc kubenswrapper[4919]: I0109 14:33:07.254925 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/401ec682-23bc-4eb9-b0d4-0a9e59a8c28e-host\") pod \"crc-debug-2qxvw\" (UID: \"401ec682-23bc-4eb9-b0d4-0a9e59a8c28e\") " pod="openshift-must-gather-4cczb/crc-debug-2qxvw"
Jan 09 14:33:07 crc kubenswrapper[4919]: I0109 14:33:07.365646 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/401ec682-23bc-4eb9-b0d4-0a9e59a8c28e-host\") pod \"crc-debug-2qxvw\" (UID: \"401ec682-23bc-4eb9-b0d4-0a9e59a8c28e\") " pod="openshift-must-gather-4cczb/crc-debug-2qxvw"
Jan 09 14:33:07 crc kubenswrapper[4919]: I0109 14:33:07.365798 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/401ec682-23bc-4eb9-b0d4-0a9e59a8c28e-host\") pod \"crc-debug-2qxvw\" (UID: \"401ec682-23bc-4eb9-b0d4-0a9e59a8c28e\") " pod="openshift-must-gather-4cczb/crc-debug-2qxvw"
Jan 09 14:33:07 crc kubenswrapper[4919]: I0109 14:33:07.365834 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-689n6\" (UniqueName: \"kubernetes.io/projected/401ec682-23bc-4eb9-b0d4-0a9e59a8c28e-kube-api-access-689n6\") pod \"crc-debug-2qxvw\" (UID: \"401ec682-23bc-4eb9-b0d4-0a9e59a8c28e\") " pod="openshift-must-gather-4cczb/crc-debug-2qxvw"
Jan 09 14:33:07 crc kubenswrapper[4919]: I0109 14:33:07.393627 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-689n6\" (UniqueName: \"kubernetes.io/projected/401ec682-23bc-4eb9-b0d4-0a9e59a8c28e-kube-api-access-689n6\") pod \"crc-debug-2qxvw\" (UID: \"401ec682-23bc-4eb9-b0d4-0a9e59a8c28e\") " pod="openshift-must-gather-4cczb/crc-debug-2qxvw"
Jan 09 14:33:07 crc kubenswrapper[4919]: I0109 14:33:07.454248 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4cczb/crc-debug-2qxvw"
Jan 09 14:33:07 crc kubenswrapper[4919]: W0109 14:33:07.490164 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod401ec682_23bc_4eb9_b0d4_0a9e59a8c28e.slice/crio-42695a97c06fa12a6deddca52381e74b75e456a82a5c9bc1b07ec8f2f7ba1f9e WatchSource:0}: Error finding container 42695a97c06fa12a6deddca52381e74b75e456a82a5c9bc1b07ec8f2f7ba1f9e: Status 404 returned error can't find the container with id 42695a97c06fa12a6deddca52381e74b75e456a82a5c9bc1b07ec8f2f7ba1f9e
Jan 09 14:33:07 crc kubenswrapper[4919]: I0109 14:33:07.535460 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4cczb/crc-debug-2qxvw" event={"ID":"401ec682-23bc-4eb9-b0d4-0a9e59a8c28e","Type":"ContainerStarted","Data":"42695a97c06fa12a6deddca52381e74b75e456a82a5c9bc1b07ec8f2f7ba1f9e"}
Jan 09 14:33:07 crc kubenswrapper[4919]: I0109 14:33:07.537091 4919 scope.go:117] "RemoveContainer" containerID="00ee8b78ae2f8b7050eae2a9423e4898d98573e78762e6ed18ceb44684d53596"
Jan 09 14:33:07 crc kubenswrapper[4919]: I0109 14:33:07.537357 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4cczb/crc-debug-7n6f2"
Jan 09 14:33:07 crc kubenswrapper[4919]: I0109 14:33:07.751428 4919 scope.go:117] "RemoveContainer" containerID="b0270c7592076cfaccb37df0ef0dacbcd42484b5f4acbc6f9946e5c994321a3e"
Jan 09 14:33:07 crc kubenswrapper[4919]: E0109 14:33:07.751703 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c"
Jan 09 14:33:08 crc kubenswrapper[4919]: I0109 14:33:08.550704 4919 generic.go:334] "Generic (PLEG): container finished" podID="401ec682-23bc-4eb9-b0d4-0a9e59a8c28e" containerID="608955673d36255a58bec2521cf85dce82bf54f66b811257df0060ed74272e08" exitCode=0
Jan 09 14:33:08 crc kubenswrapper[4919]: I0109 14:33:08.550784 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4cczb/crc-debug-2qxvw" event={"ID":"401ec682-23bc-4eb9-b0d4-0a9e59a8c28e","Type":"ContainerDied","Data":"608955673d36255a58bec2521cf85dce82bf54f66b811257df0060ed74272e08"}
Jan 09 14:33:08 crc kubenswrapper[4919]: I0109 14:33:08.587376 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-4cczb/crc-debug-2qxvw"]
Jan 09 14:33:08 crc kubenswrapper[4919]: I0109 14:33:08.598604 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-4cczb/crc-debug-2qxvw"]
Jan 09 14:33:09 crc kubenswrapper[4919]: I0109 14:33:09.705138 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4cczb/crc-debug-2qxvw"
Jan 09 14:33:09 crc kubenswrapper[4919]: I0109 14:33:09.812923 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-689n6\" (UniqueName: \"kubernetes.io/projected/401ec682-23bc-4eb9-b0d4-0a9e59a8c28e-kube-api-access-689n6\") pod \"401ec682-23bc-4eb9-b0d4-0a9e59a8c28e\" (UID: \"401ec682-23bc-4eb9-b0d4-0a9e59a8c28e\") "
Jan 09 14:33:09 crc kubenswrapper[4919]: I0109 14:33:09.813137 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/401ec682-23bc-4eb9-b0d4-0a9e59a8c28e-host\") pod \"401ec682-23bc-4eb9-b0d4-0a9e59a8c28e\" (UID: \"401ec682-23bc-4eb9-b0d4-0a9e59a8c28e\") "
Jan 09 14:33:09 crc kubenswrapper[4919]: I0109 14:33:09.813290 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/401ec682-23bc-4eb9-b0d4-0a9e59a8c28e-host" (OuterVolumeSpecName: "host") pod "401ec682-23bc-4eb9-b0d4-0a9e59a8c28e" (UID: "401ec682-23bc-4eb9-b0d4-0a9e59a8c28e"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 09 14:33:09 crc kubenswrapper[4919]: I0109 14:33:09.813830 4919 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/401ec682-23bc-4eb9-b0d4-0a9e59a8c28e-host\") on node \"crc\" DevicePath \"\""
Jan 09 14:33:09 crc kubenswrapper[4919]: I0109 14:33:09.818784 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/401ec682-23bc-4eb9-b0d4-0a9e59a8c28e-kube-api-access-689n6" (OuterVolumeSpecName: "kube-api-access-689n6") pod "401ec682-23bc-4eb9-b0d4-0a9e59a8c28e" (UID: "401ec682-23bc-4eb9-b0d4-0a9e59a8c28e"). InnerVolumeSpecName "kube-api-access-689n6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 14:33:09 crc kubenswrapper[4919]: I0109 14:33:09.916094 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-689n6\" (UniqueName: \"kubernetes.io/projected/401ec682-23bc-4eb9-b0d4-0a9e59a8c28e-kube-api-access-689n6\") on node \"crc\" DevicePath \"\""
Jan 09 14:33:10 crc kubenswrapper[4919]: I0109 14:33:10.573408 4919 scope.go:117] "RemoveContainer" containerID="608955673d36255a58bec2521cf85dce82bf54f66b811257df0060ed74272e08"
Jan 09 14:33:10 crc kubenswrapper[4919]: I0109 14:33:10.573443 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4cczb/crc-debug-2qxvw"
Jan 09 14:33:10 crc kubenswrapper[4919]: I0109 14:33:10.762359 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="401ec682-23bc-4eb9-b0d4-0a9e59a8c28e" path="/var/lib/kubelet/pods/401ec682-23bc-4eb9-b0d4-0a9e59a8c28e/volumes"
Jan 09 14:33:19 crc kubenswrapper[4919]: I0109 14:33:19.752030 4919 scope.go:117] "RemoveContainer" containerID="b0270c7592076cfaccb37df0ef0dacbcd42484b5f4acbc6f9946e5c994321a3e"
Jan 09 14:33:19 crc kubenswrapper[4919]: E0109 14:33:19.752697 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c"
Jan 09 14:33:28 crc kubenswrapper[4919]: I0109 14:33:28.787133 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-56f5497b64-ws7gk_f23efa08-cf06-4a61-a081-60b52efe8e8f/barbican-api/0.log"
Jan 09 14:33:28 crc kubenswrapper[4919]: I0109 14:33:28.969745 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-56f5497b64-ws7gk_f23efa08-cf06-4a61-a081-60b52efe8e8f/barbican-api-log/0.log"
Jan 09 14:33:29 crc kubenswrapper[4919]: I0109 14:33:29.041977 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5bc67fd74-frwbh_be2245b9-76ae-4599-ba6a-97e327453f95/barbican-keystone-listener/0.log"
Jan 09 14:33:29 crc kubenswrapper[4919]: I0109 14:33:29.043075 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5bc67fd74-frwbh_be2245b9-76ae-4599-ba6a-97e327453f95/barbican-keystone-listener-log/0.log"
Jan 09 14:33:29 crc kubenswrapper[4919]: I0109 14:33:29.211433 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5d7884df69-vfc9g_15fcc721-300d-4084-9fbe-756903a4f58b/barbican-worker/0.log"
Jan 09 14:33:29 crc kubenswrapper[4919]: I0109 14:33:29.262447 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5d7884df69-vfc9g_15fcc721-300d-4084-9fbe-756903a4f58b/barbican-worker-log/0.log"
Jan 09 14:33:29 crc kubenswrapper[4919]: I0109 14:33:29.403157 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-k54b2_2e1540e3-6358-48ae-ac2a-08e90ab54cbb/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 09 14:33:29 crc kubenswrapper[4919]: I0109 14:33:29.488200 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2c31d277-b08a-41e0-9f01-95ea17af82f4/ceilometer-central-agent/0.log"
Jan 09 14:33:29 crc kubenswrapper[4919]: I0109 14:33:29.596030 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2c31d277-b08a-41e0-9f01-95ea17af82f4/ceilometer-notification-agent/0.log"
Jan 09 14:33:29 crc kubenswrapper[4919]: I0109 14:33:29.626510 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2c31d277-b08a-41e0-9f01-95ea17af82f4/proxy-httpd/0.log"
Jan 09 14:33:29 crc kubenswrapper[4919]: I0109 14:33:29.776419 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2c31d277-b08a-41e0-9f01-95ea17af82f4/sg-core/0.log"
Jan 09 14:33:29 crc kubenswrapper[4919]: I0109 14:33:29.866485 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_0b8d4fb5-64a0-4774-8f0f-273c476d7b81/cinder-api-log/0.log"
Jan 09 14:33:29 crc kubenswrapper[4919]: I0109 14:33:29.900156 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_0b8d4fb5-64a0-4774-8f0f-273c476d7b81/cinder-api/0.log"
Jan 09 14:33:30 crc kubenswrapper[4919]: I0109 14:33:30.204538 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_9637b6f9-f7a2-4056-b9ae-87b4af7e475e/cinder-scheduler/0.log"
Jan 09 14:33:30 crc kubenswrapper[4919]: I0109 14:33:30.272052 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_9637b6f9-f7a2-4056-b9ae-87b4af7e475e/probe/0.log"
Jan 09 14:33:30 crc kubenswrapper[4919]: I0109 14:33:30.377980 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-m8pld_eb6e93ba-4c74-4250-b8c2-fc85d52d6d3c/configure-network-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 09 14:33:30 crc kubenswrapper[4919]: I0109 14:33:30.549323 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-vsw76_6dd14cc5-f2bf-43bc-b3e6-9704c2728708/configure-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 09 14:33:30 crc kubenswrapper[4919]: I0109 14:33:30.567914 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-d7b79b84c-mbtbk_35d091b1-8210-4d82-bde9-2b14bcfb8227/init/0.log"
Jan 09 14:33:30 crc kubenswrapper[4919]: I0109 14:33:30.759971 4919 scope.go:117] "RemoveContainer" containerID="b0270c7592076cfaccb37df0ef0dacbcd42484b5f4acbc6f9946e5c994321a3e"
Jan 09 14:33:30 crc kubenswrapper[4919]: E0109 14:33:30.761081 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c"
Jan 09 14:33:30 crc kubenswrapper[4919]: I0109 14:33:30.831038 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-d7b79b84c-mbtbk_35d091b1-8210-4d82-bde9-2b14bcfb8227/init/0.log"
Jan 09 14:33:30 crc kubenswrapper[4919]: I0109 14:33:30.865600 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-d7b79b84c-mbtbk_35d091b1-8210-4d82-bde9-2b14bcfb8227/dnsmasq-dns/0.log"
Jan 09 14:33:30 crc kubenswrapper[4919]: I0109 14:33:30.871067 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-gdkgc_3004c02a-530a-44c4-98b4-825dbb64296f/download-cache-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 09 14:33:31 crc kubenswrapper[4919]: I0109 14:33:31.097657 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_58571fe0-89fb-41ed-a3eb-b04d6224dd1d/glance-httpd/0.log"
Jan 09 14:33:31 crc kubenswrapper[4919]: I0109 14:33:31.116741 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_58571fe0-89fb-41ed-a3eb-b04d6224dd1d/glance-log/0.log"
Jan 09 14:33:31 crc kubenswrapper[4919]: I0109 14:33:31.337743 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_735040be-a013-45ef-a590-2819585ea47c/glance-httpd/0.log"
Jan 09 14:33:31 crc kubenswrapper[4919]: I0109 14:33:31.355865 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_735040be-a013-45ef-a590-2819585ea47c/glance-log/0.log"
Jan 09 14:33:31 crc kubenswrapper[4919]: I0109 14:33:31.425459 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-75dd96cc4d-xnspb_db2aeda5-21fd-4b61-bb59-d8d0b78884c2/horizon/0.log"
Jan 09 14:33:31 crc kubenswrapper[4919]: I0109 14:33:31.716418 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt_f7e5dde7-0e67-4c31-83c6-9946c5b23755/install-certs-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 09 14:33:31 crc kubenswrapper[4919]: I0109 14:33:31.893199 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-75dd96cc4d-xnspb_db2aeda5-21fd-4b61-bb59-d8d0b78884c2/horizon-log/0.log"
Jan 09 14:33:31 crc kubenswrapper[4919]: I0109 14:33:31.935246 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-b9fzw_d079d443-cf8c-47ff-96d9-a3fe59583ad8/install-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 09 14:33:32 crc kubenswrapper[4919]: I0109 14:33:32.116304 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-6575bd5545-2lr88_22246922-04ad-4013-a96a-71e00093dbed/keystone-api/0.log"
Jan 09 14:33:32 crc kubenswrapper[4919]: I0109 14:33:32.167249 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29466121-rx8p6_e8fd615e-ac5c-4caa-8eaf-5c99df3fa111/keystone-cron/0.log"
Jan 09 14:33:32 crc kubenswrapper[4919]: I0109 14:33:32.276600 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_3e1aa728-2078-4e6c-b738-0bc97b1f14ff/kube-state-metrics/0.log"
Jan 09 14:33:32 crc kubenswrapper[4919]: I0109 14:33:32.430860 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-k82m6_acecffca-8dfb-4702-851a-f8dfe2659e98/libvirt-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 09 14:33:32 crc kubenswrapper[4919]: I0109 14:33:32.827751 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-584b4bc589-6qnkd_b93b1e1b-72fa-443d-ba2c-e9c9920f918a/neutron-httpd/0.log"
Jan 09 14:33:32 crc kubenswrapper[4919]: I0109 14:33:32.893696 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-ptlld_e9770e19-27d5-49ff-a358-7f455b3e6d8e/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 09 14:33:32 crc kubenswrapper[4919]: I0109 14:33:32.898716 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-584b4bc589-6qnkd_b93b1e1b-72fa-443d-ba2c-e9c9920f918a/neutron-api/0.log"
Jan 09 14:33:33 crc kubenswrapper[4919]: I0109 14:33:33.487274 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_f2f3c3f3-07a5-44a8-a7da-0bf4b180c56a/nova-api-log/0.log"
Jan 09 14:33:33 crc kubenswrapper[4919]: I0109 14:33:33.659232 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_f2f3c3f3-07a5-44a8-a7da-0bf4b180c56a/nova-api-api/0.log"
Jan 09 14:33:33 crc kubenswrapper[4919]: I0109 14:33:33.664341 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_b339d912-b884-4fd0-8b93-c21c2b6ce58c/nova-cell0-conductor-conductor/0.log"
Jan 09 14:33:33 crc kubenswrapper[4919]: I0109 14:33:33.873190 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_8c9fed7c-6744-4cce-b80c-21ef4352ca7b/nova-cell1-conductor-conductor/0.log"
Jan 09 14:33:34 crc kubenswrapper[4919]: I0109 14:33:34.014551 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_4402784e-5d9b-4d52-86a8-57dc43cc2917/nova-cell1-novncproxy-novncproxy/0.log"
Jan 09 14:33:34 crc kubenswrapper[4919]: I0109 14:33:34.170080 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-v9kt9_cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1/nova-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 09 14:33:34 crc kubenswrapper[4919]: I0109 14:33:34.381799 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_10d389ef-fb74-406c-a1cb-8a591b708726/nova-metadata-log/0.log"
Jan 09 14:33:34 crc kubenswrapper[4919]: I0109 14:33:34.622352 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_c8b56a5e-6bc1-4366-87e6-81d8e4b8100b/nova-scheduler-scheduler/0.log"
Jan 09 14:33:34 crc kubenswrapper[4919]: I0109 14:33:34.693594 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a078e997-b08e-44a9-89a7-bf2fe9eaed11/mysql-bootstrap/0.log"
Jan 09 14:33:34 crc kubenswrapper[4919]: I0109 14:33:34.820898 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a078e997-b08e-44a9-89a7-bf2fe9eaed11/mysql-bootstrap/0.log"
Jan 09 14:33:34 crc kubenswrapper[4919]: I0109 14:33:34.896281 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a078e997-b08e-44a9-89a7-bf2fe9eaed11/galera/0.log"
Jan 09 14:33:35 crc kubenswrapper[4919]: I0109 14:33:35.982451 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_3d0c2080-b1ea-4ff9-ad51-d970cce81d56/mysql-bootstrap/0.log"
Jan 09 14:33:36 crc kubenswrapper[4919]: I0109 14:33:36.121981 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_3d0c2080-b1ea-4ff9-ad51-d970cce81d56/mysql-bootstrap/0.log"
Jan 09 14:33:36 crc kubenswrapper[4919]: I0109 14:33:36.204054 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_3d0c2080-b1ea-4ff9-ad51-d970cce81d56/galera/0.log"
Jan 09 14:33:36 crc kubenswrapper[4919]: I0109 14:33:36.325358 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_284d399b-7c07-4e99-9a95-32d600fab162/openstackclient/0.log"
Jan 09 14:33:36 crc kubenswrapper[4919]: I0109 14:33:36.499088 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_10d389ef-fb74-406c-a1cb-8a591b708726/nova-metadata-metadata/0.log"
Jan 09 14:33:36 crc kubenswrapper[4919]: I0109 14:33:36.564229 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-fdp27_9415e6b9-c9a5-4ed6-ab9e-ef42cfa1bbe6/openstack-network-exporter/0.log"
Jan 09 14:33:36 crc kubenswrapper[4919]: I0109 14:33:36.769684 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-n9g6d_088a3f18-0aab-4042-b674-752c23ed3ac3/ovn-controller/0.log"
4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rrsng_91789be0-3c6f-46d6-a222-d75d49e63662/ovsdb-server-init/0.log" Jan 09 14:33:37 crc kubenswrapper[4919]: I0109 14:33:37.043752 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rrsng_91789be0-3c6f-46d6-a222-d75d49e63662/ovs-vswitchd/0.log" Jan 09 14:33:37 crc kubenswrapper[4919]: I0109 14:33:37.084590 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rrsng_91789be0-3c6f-46d6-a222-d75d49e63662/ovsdb-server-init/0.log" Jan 09 14:33:37 crc kubenswrapper[4919]: I0109 14:33:37.102531 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rrsng_91789be0-3c6f-46d6-a222-d75d49e63662/ovsdb-server/0.log" Jan 09 14:33:37 crc kubenswrapper[4919]: I0109 14:33:37.522699 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-8nq44_527824ae-c763-4efc-ba39-1cd36664996f/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 14:33:37 crc kubenswrapper[4919]: I0109 14:33:37.535197 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_68449649-bcc2-41c2-9a6a-a91452a48282/ovn-northd/0.log" Jan 09 14:33:37 crc kubenswrapper[4919]: I0109 14:33:37.546683 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_68449649-bcc2-41c2-9a6a-a91452a48282/openstack-network-exporter/0.log" Jan 09 14:33:37 crc kubenswrapper[4919]: I0109 14:33:37.742803 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_80e0f01c-3e7c-456d-ae74-276ef085ff36/ovsdbserver-nb/0.log" Jan 09 14:33:37 crc kubenswrapper[4919]: I0109 14:33:37.840879 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_80e0f01c-3e7c-456d-ae74-276ef085ff36/openstack-network-exporter/0.log" Jan 09 14:33:37 crc kubenswrapper[4919]: I0109 14:33:37.989017 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_62681dab-a75d-4270-bb2f-c8f963838172/openstack-network-exporter/0.log" Jan 09 14:33:38 crc kubenswrapper[4919]: I0109 14:33:38.037342 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_62681dab-a75d-4270-bb2f-c8f963838172/ovsdbserver-sb/0.log" Jan 09 14:33:38 crc kubenswrapper[4919]: I0109 14:33:38.179680 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-74bbf9c4b-kjq9x_aafcf4ee-61ee-448a-91d4-d3b215b2c42e/placement-api/0.log" Jan 09 14:33:38 crc kubenswrapper[4919]: I0109 14:33:38.302135 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-74bbf9c4b-kjq9x_aafcf4ee-61ee-448a-91d4-d3b215b2c42e/placement-log/0.log" Jan 09 14:33:38 crc kubenswrapper[4919]: I0109 14:33:38.408907 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_196a3f64-983f-4369-93cf-9501a68ee8a4/setup-container/0.log" Jan 09 14:33:38 crc kubenswrapper[4919]: I0109 14:33:38.576677 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_196a3f64-983f-4369-93cf-9501a68ee8a4/rabbitmq/0.log" Jan 09 14:33:38 crc kubenswrapper[4919]: I0109 14:33:38.593389 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_196a3f64-983f-4369-93cf-9501a68ee8a4/setup-container/0.log" Jan 09 14:33:38 crc kubenswrapper[4919]: I0109 14:33:38.672872 
4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_7239a87a-aba2-4367-b1c3-2800f1a130d8/setup-container/0.log" Jan 09 14:33:38 crc kubenswrapper[4919]: I0109 14:33:38.852025 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_7239a87a-aba2-4367-b1c3-2800f1a130d8/setup-container/0.log" Jan 09 14:33:38 crc kubenswrapper[4919]: I0109 14:33:38.929773 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-jvfp8_781cfeb4-857a-490b-a97e-02bcadab1886/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 14:33:38 crc kubenswrapper[4919]: I0109 14:33:38.930786 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_7239a87a-aba2-4367-b1c3-2800f1a130d8/rabbitmq/0.log" Jan 09 14:33:39 crc kubenswrapper[4919]: I0109 14:33:39.178385 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-ghk4m_167890d2-4e03-4537-a339-d4efc3b64c54/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 14:33:39 crc kubenswrapper[4919]: I0109 14:33:39.259636 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-qkcmj_6ff771e7-314f-493f-b5e8-fe2eb503aa52/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 14:33:39 crc kubenswrapper[4919]: I0109 14:33:39.478927 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-crhq6_1e8137a4-0169-4f73-b616-6a0554aa426f/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 14:33:39 crc kubenswrapper[4919]: I0109 14:33:39.534165 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-jvwq2_a7fb05e2-9059-4447-8ed5-f125411a7fdc/ssh-known-hosts-edpm-deployment/0.log" Jan 09 14:33:39 crc kubenswrapper[4919]: I0109 14:33:39.772632 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5f95dfdc65-kz6rq_e09e5f52-5a74-4a7c-bd84-079835a21fec/proxy-server/0.log" Jan 09 14:33:39 crc kubenswrapper[4919]: I0109 14:33:39.908325 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5f95dfdc65-kz6rq_e09e5f52-5a74-4a7c-bd84-079835a21fec/proxy-httpd/0.log" Jan 09 14:33:39 crc kubenswrapper[4919]: I0109 14:33:39.943048 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-7lmg7_b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7/swift-ring-rebalance/0.log" Jan 09 14:33:40 crc kubenswrapper[4919]: I0109 14:33:40.053846 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f55583f6-0518-4977-89a9-e4f12b0eae89/account-auditor/0.log" Jan 09 14:33:40 crc kubenswrapper[4919]: I0109 14:33:40.152277 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f55583f6-0518-4977-89a9-e4f12b0eae89/account-reaper/0.log" Jan 09 14:33:40 crc kubenswrapper[4919]: I0109 14:33:40.162029 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f55583f6-0518-4977-89a9-e4f12b0eae89/account-replicator/0.log" Jan 09 14:33:40 crc kubenswrapper[4919]: I0109 14:33:40.289767 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f55583f6-0518-4977-89a9-e4f12b0eae89/account-server/0.log" Jan 09 14:33:40 crc kubenswrapper[4919]: I0109 14:33:40.337596 4919 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_swift-storage-0_f55583f6-0518-4977-89a9-e4f12b0eae89/container-auditor/0.log" Jan 09 14:33:40 crc kubenswrapper[4919]: I0109 14:33:40.404645 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f55583f6-0518-4977-89a9-e4f12b0eae89/container-replicator/0.log" Jan 09 14:33:40 crc kubenswrapper[4919]: I0109 14:33:40.467304 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f55583f6-0518-4977-89a9-e4f12b0eae89/container-server/0.log" Jan 09 14:33:40 crc kubenswrapper[4919]: I0109 14:33:40.538144 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f55583f6-0518-4977-89a9-e4f12b0eae89/object-auditor/0.log" Jan 09 14:33:40 crc kubenswrapper[4919]: I0109 14:33:40.572347 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f55583f6-0518-4977-89a9-e4f12b0eae89/container-updater/0.log" Jan 09 14:33:40 crc kubenswrapper[4919]: I0109 14:33:40.660767 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f55583f6-0518-4977-89a9-e4f12b0eae89/object-expirer/0.log" Jan 09 14:33:40 crc kubenswrapper[4919]: I0109 14:33:40.673176 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f55583f6-0518-4977-89a9-e4f12b0eae89/object-replicator/0.log" Jan 09 14:33:40 crc kubenswrapper[4919]: I0109 14:33:40.765450 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f55583f6-0518-4977-89a9-e4f12b0eae89/object-server/0.log" Jan 09 14:33:40 crc kubenswrapper[4919]: I0109 14:33:40.853663 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f55583f6-0518-4977-89a9-e4f12b0eae89/object-updater/0.log" Jan 09 14:33:40 crc kubenswrapper[4919]: I0109 14:33:40.923388 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f55583f6-0518-4977-89a9-e4f12b0eae89/rsync/0.log" Jan 09 14:33:40 crc kubenswrapper[4919]: I0109 14:33:40.941709 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f55583f6-0518-4977-89a9-e4f12b0eae89/swift-recon-cron/0.log" Jan 09 14:33:41 crc kubenswrapper[4919]: I0109 14:33:41.196507 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-k6ck6_1397ace9-1e0e-4acc-b043-3e1f13244746/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 14:33:41 crc kubenswrapper[4919]: I0109 14:33:41.236956 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_f53c17d7-be4d-4bcf-aea4-2617abf3d9ea/tempest-tests-tempest-tests-runner/0.log" Jan 09 14:33:41 crc kubenswrapper[4919]: I0109 14:33:41.479601 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_77aedacd-c1c9-4ee5-836d-69b929d4f842/test-operator-logs-container/0.log" Jan 09 14:33:41 crc kubenswrapper[4919]: I0109 14:33:41.513703 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-vv6m5_89e73a14-acf2-4c6b-94de-a8857e0cf22d/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 14:33:41 crc kubenswrapper[4919]: I0109 14:33:41.751420 4919 scope.go:117] "RemoveContainer" containerID="b0270c7592076cfaccb37df0ef0dacbcd42484b5f4acbc6f9946e5c994321a3e" Jan 09 14:33:41 crc kubenswrapper[4919]: E0109 
14:33:41.751700 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:33:49 crc kubenswrapper[4919]: I0109 14:33:49.203766 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_5d815ce9-0ae0-4cd4-a6e0-80dd85a1fbe8/memcached/0.log" Jan 09 14:33:54 crc kubenswrapper[4919]: I0109 14:33:54.752170 4919 scope.go:117] "RemoveContainer" containerID="b0270c7592076cfaccb37df0ef0dacbcd42484b5f4acbc6f9946e5c994321a3e" Jan 09 14:33:54 crc kubenswrapper[4919]: E0109 14:33:54.752831 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:34:05 crc kubenswrapper[4919]: I0109 14:34:05.522524 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6b8e21ee89ba424461ec6bbff87d7a975c12e9376e36d6291f2db7c043jx86n_b32f9373-7a38-42ed-8071-92865685e246/util/0.log" Jan 09 14:34:05 crc kubenswrapper[4919]: I0109 14:34:05.720667 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6b8e21ee89ba424461ec6bbff87d7a975c12e9376e36d6291f2db7c043jx86n_b32f9373-7a38-42ed-8071-92865685e246/util/0.log" Jan 09 14:34:05 crc kubenswrapper[4919]: I0109 14:34:05.722024 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6b8e21ee89ba424461ec6bbff87d7a975c12e9376e36d6291f2db7c043jx86n_b32f9373-7a38-42ed-8071-92865685e246/pull/0.log" Jan 09 14:34:05 crc kubenswrapper[4919]: I0109 14:34:05.769693 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6b8e21ee89ba424461ec6bbff87d7a975c12e9376e36d6291f2db7c043jx86n_b32f9373-7a38-42ed-8071-92865685e246/pull/0.log" Jan 09 14:34:05 crc kubenswrapper[4919]: I0109 14:34:05.924347 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6b8e21ee89ba424461ec6bbff87d7a975c12e9376e36d6291f2db7c043jx86n_b32f9373-7a38-42ed-8071-92865685e246/pull/0.log" Jan 09 14:34:05 crc kubenswrapper[4919]: I0109 14:34:05.930066 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6b8e21ee89ba424461ec6bbff87d7a975c12e9376e36d6291f2db7c043jx86n_b32f9373-7a38-42ed-8071-92865685e246/util/0.log" Jan 09 14:34:06 crc kubenswrapper[4919]: I0109 14:34:06.009028 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6b8e21ee89ba424461ec6bbff87d7a975c12e9376e36d6291f2db7c043jx86n_b32f9373-7a38-42ed-8071-92865685e246/extract/0.log" Jan 09 14:34:06 crc kubenswrapper[4919]: I0109 14:34:06.159603 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-f6f74d6db-h6cp9_d0081380-9d2e-40bb-8cc9-f124d4fbfd25/manager/0.log" Jan 09 14:34:06 crc kubenswrapper[4919]: I0109 14:34:06.251620 4919 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-78979fc445-m56bk_276f41de-c875-40be-816a-84eb02212fda/manager/0.log" Jan 09 14:34:06 crc kubenswrapper[4919]: I0109 14:34:06.355806 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-66f8b87655-wxt2z_b46937ef-2f83-4864-b0d4-5464ed82e1b8/manager/0.log" Jan 09 14:34:06 crc kubenswrapper[4919]: I0109 14:34:06.553851 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-658dd65b86-vvsj9_7635e70a-4259-4c43-91b7-eae6fc0d3c12/manager/0.log" Jan 09 14:34:06 crc kubenswrapper[4919]: I0109 14:34:06.554097 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7b549fc966-s46b7_7716ced4-dfb9-4a5c-936f-65edbf78f5dd/manager/0.log" Jan 09 14:34:06 crc kubenswrapper[4919]: I0109 14:34:06.718183 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-7f5ddd8d7b-f2drg_60feaa4f-ca73-4e59-a85f-c17132f8f708/manager/0.log" Jan 09 14:34:06 crc kubenswrapper[4919]: I0109 14:34:06.751664 4919 scope.go:117] "RemoveContainer" containerID="b0270c7592076cfaccb37df0ef0dacbcd42484b5f4acbc6f9946e5c994321a3e" Jan 09 14:34:06 crc kubenswrapper[4919]: E0109 14:34:06.752345 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:34:06 crc kubenswrapper[4919]: I0109 14:34:06.967459 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-f99f54bc8-4r7j8_4d08a973-3a9e-4098-95fd-d314d9f4e1af/manager/0.log" Jan 09 14:34:06 crc kubenswrapper[4919]: I0109 14:34:06.977405 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-6d99759cf-6s6wp_af1be546-436f-43ef-b748-22860362f61e/manager/0.log" Jan 09 14:34:07 crc kubenswrapper[4919]: I0109 14:34:07.122747 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-568985c78-r5j45_33efa14f-00b9-49b4-bc2a-5c0c13d60613/manager/0.log" Jan 09 14:34:07 crc kubenswrapper[4919]: I0109 14:34:07.179177 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-598945d5b8-cd2dq_53cc8efc-85ec-4ddf-82c5-c1db01fe8120/manager/0.log" Jan 09 14:34:07 crc kubenswrapper[4919]: I0109 14:34:07.423268 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-7b88bfc995-9bn9t_37ea4d3a-1d7d-47b2-8eee-1a7601c2de24/manager/0.log" Jan 09 14:34:07 crc kubenswrapper[4919]: I0109 14:34:07.498122 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7cd87b778f-jl5xm_55fe5bfd-cc48-498b-88f7-789a3048a743/manager/0.log" Jan 09 14:34:07 crc kubenswrapper[4919]: I0109 14:34:07.602832 4919 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-68c649d9d-4ppq5_2bf404b6-0f77-4a02-a45a-ad46980755cb/manager/0.log" Jan 09 14:34:07 crc kubenswrapper[4919]: I0109 14:34:07.676097 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5fbbf8b6cc-jl878_19ebcfcf-3a6a-4c2c-ab15-2239e08bca09/manager/0.log" Jan 09 14:34:07 crc kubenswrapper[4919]: I0109 14:34:07.803258 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-75f6ff484-ll94k_488f8708-4c49-429f-9697-a00b8fadd486/manager/0.log" Jan 09 14:34:08 crc kubenswrapper[4919]: I0109 14:34:08.440495 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-6rw4t_937fd694-383a-4377-a061-2c3711482e98/registry-server/0.log" Jan 09 14:34:08 crc kubenswrapper[4919]: I0109 14:34:08.564313 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-6954755664-nmm8h_2ebbd42e-c3b8-4e1c-b4ee-bf9316669667/operator/0.log" Jan 09 14:34:08 crc kubenswrapper[4919]: I0109 14:34:08.681816 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-bf6d4f946-wnwmg_7c5b2e5b-6474-46f3-861b-aba8d47c714b/manager/0.log" Jan 09 14:34:08 crc kubenswrapper[4919]: I0109 14:34:08.855704 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-9b6f8f78c-8kjrk_58f271ce-d537-4588-ba66-53f08136ee13/manager/0.log" Jan 09 14:34:08 crc kubenswrapper[4919]: I0109 14:34:08.997602 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-qg5n7_e9f24ed0-e850-4906-901d-b23777cf500f/operator/0.log" Jan 09 14:34:09 crc kubenswrapper[4919]: I0109 14:34:09.248469 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-bb586bbf4-47s64_782f359d-9941-4528-851a-4db3673cb439/manager/0.log" Jan 09 14:34:09 crc kubenswrapper[4919]: I0109 14:34:09.301072 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5fb94578dd-p4xfn_e77d7646-4198-42f3-ac22-f0974b18a0ab/manager/0.log" Jan 09 14:34:09 crc kubenswrapper[4919]: I0109 14:34:09.368442 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-68d988df55-wzww9_5bd72cd8-70f2-45ef-a451-8468e79eaca9/manager/0.log" Jan 09 14:34:09 crc kubenswrapper[4919]: I0109 14:34:09.486183 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-6c866cfdcb-84x8m_7c1ac56d-4f45-4102-8336-2cec59c44d9d/manager/0.log" Jan 09 14:34:09 crc kubenswrapper[4919]: I0109 14:34:09.550029 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-9dbdf6486-nk5sx_4f5bfa64-2b7e-4b30-aedc-56cd44f47032/manager/0.log" Jan 09 14:34:19 crc kubenswrapper[4919]: I0109 14:34:19.751676 4919 scope.go:117] "RemoveContainer" containerID="b0270c7592076cfaccb37df0ef0dacbcd42484b5f4acbc6f9946e5c994321a3e" Jan 09 14:34:19 crc kubenswrapper[4919]: E0109 14:34:19.752388 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:34:27 crc kubenswrapper[4919]: I0109 14:34:27.636003 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-twpss_5b27b30e-8a1e-4c12-ad5a-530c640bf23d/control-plane-machine-set-operator/0.log" Jan 09 14:34:27 crc kubenswrapper[4919]: I0109 14:34:27.844931 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-7lrzs_73189faa-e786-4c46-b23e-c9e58d6b0490/kube-rbac-proxy/0.log" Jan 09 14:34:27 crc kubenswrapper[4919]: I0109 14:34:27.854651 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-7lrzs_73189faa-e786-4c46-b23e-c9e58d6b0490/machine-api-operator/0.log" Jan 09 14:34:31 crc kubenswrapper[4919]: I0109 14:34:31.751671 4919 scope.go:117] "RemoveContainer" containerID="b0270c7592076cfaccb37df0ef0dacbcd42484b5f4acbc6f9946e5c994321a3e" Jan 09 14:34:31 crc kubenswrapper[4919]: E0109 14:34:31.752182 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:34:39 crc kubenswrapper[4919]: I0109 14:34:39.821378 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-ptg84_64fd850e-9282-4070-8467-aa5b8c498787/cert-manager-controller/0.log" Jan 09 14:34:39 crc kubenswrapper[4919]: I0109 14:34:39.953113 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-pnfgs_6afdfa72-d547-4051-9c95-fd83fd88ff93/cert-manager-webhook/0.log" Jan 09 14:34:39 crc kubenswrapper[4919]: I0109 14:34:39.957241 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-29hn2_51952aec-f115-4d09-a7f4-56dcc9f6222c/cert-manager-cainjector/0.log" Jan 09 14:34:45 crc kubenswrapper[4919]: I0109 14:34:45.753824 4919 scope.go:117] "RemoveContainer" containerID="b0270c7592076cfaccb37df0ef0dacbcd42484b5f4acbc6f9946e5c994321a3e" Jan 09 14:34:45 crc kubenswrapper[4919]: E0109 14:34:45.754699 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:34:52 crc kubenswrapper[4919]: I0109 14:34:52.108692 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6ff7998486-vh2fh_0964f707-3143-4f9c-a31c-ce8f14e1fd2f/nmstate-console-plugin/0.log" Jan 09 14:34:52 crc kubenswrapper[4919]: I0109 14:34:52.305231 4919 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-handler-9wzzm_9dd4fea4-6753-4012-a325-c7065f93a092/nmstate-handler/0.log" Jan 09 14:34:52 crc kubenswrapper[4919]: I0109 14:34:52.426709 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-hr7w5_91ddb4d0-422b-47f1-9279-fd2bef6bcd19/kube-rbac-proxy/0.log" Jan 09 14:34:52 crc kubenswrapper[4919]: I0109 14:34:52.435650 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-hr7w5_91ddb4d0-422b-47f1-9279-fd2bef6bcd19/nmstate-metrics/0.log" Jan 09 14:34:52 crc kubenswrapper[4919]: I0109 14:34:52.523868 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-6769fb99d-rqxgb_feaf998d-058f-4630-84eb-a1e5692b6c6b/nmstate-operator/0.log" Jan 09 14:34:52 crc kubenswrapper[4919]: I0109 14:34:52.654251 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-f8fb84555-v8957_5be7743f-eb29-453e-a4cb-58c25d8d24bd/nmstate-webhook/0.log" Jan 09 14:34:56 crc kubenswrapper[4919]: I0109 14:34:56.751694 4919 scope.go:117] "RemoveContainer" containerID="b0270c7592076cfaccb37df0ef0dacbcd42484b5f4acbc6f9946e5c994321a3e" Jan 09 14:34:56 crc kubenswrapper[4919]: E0109 14:34:56.753682 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:35:04 crc kubenswrapper[4919]: I0109 14:35:04.744950 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rgtqn"] Jan 09 14:35:04 crc kubenswrapper[4919]: E0109 14:35:04.746185 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="401ec682-23bc-4eb9-b0d4-0a9e59a8c28e" containerName="container-00" Jan 09 14:35:04 crc kubenswrapper[4919]: I0109 14:35:04.746200 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="401ec682-23bc-4eb9-b0d4-0a9e59a8c28e" containerName="container-00" Jan 09 14:35:04 crc kubenswrapper[4919]: I0109 14:35:04.746445 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="401ec682-23bc-4eb9-b0d4-0a9e59a8c28e" containerName="container-00" Jan 09 14:35:04 crc kubenswrapper[4919]: I0109 14:35:04.747909 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rgtqn" Jan 09 14:35:04 crc kubenswrapper[4919]: I0109 14:35:04.773548 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rgtqn"] Jan 09 14:35:04 crc kubenswrapper[4919]: I0109 14:35:04.902392 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7897e9d-860d-4930-a172-8b9ccdc05e0b-utilities\") pod \"certified-operators-rgtqn\" (UID: \"b7897e9d-860d-4930-a172-8b9ccdc05e0b\") " pod="openshift-marketplace/certified-operators-rgtqn" Jan 09 14:35:04 crc kubenswrapper[4919]: I0109 14:35:04.902503 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7897e9d-860d-4930-a172-8b9ccdc05e0b-catalog-content\") pod \"certified-operators-rgtqn\" (UID: \"b7897e9d-860d-4930-a172-8b9ccdc05e0b\") " pod="openshift-marketplace/certified-operators-rgtqn" Jan 09 14:35:04 crc kubenswrapper[4919]: I0109 14:35:04.902571 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td4x9\" (UniqueName: \"kubernetes.io/projected/b7897e9d-860d-4930-a172-8b9ccdc05e0b-kube-api-access-td4x9\") pod \"certified-operators-rgtqn\" (UID: \"b7897e9d-860d-4930-a172-8b9ccdc05e0b\") " pod="openshift-marketplace/certified-operators-rgtqn" Jan 09 14:35:05 crc kubenswrapper[4919]: I0109 14:35:05.006681 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7897e9d-860d-4930-a172-8b9ccdc05e0b-utilities\") pod \"certified-operators-rgtqn\" (UID: \"b7897e9d-860d-4930-a172-8b9ccdc05e0b\") " pod="openshift-marketplace/certified-operators-rgtqn" Jan 09 14:35:05 crc kubenswrapper[4919]: I0109 14:35:05.007084 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7897e9d-860d-4930-a172-8b9ccdc05e0b-catalog-content\") pod \"certified-operators-rgtqn\" (UID: \"b7897e9d-860d-4930-a172-8b9ccdc05e0b\") " pod="openshift-marketplace/certified-operators-rgtqn" Jan 09 14:35:05 crc kubenswrapper[4919]: I0109 14:35:05.007139 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-td4x9\" (UniqueName: \"kubernetes.io/projected/b7897e9d-860d-4930-a172-8b9ccdc05e0b-kube-api-access-td4x9\") pod \"certified-operators-rgtqn\" (UID: \"b7897e9d-860d-4930-a172-8b9ccdc05e0b\") " pod="openshift-marketplace/certified-operators-rgtqn" Jan 09 14:35:05 crc kubenswrapper[4919]: I0109 14:35:05.008051 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7897e9d-860d-4930-a172-8b9ccdc05e0b-utilities\") pod \"certified-operators-rgtqn\" (UID: \"b7897e9d-860d-4930-a172-8b9ccdc05e0b\") " pod="openshift-marketplace/certified-operators-rgtqn" Jan 09 14:35:05 crc kubenswrapper[4919]: I0109 14:35:05.008350 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7897e9d-860d-4930-a172-8b9ccdc05e0b-catalog-content\") pod \"certified-operators-rgtqn\" (UID: \"b7897e9d-860d-4930-a172-8b9ccdc05e0b\") " pod="openshift-marketplace/certified-operators-rgtqn" Jan 09 14:35:05 crc kubenswrapper[4919]: I0109 14:35:05.038019 4919 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-td4x9\" (UniqueName: \"kubernetes.io/projected/b7897e9d-860d-4930-a172-8b9ccdc05e0b-kube-api-access-td4x9\") pod \"certified-operators-rgtqn\" (UID: \"b7897e9d-860d-4930-a172-8b9ccdc05e0b\") " pod="openshift-marketplace/certified-operators-rgtqn" Jan 09 14:35:05 crc kubenswrapper[4919]: I0109 14:35:05.072179 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rgtqn" Jan 09 14:35:05 crc kubenswrapper[4919]: I0109 14:35:05.481338 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rgtqn"] Jan 09 14:35:05 crc kubenswrapper[4919]: I0109 14:35:05.668011 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rgtqn" event={"ID":"b7897e9d-860d-4930-a172-8b9ccdc05e0b","Type":"ContainerStarted","Data":"acae66843a0564101580d04928217185112f7ae3511b95d4da14be646058ab3a"} Jan 09 14:35:06 crc kubenswrapper[4919]: I0109 14:35:06.678954 4919 generic.go:334] "Generic (PLEG): container finished" podID="b7897e9d-860d-4930-a172-8b9ccdc05e0b" containerID="8571d16a61e1f0e28b70aa8b573269ee7a0350e67916d3d223e1924a4045daca" exitCode=0 Jan 09 14:35:06 crc kubenswrapper[4919]: I0109 14:35:06.679082 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rgtqn" event={"ID":"b7897e9d-860d-4930-a172-8b9ccdc05e0b","Type":"ContainerDied","Data":"8571d16a61e1f0e28b70aa8b573269ee7a0350e67916d3d223e1924a4045daca"} Jan 09 14:35:07 crc kubenswrapper[4919]: I0109 14:35:07.690087 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rgtqn" event={"ID":"b7897e9d-860d-4930-a172-8b9ccdc05e0b","Type":"ContainerStarted","Data":"97c402e34a046d984511c14f1fe350271f5ceba04f72e1d065368b0efa230687"} Jan 09 14:35:08 crc kubenswrapper[4919]: I0109 14:35:08.700662 4919 generic.go:334] "Generic (PLEG): container finished" podID="b7897e9d-860d-4930-a172-8b9ccdc05e0b" containerID="97c402e34a046d984511c14f1fe350271f5ceba04f72e1d065368b0efa230687" exitCode=0 Jan 09 14:35:08 crc kubenswrapper[4919]: I0109 14:35:08.700708 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rgtqn" event={"ID":"b7897e9d-860d-4930-a172-8b9ccdc05e0b","Type":"ContainerDied","Data":"97c402e34a046d984511c14f1fe350271f5ceba04f72e1d065368b0efa230687"} Jan 09 14:35:08 crc kubenswrapper[4919]: I0109 14:35:08.751761 4919 scope.go:117] "RemoveContainer" containerID="b0270c7592076cfaccb37df0ef0dacbcd42484b5f4acbc6f9946e5c994321a3e" Jan 09 14:35:08 crc kubenswrapper[4919]: E0109 14:35:08.752058 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:35:11 crc kubenswrapper[4919]: I0109 14:35:11.729928 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rgtqn" event={"ID":"b7897e9d-860d-4930-a172-8b9ccdc05e0b","Type":"ContainerStarted","Data":"c2b910e5fa3feb6e140e29a6ac49874d61a7c5dece0f080dbacfb0654934009e"} Jan 09 14:35:11 crc kubenswrapper[4919]: I0109 14:35:11.755788 4919 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rgtqn" podStartSLOduration=3.084520674 podStartE2EDuration="7.755763209s" podCreationTimestamp="2026-01-09 14:35:04 +0000 UTC" firstStartedPulling="2026-01-09 14:35:06.681132037 +0000 UTC m=+3886.228971487" lastFinishedPulling="2026-01-09 14:35:11.352374572 +0000 UTC m=+3890.900214022" observedRunningTime="2026-01-09 14:35:11.750755464 +0000 UTC m=+3891.298594914" watchObservedRunningTime="2026-01-09 14:35:11.755763209 +0000 UTC m=+3891.303602659" Jan 09 14:35:15 crc kubenswrapper[4919]: I0109 14:35:15.073239 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rgtqn" Jan 09 14:35:15 crc kubenswrapper[4919]: I0109 14:35:15.073857 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rgtqn" Jan 09 14:35:15 crc kubenswrapper[4919]: I0109 14:35:15.118770 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rgtqn" Jan 09 14:35:19 crc kubenswrapper[4919]: I0109 14:35:19.208073 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-grs8k_256aa53e-2a76-437e-ac55-a8766f9e5c00/kube-rbac-proxy/0.log" Jan 09 14:35:19 crc kubenswrapper[4919]: I0109 14:35:19.321352 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-grs8k_256aa53e-2a76-437e-ac55-a8766f9e5c00/controller/0.log" Jan 09 14:35:19 crc kubenswrapper[4919]: I0109 14:35:19.492871 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fwdhc_27cc21f5-c63b-4678-a1f6-6be9c13f32fc/cp-frr-files/0.log" Jan 09 14:35:19 crc kubenswrapper[4919]: I0109 14:35:19.698048 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fwdhc_27cc21f5-c63b-4678-a1f6-6be9c13f32fc/cp-frr-files/0.log" Jan 09 14:35:19 crc kubenswrapper[4919]: I0109 14:35:19.734279 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fwdhc_27cc21f5-c63b-4678-a1f6-6be9c13f32fc/cp-reloader/0.log" Jan 09 14:35:19 crc kubenswrapper[4919]: I0109 14:35:19.742050 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fwdhc_27cc21f5-c63b-4678-a1f6-6be9c13f32fc/cp-metrics/0.log" Jan 09 14:35:19 crc kubenswrapper[4919]: I0109 14:35:19.756323 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fwdhc_27cc21f5-c63b-4678-a1f6-6be9c13f32fc/cp-reloader/0.log" Jan 09 14:35:19 crc kubenswrapper[4919]: I0109 14:35:19.901845 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fwdhc_27cc21f5-c63b-4678-a1f6-6be9c13f32fc/cp-frr-files/0.log" Jan 09 14:35:19 crc kubenswrapper[4919]: I0109 14:35:19.945736 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fwdhc_27cc21f5-c63b-4678-a1f6-6be9c13f32fc/cp-reloader/0.log" Jan 09 14:35:19 crc kubenswrapper[4919]: I0109 14:35:19.949970 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fwdhc_27cc21f5-c63b-4678-a1f6-6be9c13f32fc/cp-metrics/0.log" Jan 09 14:35:19 crc kubenswrapper[4919]: I0109 14:35:19.964103 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fwdhc_27cc21f5-c63b-4678-a1f6-6be9c13f32fc/cp-metrics/0.log" Jan 09 14:35:20 crc kubenswrapper[4919]: I0109 
14:35:20.137496 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fwdhc_27cc21f5-c63b-4678-a1f6-6be9c13f32fc/cp-reloader/0.log" Jan 09 14:35:20 crc kubenswrapper[4919]: I0109 14:35:20.158448 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fwdhc_27cc21f5-c63b-4678-a1f6-6be9c13f32fc/cp-metrics/0.log" Jan 09 14:35:20 crc kubenswrapper[4919]: I0109 14:35:20.179523 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fwdhc_27cc21f5-c63b-4678-a1f6-6be9c13f32fc/cp-frr-files/0.log" Jan 09 14:35:20 crc kubenswrapper[4919]: I0109 14:35:20.180139 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fwdhc_27cc21f5-c63b-4678-a1f6-6be9c13f32fc/controller/0.log" Jan 09 14:35:20 crc kubenswrapper[4919]: I0109 14:35:20.370249 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fwdhc_27cc21f5-c63b-4678-a1f6-6be9c13f32fc/frr-metrics/0.log" Jan 09 14:35:20 crc kubenswrapper[4919]: I0109 14:35:20.386794 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fwdhc_27cc21f5-c63b-4678-a1f6-6be9c13f32fc/kube-rbac-proxy/0.log" Jan 09 14:35:20 crc kubenswrapper[4919]: I0109 14:35:20.406609 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fwdhc_27cc21f5-c63b-4678-a1f6-6be9c13f32fc/kube-rbac-proxy-frr/0.log" Jan 09 14:35:20 crc kubenswrapper[4919]: I0109 14:35:20.605090 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fwdhc_27cc21f5-c63b-4678-a1f6-6be9c13f32fc/reloader/0.log" Jan 09 14:35:20 crc kubenswrapper[4919]: I0109 14:35:20.610077 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7784b6fcf-wt2zf_9b452f91-af7c-48e8-b137-3c39a355305a/frr-k8s-webhook-server/0.log" Jan 09 14:35:20 crc kubenswrapper[4919]: I0109 14:35:20.890082 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5bdcf498b5-twbl9_df525dd0-f23f-4348-a4e0-4330e0d9ad91/manager/0.log" Jan 09 14:35:21 crc kubenswrapper[4919]: I0109 14:35:21.141878 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-56d5fdcf86-2jwkb_0cb9da00-2fea-4925-b3ef-c9438a2b5c18/webhook-server/0.log" Jan 09 14:35:21 crc kubenswrapper[4919]: I0109 14:35:21.165446 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-6kcvb_33ed1894-533c-4314-b01c-758a5c2eebf8/kube-rbac-proxy/0.log" Jan 09 14:35:21 crc kubenswrapper[4919]: I0109 14:35:21.764096 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-6kcvb_33ed1894-533c-4314-b01c-758a5c2eebf8/speaker/0.log" Jan 09 14:35:21 crc kubenswrapper[4919]: I0109 14:35:21.818166 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fwdhc_27cc21f5-c63b-4678-a1f6-6be9c13f32fc/frr/0.log" Jan 09 14:35:23 crc kubenswrapper[4919]: I0109 14:35:23.751825 4919 scope.go:117] "RemoveContainer" containerID="b0270c7592076cfaccb37df0ef0dacbcd42484b5f4acbc6f9946e5c994321a3e" Jan 09 14:35:23 crc kubenswrapper[4919]: E0109 14:35:23.752584 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:35:25 crc kubenswrapper[4919]: I0109 14:35:25.169416 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rgtqn" Jan 09 14:35:25 crc kubenswrapper[4919]: I0109 14:35:25.249721 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rgtqn"] Jan 09 14:35:25 crc kubenswrapper[4919]: I0109 14:35:25.856506 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rgtqn" podUID="b7897e9d-860d-4930-a172-8b9ccdc05e0b" containerName="registry-server" containerID="cri-o://c2b910e5fa3feb6e140e29a6ac49874d61a7c5dece0f080dbacfb0654934009e" gracePeriod=2 Jan 09 14:35:26 crc kubenswrapper[4919]: I0109 14:35:26.334785 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rgtqn" Jan 09 14:35:26 crc kubenswrapper[4919]: I0109 14:35:26.503493 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7897e9d-860d-4930-a172-8b9ccdc05e0b-utilities\") pod \"b7897e9d-860d-4930-a172-8b9ccdc05e0b\" (UID: \"b7897e9d-860d-4930-a172-8b9ccdc05e0b\") " Jan 09 14:35:26 crc kubenswrapper[4919]: I0109 14:35:26.504240 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7897e9d-860d-4930-a172-8b9ccdc05e0b-utilities" (OuterVolumeSpecName: "utilities") pod "b7897e9d-860d-4930-a172-8b9ccdc05e0b" (UID: "b7897e9d-860d-4930-a172-8b9ccdc05e0b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 14:35:26 crc kubenswrapper[4919]: I0109 14:35:26.504356 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-td4x9\" (UniqueName: \"kubernetes.io/projected/b7897e9d-860d-4930-a172-8b9ccdc05e0b-kube-api-access-td4x9\") pod \"b7897e9d-860d-4930-a172-8b9ccdc05e0b\" (UID: \"b7897e9d-860d-4930-a172-8b9ccdc05e0b\") " Jan 09 14:35:26 crc kubenswrapper[4919]: I0109 14:35:26.505222 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7897e9d-860d-4930-a172-8b9ccdc05e0b-catalog-content\") pod \"b7897e9d-860d-4930-a172-8b9ccdc05e0b\" (UID: \"b7897e9d-860d-4930-a172-8b9ccdc05e0b\") " Jan 09 14:35:26 crc kubenswrapper[4919]: I0109 14:35:26.505715 4919 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7897e9d-860d-4930-a172-8b9ccdc05e0b-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 14:35:26 crc kubenswrapper[4919]: I0109 14:35:26.519626 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7897e9d-860d-4930-a172-8b9ccdc05e0b-kube-api-access-td4x9" (OuterVolumeSpecName: "kube-api-access-td4x9") pod "b7897e9d-860d-4930-a172-8b9ccdc05e0b" (UID: "b7897e9d-860d-4930-a172-8b9ccdc05e0b"). InnerVolumeSpecName "kube-api-access-td4x9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 14:35:26 crc kubenswrapper[4919]: I0109 14:35:26.553636 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7897e9d-860d-4930-a172-8b9ccdc05e0b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b7897e9d-860d-4930-a172-8b9ccdc05e0b" (UID: "b7897e9d-860d-4930-a172-8b9ccdc05e0b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 14:35:26 crc kubenswrapper[4919]: I0109 14:35:26.608182 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-td4x9\" (UniqueName: \"kubernetes.io/projected/b7897e9d-860d-4930-a172-8b9ccdc05e0b-kube-api-access-td4x9\") on node \"crc\" DevicePath \"\"" Jan 09 14:35:26 crc kubenswrapper[4919]: I0109 14:35:26.608257 4919 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7897e9d-860d-4930-a172-8b9ccdc05e0b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 14:35:26 crc kubenswrapper[4919]: I0109 14:35:26.866781 4919 generic.go:334] "Generic (PLEG): container finished" podID="b7897e9d-860d-4930-a172-8b9ccdc05e0b" containerID="c2b910e5fa3feb6e140e29a6ac49874d61a7c5dece0f080dbacfb0654934009e" exitCode=0 Jan 09 14:35:26 crc kubenswrapper[4919]: I0109 14:35:26.866850 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rgtqn" event={"ID":"b7897e9d-860d-4930-a172-8b9ccdc05e0b","Type":"ContainerDied","Data":"c2b910e5fa3feb6e140e29a6ac49874d61a7c5dece0f080dbacfb0654934009e"} Jan 09 14:35:26 crc kubenswrapper[4919]: I0109 14:35:26.866887 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rgtqn" event={"ID":"b7897e9d-860d-4930-a172-8b9ccdc05e0b","Type":"ContainerDied","Data":"acae66843a0564101580d04928217185112f7ae3511b95d4da14be646058ab3a"} Jan 09 14:35:26 crc kubenswrapper[4919]: I0109 14:35:26.866909 4919 scope.go:117] "RemoveContainer" containerID="c2b910e5fa3feb6e140e29a6ac49874d61a7c5dece0f080dbacfb0654934009e" Jan 09 14:35:26 crc kubenswrapper[4919]: I0109 14:35:26.866949 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rgtqn" Jan 09 14:35:26 crc kubenswrapper[4919]: I0109 14:35:26.899227 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rgtqn"] Jan 09 14:35:26 crc kubenswrapper[4919]: I0109 14:35:26.900287 4919 scope.go:117] "RemoveContainer" containerID="97c402e34a046d984511c14f1fe350271f5ceba04f72e1d065368b0efa230687" Jan 09 14:35:26 crc kubenswrapper[4919]: I0109 14:35:26.909099 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rgtqn"] Jan 09 14:35:26 crc kubenswrapper[4919]: I0109 14:35:26.925967 4919 scope.go:117] "RemoveContainer" containerID="8571d16a61e1f0e28b70aa8b573269ee7a0350e67916d3d223e1924a4045daca" Jan 09 14:35:26 crc kubenswrapper[4919]: I0109 14:35:26.980205 4919 scope.go:117] "RemoveContainer" containerID="c2b910e5fa3feb6e140e29a6ac49874d61a7c5dece0f080dbacfb0654934009e" Jan 09 14:35:26 crc kubenswrapper[4919]: E0109 14:35:26.980712 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2b910e5fa3feb6e140e29a6ac49874d61a7c5dece0f080dbacfb0654934009e\": container with ID starting with c2b910e5fa3feb6e140e29a6ac49874d61a7c5dece0f080dbacfb0654934009e not found: ID does not exist" containerID="c2b910e5fa3feb6e140e29a6ac49874d61a7c5dece0f080dbacfb0654934009e" Jan 09 14:35:26 crc kubenswrapper[4919]: I0109 14:35:26.980745 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2b910e5fa3feb6e140e29a6ac49874d61a7c5dece0f080dbacfb0654934009e"} err="failed to get container status \"c2b910e5fa3feb6e140e29a6ac49874d61a7c5dece0f080dbacfb0654934009e\": rpc error: code = NotFound desc = could not find container \"c2b910e5fa3feb6e140e29a6ac49874d61a7c5dece0f080dbacfb0654934009e\": container with ID starting with c2b910e5fa3feb6e140e29a6ac49874d61a7c5dece0f080dbacfb0654934009e not found: ID does not exist" Jan 09 14:35:26 crc kubenswrapper[4919]: I0109 14:35:26.980768 4919 scope.go:117] "RemoveContainer" containerID="97c402e34a046d984511c14f1fe350271f5ceba04f72e1d065368b0efa230687" Jan 09 14:35:26 crc kubenswrapper[4919]: E0109 14:35:26.981268 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97c402e34a046d984511c14f1fe350271f5ceba04f72e1d065368b0efa230687\": container with ID starting with 97c402e34a046d984511c14f1fe350271f5ceba04f72e1d065368b0efa230687 not found: ID does not exist" containerID="97c402e34a046d984511c14f1fe350271f5ceba04f72e1d065368b0efa230687" Jan 09 14:35:26 crc kubenswrapper[4919]: I0109 14:35:26.981288 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97c402e34a046d984511c14f1fe350271f5ceba04f72e1d065368b0efa230687"} err="failed to get container status \"97c402e34a046d984511c14f1fe350271f5ceba04f72e1d065368b0efa230687\": rpc error: code = NotFound desc = could not find container \"97c402e34a046d984511c14f1fe350271f5ceba04f72e1d065368b0efa230687\": container with ID starting with 97c402e34a046d984511c14f1fe350271f5ceba04f72e1d065368b0efa230687 not found: ID does not exist" Jan 09 14:35:26 crc kubenswrapper[4919]: I0109 14:35:26.981300 4919 scope.go:117] "RemoveContainer" containerID="8571d16a61e1f0e28b70aa8b573269ee7a0350e67916d3d223e1924a4045daca" Jan 09 14:35:26 crc kubenswrapper[4919]: E0109 14:35:26.981560 4919 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"8571d16a61e1f0e28b70aa8b573269ee7a0350e67916d3d223e1924a4045daca\": container with ID starting with 8571d16a61e1f0e28b70aa8b573269ee7a0350e67916d3d223e1924a4045daca not found: ID does not exist" containerID="8571d16a61e1f0e28b70aa8b573269ee7a0350e67916d3d223e1924a4045daca" Jan 09 14:35:26 crc kubenswrapper[4919]: I0109 14:35:26.981581 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8571d16a61e1f0e28b70aa8b573269ee7a0350e67916d3d223e1924a4045daca"} err="failed to get container status \"8571d16a61e1f0e28b70aa8b573269ee7a0350e67916d3d223e1924a4045daca\": rpc error: code = NotFound desc = could not find container \"8571d16a61e1f0e28b70aa8b573269ee7a0350e67916d3d223e1924a4045daca\": container with ID starting with 8571d16a61e1f0e28b70aa8b573269ee7a0350e67916d3d223e1924a4045daca not found: ID does not exist" Jan 09 14:35:28 crc kubenswrapper[4919]: I0109 14:35:28.764225 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7897e9d-860d-4930-a172-8b9ccdc05e0b" path="/var/lib/kubelet/pods/b7897e9d-860d-4930-a172-8b9ccdc05e0b/volumes" Jan 09 14:35:36 crc kubenswrapper[4919]: I0109 14:35:36.606158 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4h92z5_ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492/util/0.log" Jan 09 14:35:36 crc kubenswrapper[4919]: I0109 14:35:36.709741 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4h92z5_ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492/util/0.log" Jan 09 14:35:36 crc kubenswrapper[4919]: I0109 14:35:36.815990 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4h92z5_ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492/pull/0.log" Jan 09 14:35:36 crc kubenswrapper[4919]: I0109 14:35:36.848633 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4h92z5_ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492/pull/0.log" Jan 09 14:35:37 crc kubenswrapper[4919]: I0109 14:35:37.014327 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4h92z5_ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492/util/0.log" Jan 09 14:35:37 crc kubenswrapper[4919]: I0109 14:35:37.059854 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4h92z5_ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492/pull/0.log" Jan 09 14:35:37 crc kubenswrapper[4919]: I0109 14:35:37.109734 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4h92z5_ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492/extract/0.log" Jan 09 14:35:37 crc kubenswrapper[4919]: I0109 14:35:37.222960 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5_b19ed9ae-a65d-4d84-ba74-e2055655c7b8/util/0.log" Jan 09 14:35:37 crc kubenswrapper[4919]: I0109 14:35:37.390480 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5_b19ed9ae-a65d-4d84-ba74-e2055655c7b8/util/0.log" 
Jan 09 14:35:37 crc kubenswrapper[4919]: I0109 14:35:37.443489 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5_b19ed9ae-a65d-4d84-ba74-e2055655c7b8/pull/0.log" Jan 09 14:35:37 crc kubenswrapper[4919]: I0109 14:35:37.470791 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5_b19ed9ae-a65d-4d84-ba74-e2055655c7b8/pull/0.log" Jan 09 14:35:37 crc kubenswrapper[4919]: I0109 14:35:37.656656 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5_b19ed9ae-a65d-4d84-ba74-e2055655c7b8/pull/0.log" Jan 09 14:35:37 crc kubenswrapper[4919]: I0109 14:35:37.658164 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5_b19ed9ae-a65d-4d84-ba74-e2055655c7b8/util/0.log" Jan 09 14:35:37 crc kubenswrapper[4919]: I0109 14:35:37.676811 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5_b19ed9ae-a65d-4d84-ba74-e2055655c7b8/extract/0.log" Jan 09 14:35:37 crc kubenswrapper[4919]: I0109 14:35:37.751952 4919 scope.go:117] "RemoveContainer" containerID="b0270c7592076cfaccb37df0ef0dacbcd42484b5f4acbc6f9946e5c994321a3e" Jan 09 14:35:37 crc kubenswrapper[4919]: E0109 14:35:37.752231 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:35:37 crc kubenswrapper[4919]: I0109 14:35:37.848148 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9lhnw_92de8a52-6be3-4b9d-9f02-337282f2cc79/extract-utilities/0.log" Jan 09 14:35:38 crc kubenswrapper[4919]: I0109 14:35:38.016258 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9lhnw_92de8a52-6be3-4b9d-9f02-337282f2cc79/extract-content/0.log" Jan 09 14:35:38 crc kubenswrapper[4919]: I0109 14:35:38.048354 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9lhnw_92de8a52-6be3-4b9d-9f02-337282f2cc79/extract-content/0.log" Jan 09 14:35:38 crc kubenswrapper[4919]: I0109 14:35:38.053374 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9lhnw_92de8a52-6be3-4b9d-9f02-337282f2cc79/extract-utilities/0.log" Jan 09 14:35:38 crc kubenswrapper[4919]: I0109 14:35:38.227030 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9lhnw_92de8a52-6be3-4b9d-9f02-337282f2cc79/extract-content/0.log" Jan 09 14:35:38 crc kubenswrapper[4919]: I0109 14:35:38.234021 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9lhnw_92de8a52-6be3-4b9d-9f02-337282f2cc79/extract-utilities/0.log" Jan 09 14:35:38 crc kubenswrapper[4919]: I0109 14:35:38.407471 4919 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-nv246_678068f7-bf03-493b-85f3-b52db3ea6770/extract-utilities/0.log" Jan 09 14:35:38 crc kubenswrapper[4919]: I0109 14:35:38.700253 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nv246_678068f7-bf03-493b-85f3-b52db3ea6770/extract-utilities/0.log" Jan 09 14:35:38 crc kubenswrapper[4919]: I0109 14:35:38.784435 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nv246_678068f7-bf03-493b-85f3-b52db3ea6770/extract-content/0.log" Jan 09 14:35:38 crc kubenswrapper[4919]: I0109 14:35:38.797592 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nv246_678068f7-bf03-493b-85f3-b52db3ea6770/extract-content/0.log" Jan 09 14:35:38 crc kubenswrapper[4919]: I0109 14:35:38.915961 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9lhnw_92de8a52-6be3-4b9d-9f02-337282f2cc79/registry-server/0.log" Jan 09 14:35:38 crc kubenswrapper[4919]: I0109 14:35:38.949039 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nv246_678068f7-bf03-493b-85f3-b52db3ea6770/extract-utilities/0.log" Jan 09 14:35:38 crc kubenswrapper[4919]: I0109 14:35:38.979906 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nv246_678068f7-bf03-493b-85f3-b52db3ea6770/extract-content/0.log" Jan 09 14:35:39 crc kubenswrapper[4919]: I0109 14:35:39.184761 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-46q7s_c1290e54-d4c8-4911-a121-762fffa39a66/marketplace-operator/0.log" Jan 09 14:35:39 crc kubenswrapper[4919]: I0109 14:35:39.409420 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-ktjzh_d97889ab-f1bb-4d3c-bf02-c037c00ae3e6/extract-utilities/0.log" Jan 09 14:35:39 crc kubenswrapper[4919]: I0109 14:35:39.502520 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nv246_678068f7-bf03-493b-85f3-b52db3ea6770/registry-server/0.log" Jan 09 14:35:39 crc kubenswrapper[4919]: I0109 14:35:39.649478 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-ktjzh_d97889ab-f1bb-4d3c-bf02-c037c00ae3e6/extract-utilities/0.log" Jan 09 14:35:39 crc kubenswrapper[4919]: I0109 14:35:39.683319 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-ktjzh_d97889ab-f1bb-4d3c-bf02-c037c00ae3e6/extract-content/0.log" Jan 09 14:35:39 crc kubenswrapper[4919]: I0109 14:35:39.694601 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-ktjzh_d97889ab-f1bb-4d3c-bf02-c037c00ae3e6/extract-content/0.log" Jan 09 14:35:39 crc kubenswrapper[4919]: I0109 14:35:39.827418 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-ktjzh_d97889ab-f1bb-4d3c-bf02-c037c00ae3e6/extract-utilities/0.log" Jan 09 14:35:39 crc kubenswrapper[4919]: I0109 14:35:39.850512 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-ktjzh_d97889ab-f1bb-4d3c-bf02-c037c00ae3e6/extract-content/0.log" Jan 09 14:35:40 crc kubenswrapper[4919]: I0109 14:35:40.018727 4919 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-ktjzh_d97889ab-f1bb-4d3c-bf02-c037c00ae3e6/registry-server/0.log" Jan 09 14:35:40 crc kubenswrapper[4919]: I0109 14:35:40.055303 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zp794_03961396-0471-4105-a027-ac6ae244d150/extract-utilities/0.log" Jan 09 14:35:40 crc kubenswrapper[4919]: I0109 14:35:40.195544 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zp794_03961396-0471-4105-a027-ac6ae244d150/extract-content/0.log" Jan 09 14:35:40 crc kubenswrapper[4919]: I0109 14:35:40.205929 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zp794_03961396-0471-4105-a027-ac6ae244d150/extract-utilities/0.log" Jan 09 14:35:40 crc kubenswrapper[4919]: I0109 14:35:40.243890 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zp794_03961396-0471-4105-a027-ac6ae244d150/extract-content/0.log" Jan 09 14:35:40 crc kubenswrapper[4919]: I0109 14:35:40.404354 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zp794_03961396-0471-4105-a027-ac6ae244d150/extract-utilities/0.log" Jan 09 14:35:40 crc kubenswrapper[4919]: I0109 14:35:40.504277 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zp794_03961396-0471-4105-a027-ac6ae244d150/extract-content/0.log" Jan 09 14:35:41 crc kubenswrapper[4919]: I0109 14:35:41.069089 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zp794_03961396-0471-4105-a027-ac6ae244d150/registry-server/0.log" Jan 09 14:35:50 crc kubenswrapper[4919]: I0109 14:35:50.762642 4919 scope.go:117] "RemoveContainer" containerID="b0270c7592076cfaccb37df0ef0dacbcd42484b5f4acbc6f9946e5c994321a3e" Jan 09 14:35:50 crc kubenswrapper[4919]: E0109 14:35:50.763412 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:36:05 crc kubenswrapper[4919]: I0109 14:36:05.752264 4919 scope.go:117] "RemoveContainer" containerID="b0270c7592076cfaccb37df0ef0dacbcd42484b5f4acbc6f9946e5c994321a3e" Jan 09 14:36:05 crc kubenswrapper[4919]: E0109 14:36:05.753032 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:36:20 crc kubenswrapper[4919]: I0109 14:36:20.759025 4919 scope.go:117] "RemoveContainer" containerID="b0270c7592076cfaccb37df0ef0dacbcd42484b5f4acbc6f9946e5c994321a3e" Jan 09 14:36:20 crc kubenswrapper[4919]: E0109 14:36:20.759740 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:36:21 crc kubenswrapper[4919]: I0109 14:36:21.446772 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hnqsm"] Jan 09 14:36:21 crc kubenswrapper[4919]: E0109 14:36:21.447420 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7897e9d-860d-4930-a172-8b9ccdc05e0b" containerName="extract-utilities" Jan 09 14:36:21 crc kubenswrapper[4919]: I0109 14:36:21.447436 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7897e9d-860d-4930-a172-8b9ccdc05e0b" containerName="extract-utilities" Jan 09 14:36:21 crc kubenswrapper[4919]: E0109 14:36:21.447476 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7897e9d-860d-4930-a172-8b9ccdc05e0b" containerName="extract-content" Jan 09 14:36:21 crc kubenswrapper[4919]: I0109 14:36:21.447482 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7897e9d-860d-4930-a172-8b9ccdc05e0b" containerName="extract-content" Jan 09 14:36:21 crc kubenswrapper[4919]: E0109 14:36:21.447496 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7897e9d-860d-4930-a172-8b9ccdc05e0b" containerName="registry-server" Jan 09 14:36:21 crc kubenswrapper[4919]: I0109 14:36:21.447502 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7897e9d-860d-4930-a172-8b9ccdc05e0b" containerName="registry-server" Jan 09 14:36:21 crc kubenswrapper[4919]: I0109 14:36:21.447692 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7897e9d-860d-4930-a172-8b9ccdc05e0b" containerName="registry-server" Jan 09 14:36:21 crc kubenswrapper[4919]: I0109 14:36:21.449059 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hnqsm" Jan 09 14:36:21 crc kubenswrapper[4919]: I0109 14:36:21.456239 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hnqsm"] Jan 09 14:36:21 crc kubenswrapper[4919]: I0109 14:36:21.590354 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3036031-f449-4d1a-b01f-49c322787f0a-utilities\") pod \"redhat-operators-hnqsm\" (UID: \"a3036031-f449-4d1a-b01f-49c322787f0a\") " pod="openshift-marketplace/redhat-operators-hnqsm" Jan 09 14:36:21 crc kubenswrapper[4919]: I0109 14:36:21.590450 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3036031-f449-4d1a-b01f-49c322787f0a-catalog-content\") pod \"redhat-operators-hnqsm\" (UID: \"a3036031-f449-4d1a-b01f-49c322787f0a\") " pod="openshift-marketplace/redhat-operators-hnqsm" Jan 09 14:36:21 crc kubenswrapper[4919]: I0109 14:36:21.590570 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk8qg\" (UniqueName: \"kubernetes.io/projected/a3036031-f449-4d1a-b01f-49c322787f0a-kube-api-access-kk8qg\") pod \"redhat-operators-hnqsm\" (UID: \"a3036031-f449-4d1a-b01f-49c322787f0a\") " pod="openshift-marketplace/redhat-operators-hnqsm" Jan 09 14:36:21 crc kubenswrapper[4919]: I0109 14:36:21.692839 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3036031-f449-4d1a-b01f-49c322787f0a-utilities\") pod \"redhat-operators-hnqsm\" (UID: \"a3036031-f449-4d1a-b01f-49c322787f0a\") " pod="openshift-marketplace/redhat-operators-hnqsm" Jan 09 14:36:21 crc kubenswrapper[4919]: I0109 14:36:21.693239 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3036031-f449-4d1a-b01f-49c322787f0a-catalog-content\") pod \"redhat-operators-hnqsm\" (UID: \"a3036031-f449-4d1a-b01f-49c322787f0a\") " pod="openshift-marketplace/redhat-operators-hnqsm" Jan 09 14:36:21 crc kubenswrapper[4919]: I0109 14:36:21.693409 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kk8qg\" (UniqueName: \"kubernetes.io/projected/a3036031-f449-4d1a-b01f-49c322787f0a-kube-api-access-kk8qg\") pod \"redhat-operators-hnqsm\" (UID: \"a3036031-f449-4d1a-b01f-49c322787f0a\") " pod="openshift-marketplace/redhat-operators-hnqsm" Jan 09 14:36:21 crc kubenswrapper[4919]: I0109 14:36:21.693626 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3036031-f449-4d1a-b01f-49c322787f0a-utilities\") pod \"redhat-operators-hnqsm\" (UID: \"a3036031-f449-4d1a-b01f-49c322787f0a\") " pod="openshift-marketplace/redhat-operators-hnqsm" Jan 09 14:36:21 crc kubenswrapper[4919]: I0109 14:36:21.693709 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3036031-f449-4d1a-b01f-49c322787f0a-catalog-content\") pod \"redhat-operators-hnqsm\" (UID: \"a3036031-f449-4d1a-b01f-49c322787f0a\") " pod="openshift-marketplace/redhat-operators-hnqsm" Jan 09 14:36:21 crc kubenswrapper[4919]: I0109 14:36:21.722479 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-kk8qg\" (UniqueName: \"kubernetes.io/projected/a3036031-f449-4d1a-b01f-49c322787f0a-kube-api-access-kk8qg\") pod \"redhat-operators-hnqsm\" (UID: \"a3036031-f449-4d1a-b01f-49c322787f0a\") " pod="openshift-marketplace/redhat-operators-hnqsm" Jan 09 14:36:21 crc kubenswrapper[4919]: I0109 14:36:21.773008 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hnqsm" Jan 09 14:36:22 crc kubenswrapper[4919]: I0109 14:36:22.300111 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hnqsm"] Jan 09 14:36:22 crc kubenswrapper[4919]: I0109 14:36:22.381524 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hnqsm" event={"ID":"a3036031-f449-4d1a-b01f-49c322787f0a","Type":"ContainerStarted","Data":"e42baa1055ab560a0c5ede33afa58c0cf0f098a35617cab52554313bf0a141c2"} Jan 09 14:36:23 crc kubenswrapper[4919]: I0109 14:36:23.392858 4919 generic.go:334] "Generic (PLEG): container finished" podID="a3036031-f449-4d1a-b01f-49c322787f0a" containerID="ddb0a494bc16bb0c71ba5f0f8a0f5a75f9e330368e8b1d481829b29ca3a8bb4a" exitCode=0 Jan 09 14:36:23 crc kubenswrapper[4919]: I0109 14:36:23.393087 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hnqsm" event={"ID":"a3036031-f449-4d1a-b01f-49c322787f0a","Type":"ContainerDied","Data":"ddb0a494bc16bb0c71ba5f0f8a0f5a75f9e330368e8b1d481829b29ca3a8bb4a"} Jan 09 14:36:23 crc kubenswrapper[4919]: I0109 14:36:23.396021 4919 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 09 14:36:25 crc kubenswrapper[4919]: I0109 14:36:25.413332 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hnqsm" event={"ID":"a3036031-f449-4d1a-b01f-49c322787f0a","Type":"ContainerStarted","Data":"6ac756de5c4d0ca7cae34f41a9bbb1d055acfa77953dac0f721f46d3e5f36d8e"} Jan 09 14:36:26 crc kubenswrapper[4919]: I0109 14:36:26.436336 4919 generic.go:334] "Generic (PLEG): container finished" podID="a3036031-f449-4d1a-b01f-49c322787f0a" containerID="6ac756de5c4d0ca7cae34f41a9bbb1d055acfa77953dac0f721f46d3e5f36d8e" exitCode=0 Jan 09 14:36:26 crc kubenswrapper[4919]: I0109 14:36:26.436450 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hnqsm" event={"ID":"a3036031-f449-4d1a-b01f-49c322787f0a","Type":"ContainerDied","Data":"6ac756de5c4d0ca7cae34f41a9bbb1d055acfa77953dac0f721f46d3e5f36d8e"} Jan 09 14:36:27 crc kubenswrapper[4919]: I0109 14:36:27.449405 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hnqsm" event={"ID":"a3036031-f449-4d1a-b01f-49c322787f0a","Type":"ContainerStarted","Data":"495dcb7ab02b946b8806fb6d39026339b508e6dbfa43d18bf0b3fcd50715cd98"} Jan 09 14:36:27 crc kubenswrapper[4919]: I0109 14:36:27.471044 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hnqsm" podStartSLOduration=2.784767971 podStartE2EDuration="6.471020562s" podCreationTimestamp="2026-01-09 14:36:21 +0000 UTC" firstStartedPulling="2026-01-09 14:36:23.395793084 +0000 UTC m=+3962.943632534" lastFinishedPulling="2026-01-09 14:36:27.082045675 +0000 UTC m=+3966.629885125" observedRunningTime="2026-01-09 14:36:27.467148376 +0000 UTC m=+3967.014987826" watchObservedRunningTime="2026-01-09 14:36:27.471020562 +0000 UTC m=+3967.018860022" Jan 09 14:36:31 crc 
kubenswrapper[4919]: I0109 14:36:31.774564 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hnqsm" Jan 09 14:36:31 crc kubenswrapper[4919]: I0109 14:36:31.775173 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hnqsm" Jan 09 14:36:32 crc kubenswrapper[4919]: I0109 14:36:32.830901 4919 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hnqsm" podUID="a3036031-f449-4d1a-b01f-49c322787f0a" containerName="registry-server" probeResult="failure" output=< Jan 09 14:36:32 crc kubenswrapper[4919]: timeout: failed to connect service ":50051" within 1s Jan 09 14:36:32 crc kubenswrapper[4919]: > Jan 09 14:36:34 crc kubenswrapper[4919]: I0109 14:36:34.752484 4919 scope.go:117] "RemoveContainer" containerID="b0270c7592076cfaccb37df0ef0dacbcd42484b5f4acbc6f9946e5c994321a3e" Jan 09 14:36:35 crc kubenswrapper[4919]: I0109 14:36:35.530939 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" event={"ID":"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c","Type":"ContainerStarted","Data":"456c10831c34b2c9f72f13b4cefc21b45edbed334ca26a0379edf4ef17a9749a"} Jan 09 14:36:42 crc kubenswrapper[4919]: I0109 14:36:42.228980 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hnqsm" Jan 09 14:36:42 crc kubenswrapper[4919]: I0109 14:36:42.282291 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hnqsm" Jan 09 14:36:42 crc kubenswrapper[4919]: I0109 14:36:42.470879 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hnqsm"] Jan 09 14:36:43 crc kubenswrapper[4919]: I0109 14:36:43.612049 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hnqsm" podUID="a3036031-f449-4d1a-b01f-49c322787f0a" containerName="registry-server" containerID="cri-o://495dcb7ab02b946b8806fb6d39026339b508e6dbfa43d18bf0b3fcd50715cd98" gracePeriod=2 Jan 09 14:36:44 crc kubenswrapper[4919]: I0109 14:36:44.623663 4919 generic.go:334] "Generic (PLEG): container finished" podID="a3036031-f449-4d1a-b01f-49c322787f0a" containerID="495dcb7ab02b946b8806fb6d39026339b508e6dbfa43d18bf0b3fcd50715cd98" exitCode=0 Jan 09 14:36:44 crc kubenswrapper[4919]: I0109 14:36:44.623719 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hnqsm" event={"ID":"a3036031-f449-4d1a-b01f-49c322787f0a","Type":"ContainerDied","Data":"495dcb7ab02b946b8806fb6d39026339b508e6dbfa43d18bf0b3fcd50715cd98"} Jan 09 14:36:44 crc kubenswrapper[4919]: I0109 14:36:44.623757 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hnqsm" event={"ID":"a3036031-f449-4d1a-b01f-49c322787f0a","Type":"ContainerDied","Data":"e42baa1055ab560a0c5ede33afa58c0cf0f098a35617cab52554313bf0a141c2"} Jan 09 14:36:44 crc kubenswrapper[4919]: I0109 14:36:44.623773 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e42baa1055ab560a0c5ede33afa58c0cf0f098a35617cab52554313bf0a141c2" Jan 09 14:36:44 crc kubenswrapper[4919]: I0109 14:36:44.680624 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hnqsm" Jan 09 14:36:44 crc kubenswrapper[4919]: I0109 14:36:44.850775 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3036031-f449-4d1a-b01f-49c322787f0a-utilities\") pod \"a3036031-f449-4d1a-b01f-49c322787f0a\" (UID: \"a3036031-f449-4d1a-b01f-49c322787f0a\") " Jan 09 14:36:44 crc kubenswrapper[4919]: I0109 14:36:44.851350 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kk8qg\" (UniqueName: \"kubernetes.io/projected/a3036031-f449-4d1a-b01f-49c322787f0a-kube-api-access-kk8qg\") pod \"a3036031-f449-4d1a-b01f-49c322787f0a\" (UID: \"a3036031-f449-4d1a-b01f-49c322787f0a\") " Jan 09 14:36:44 crc kubenswrapper[4919]: I0109 14:36:44.851422 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3036031-f449-4d1a-b01f-49c322787f0a-catalog-content\") pod \"a3036031-f449-4d1a-b01f-49c322787f0a\" (UID: \"a3036031-f449-4d1a-b01f-49c322787f0a\") " Jan 09 14:36:44 crc kubenswrapper[4919]: I0109 14:36:44.852062 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3036031-f449-4d1a-b01f-49c322787f0a-utilities" (OuterVolumeSpecName: "utilities") pod "a3036031-f449-4d1a-b01f-49c322787f0a" (UID: "a3036031-f449-4d1a-b01f-49c322787f0a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 14:36:44 crc kubenswrapper[4919]: I0109 14:36:44.856870 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3036031-f449-4d1a-b01f-49c322787f0a-kube-api-access-kk8qg" (OuterVolumeSpecName: "kube-api-access-kk8qg") pod "a3036031-f449-4d1a-b01f-49c322787f0a" (UID: "a3036031-f449-4d1a-b01f-49c322787f0a"). InnerVolumeSpecName "kube-api-access-kk8qg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 14:36:44 crc kubenswrapper[4919]: I0109 14:36:44.865670 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kk8qg\" (UniqueName: \"kubernetes.io/projected/a3036031-f449-4d1a-b01f-49c322787f0a-kube-api-access-kk8qg\") on node \"crc\" DevicePath \"\"" Jan 09 14:36:44 crc kubenswrapper[4919]: I0109 14:36:44.865696 4919 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3036031-f449-4d1a-b01f-49c322787f0a-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 14:36:44 crc kubenswrapper[4919]: I0109 14:36:44.997301 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3036031-f449-4d1a-b01f-49c322787f0a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a3036031-f449-4d1a-b01f-49c322787f0a" (UID: "a3036031-f449-4d1a-b01f-49c322787f0a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 14:36:45 crc kubenswrapper[4919]: I0109 14:36:45.069190 4919 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3036031-f449-4d1a-b01f-49c322787f0a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 14:36:45 crc kubenswrapper[4919]: I0109 14:36:45.635577 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hnqsm" Jan 09 14:36:45 crc kubenswrapper[4919]: I0109 14:36:45.676945 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hnqsm"] Jan 09 14:36:45 crc kubenswrapper[4919]: I0109 14:36:45.686072 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hnqsm"] Jan 09 14:36:46 crc kubenswrapper[4919]: I0109 14:36:46.762347 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3036031-f449-4d1a-b01f-49c322787f0a" path="/var/lib/kubelet/pods/a3036031-f449-4d1a-b01f-49c322787f0a/volumes" Jan 09 14:37:43 crc kubenswrapper[4919]: I0109 14:37:43.163493 4919 generic.go:334] "Generic (PLEG): container finished" podID="0004f8c6-daac-4060-9f51-eadc76d135ec" containerID="fb19b6caebb2deb6f5270a280ed227eac803965e27afcff1e509ea9c4e153b87" exitCode=0 Jan 09 14:37:43 crc kubenswrapper[4919]: I0109 14:37:43.163576 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4cczb/must-gather-q7b49" event={"ID":"0004f8c6-daac-4060-9f51-eadc76d135ec","Type":"ContainerDied","Data":"fb19b6caebb2deb6f5270a280ed227eac803965e27afcff1e509ea9c4e153b87"} Jan 09 14:37:43 crc kubenswrapper[4919]: I0109 14:37:43.164767 4919 scope.go:117] "RemoveContainer" containerID="fb19b6caebb2deb6f5270a280ed227eac803965e27afcff1e509ea9c4e153b87" Jan 09 14:37:43 crc kubenswrapper[4919]: I0109 14:37:43.330544 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-4cczb_must-gather-q7b49_0004f8c6-daac-4060-9f51-eadc76d135ec/gather/0.log" Jan 09 14:37:52 crc kubenswrapper[4919]: I0109 14:37:52.267638 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-4cczb/must-gather-q7b49"] Jan 09 14:37:52 crc kubenswrapper[4919]: I0109 14:37:52.268683 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-4cczb/must-gather-q7b49" podUID="0004f8c6-daac-4060-9f51-eadc76d135ec" containerName="copy" containerID="cri-o://411a1cb5ab5550d42f5f372dcd223cd7a720cdcf51d2b8634aa29bae3b0fe7bc" gracePeriod=2 Jan 09 14:37:52 crc kubenswrapper[4919]: I0109 14:37:52.279641 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-4cczb/must-gather-q7b49"] Jan 09 14:37:52 crc kubenswrapper[4919]: I0109 14:37:52.792780 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-4cczb_must-gather-q7b49_0004f8c6-daac-4060-9f51-eadc76d135ec/copy/0.log" Jan 09 14:37:52 crc kubenswrapper[4919]: I0109 14:37:52.793918 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4cczb/must-gather-q7b49" Jan 09 14:37:52 crc kubenswrapper[4919]: I0109 14:37:52.898173 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zm8c6\" (UniqueName: \"kubernetes.io/projected/0004f8c6-daac-4060-9f51-eadc76d135ec-kube-api-access-zm8c6\") pod \"0004f8c6-daac-4060-9f51-eadc76d135ec\" (UID: \"0004f8c6-daac-4060-9f51-eadc76d135ec\") " Jan 09 14:37:52 crc kubenswrapper[4919]: I0109 14:37:52.898305 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0004f8c6-daac-4060-9f51-eadc76d135ec-must-gather-output\") pod \"0004f8c6-daac-4060-9f51-eadc76d135ec\" (UID: \"0004f8c6-daac-4060-9f51-eadc76d135ec\") " Jan 09 14:37:52 crc kubenswrapper[4919]: I0109 14:37:52.905321 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0004f8c6-daac-4060-9f51-eadc76d135ec-kube-api-access-zm8c6" (OuterVolumeSpecName: "kube-api-access-zm8c6") pod "0004f8c6-daac-4060-9f51-eadc76d135ec" (UID: "0004f8c6-daac-4060-9f51-eadc76d135ec"). InnerVolumeSpecName "kube-api-access-zm8c6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 14:37:53 crc kubenswrapper[4919]: I0109 14:37:53.000643 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zm8c6\" (UniqueName: \"kubernetes.io/projected/0004f8c6-daac-4060-9f51-eadc76d135ec-kube-api-access-zm8c6\") on node \"crc\" DevicePath \"\"" Jan 09 14:37:53 crc kubenswrapper[4919]: I0109 14:37:53.078171 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0004f8c6-daac-4060-9f51-eadc76d135ec-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "0004f8c6-daac-4060-9f51-eadc76d135ec" (UID: "0004f8c6-daac-4060-9f51-eadc76d135ec"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 14:37:53 crc kubenswrapper[4919]: I0109 14:37:53.102725 4919 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0004f8c6-daac-4060-9f51-eadc76d135ec-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 09 14:37:53 crc kubenswrapper[4919]: I0109 14:37:53.293084 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-4cczb_must-gather-q7b49_0004f8c6-daac-4060-9f51-eadc76d135ec/copy/0.log" Jan 09 14:37:53 crc kubenswrapper[4919]: I0109 14:37:53.293839 4919 generic.go:334] "Generic (PLEG): container finished" podID="0004f8c6-daac-4060-9f51-eadc76d135ec" containerID="411a1cb5ab5550d42f5f372dcd223cd7a720cdcf51d2b8634aa29bae3b0fe7bc" exitCode=143 Jan 09 14:37:53 crc kubenswrapper[4919]: I0109 14:37:53.293885 4919 scope.go:117] "RemoveContainer" containerID="411a1cb5ab5550d42f5f372dcd223cd7a720cdcf51d2b8634aa29bae3b0fe7bc" Jan 09 14:37:53 crc kubenswrapper[4919]: I0109 14:37:53.293900 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4cczb/must-gather-q7b49" Jan 09 14:37:53 crc kubenswrapper[4919]: I0109 14:37:53.321474 4919 scope.go:117] "RemoveContainer" containerID="fb19b6caebb2deb6f5270a280ed227eac803965e27afcff1e509ea9c4e153b87" Jan 09 14:37:53 crc kubenswrapper[4919]: I0109 14:37:53.376794 4919 scope.go:117] "RemoveContainer" containerID="411a1cb5ab5550d42f5f372dcd223cd7a720cdcf51d2b8634aa29bae3b0fe7bc" Jan 09 14:37:53 crc kubenswrapper[4919]: E0109 14:37:53.377358 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"411a1cb5ab5550d42f5f372dcd223cd7a720cdcf51d2b8634aa29bae3b0fe7bc\": container with ID starting with 411a1cb5ab5550d42f5f372dcd223cd7a720cdcf51d2b8634aa29bae3b0fe7bc not found: ID does not exist" containerID="411a1cb5ab5550d42f5f372dcd223cd7a720cdcf51d2b8634aa29bae3b0fe7bc" Jan 09 14:37:53 crc kubenswrapper[4919]: I0109 14:37:53.377400 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"411a1cb5ab5550d42f5f372dcd223cd7a720cdcf51d2b8634aa29bae3b0fe7bc"} err="failed to get container status \"411a1cb5ab5550d42f5f372dcd223cd7a720cdcf51d2b8634aa29bae3b0fe7bc\": rpc error: code = NotFound desc = could not find container \"411a1cb5ab5550d42f5f372dcd223cd7a720cdcf51d2b8634aa29bae3b0fe7bc\": container with ID starting with 411a1cb5ab5550d42f5f372dcd223cd7a720cdcf51d2b8634aa29bae3b0fe7bc not found: ID does not exist" Jan 09 14:37:53 crc kubenswrapper[4919]: I0109 14:37:53.377422 4919 scope.go:117] "RemoveContainer" containerID="fb19b6caebb2deb6f5270a280ed227eac803965e27afcff1e509ea9c4e153b87" Jan 09 14:37:53 crc kubenswrapper[4919]: E0109 14:37:53.377753 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb19b6caebb2deb6f5270a280ed227eac803965e27afcff1e509ea9c4e153b87\": container with ID starting with fb19b6caebb2deb6f5270a280ed227eac803965e27afcff1e509ea9c4e153b87 not found: ID does not exist" containerID="fb19b6caebb2deb6f5270a280ed227eac803965e27afcff1e509ea9c4e153b87" Jan 09 14:37:53 crc kubenswrapper[4919]: I0109 14:37:53.377773 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb19b6caebb2deb6f5270a280ed227eac803965e27afcff1e509ea9c4e153b87"} err="failed to get container status \"fb19b6caebb2deb6f5270a280ed227eac803965e27afcff1e509ea9c4e153b87\": rpc error: code = NotFound desc = could not find container \"fb19b6caebb2deb6f5270a280ed227eac803965e27afcff1e509ea9c4e153b87\": container with ID starting with fb19b6caebb2deb6f5270a280ed227eac803965e27afcff1e509ea9c4e153b87 not found: ID does not exist" Jan 09 14:37:54 crc kubenswrapper[4919]: I0109 14:37:54.763172 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0004f8c6-daac-4060-9f51-eadc76d135ec" path="/var/lib/kubelet/pods/0004f8c6-daac-4060-9f51-eadc76d135ec/volumes" Jan 09 14:38:44 crc kubenswrapper[4919]: I0109 14:38:44.407799 4919 scope.go:117] "RemoveContainer" containerID="7c2653584bcb8239e71f15a2bb45daeb399376fec3a44a637f0e8e6a51677c41" Jan 09 14:38:51 crc kubenswrapper[4919]: I0109 14:38:51.247542 4919 patch_prober.go:28] interesting pod/machine-config-daemon-9m5lv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 14:38:51 crc kubenswrapper[4919]: 
I0109 14:38:51.248299 4919 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 14:39:21 crc kubenswrapper[4919]: I0109 14:39:21.247595 4919 patch_prober.go:28] interesting pod/machine-config-daemon-9m5lv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 14:39:21 crc kubenswrapper[4919]: I0109 14:39:21.248128 4919 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 14:39:51 crc kubenswrapper[4919]: I0109 14:39:51.246617 4919 patch_prober.go:28] interesting pod/machine-config-daemon-9m5lv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 14:39:51 crc kubenswrapper[4919]: I0109 14:39:51.248167 4919 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 14:39:51 crc kubenswrapper[4919]: I0109 14:39:51.248233 4919 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" Jan 09 14:39:51 crc kubenswrapper[4919]: I0109 14:39:51.248935 4919 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"456c10831c34b2c9f72f13b4cefc21b45edbed334ca26a0379edf4ef17a9749a"} pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 09 14:39:51 crc kubenswrapper[4919]: I0109 14:39:51.248990 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" containerID="cri-o://456c10831c34b2c9f72f13b4cefc21b45edbed334ca26a0379edf4ef17a9749a" gracePeriod=600 Jan 09 14:39:51 crc kubenswrapper[4919]: I0109 14:39:51.379464 4919 generic.go:334] "Generic (PLEG): container finished" podID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerID="456c10831c34b2c9f72f13b4cefc21b45edbed334ca26a0379edf4ef17a9749a" exitCode=0 Jan 09 14:39:51 crc kubenswrapper[4919]: I0109 14:39:51.379546 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" event={"ID":"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c","Type":"ContainerDied","Data":"456c10831c34b2c9f72f13b4cefc21b45edbed334ca26a0379edf4ef17a9749a"} Jan 09 14:39:51 crc kubenswrapper[4919]: I0109 14:39:51.379724 4919 scope.go:117] 
"RemoveContainer" containerID="b0270c7592076cfaccb37df0ef0dacbcd42484b5f4acbc6f9946e5c994321a3e" Jan 09 14:39:52 crc kubenswrapper[4919]: I0109 14:39:52.390256 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" event={"ID":"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c","Type":"ContainerStarted","Data":"bddb05f06c401240164224cc53a985e43db02341d9515ee6222c775016a87e45"} Jan 09 14:40:36 crc kubenswrapper[4919]: I0109 14:40:36.841251 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jp68p"] Jan 09 14:40:36 crc kubenswrapper[4919]: E0109 14:40:36.845611 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0004f8c6-daac-4060-9f51-eadc76d135ec" containerName="gather" Jan 09 14:40:36 crc kubenswrapper[4919]: I0109 14:40:36.845763 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="0004f8c6-daac-4060-9f51-eadc76d135ec" containerName="gather" Jan 09 14:40:36 crc kubenswrapper[4919]: E0109 14:40:36.845880 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0004f8c6-daac-4060-9f51-eadc76d135ec" containerName="copy" Jan 09 14:40:36 crc kubenswrapper[4919]: I0109 14:40:36.845957 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="0004f8c6-daac-4060-9f51-eadc76d135ec" containerName="copy" Jan 09 14:40:36 crc kubenswrapper[4919]: E0109 14:40:36.846211 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3036031-f449-4d1a-b01f-49c322787f0a" containerName="registry-server" Jan 09 14:40:36 crc kubenswrapper[4919]: I0109 14:40:36.846320 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3036031-f449-4d1a-b01f-49c322787f0a" containerName="registry-server" Jan 09 14:40:36 crc kubenswrapper[4919]: E0109 14:40:36.846422 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3036031-f449-4d1a-b01f-49c322787f0a" containerName="extract-utilities" Jan 09 14:40:36 crc kubenswrapper[4919]: I0109 14:40:36.846502 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3036031-f449-4d1a-b01f-49c322787f0a" containerName="extract-utilities" Jan 09 14:40:36 crc kubenswrapper[4919]: E0109 14:40:36.846598 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3036031-f449-4d1a-b01f-49c322787f0a" containerName="extract-content" Jan 09 14:40:36 crc kubenswrapper[4919]: I0109 14:40:36.846680 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3036031-f449-4d1a-b01f-49c322787f0a" containerName="extract-content" Jan 09 14:40:36 crc kubenswrapper[4919]: I0109 14:40:36.847076 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="0004f8c6-daac-4060-9f51-eadc76d135ec" containerName="gather" Jan 09 14:40:36 crc kubenswrapper[4919]: I0109 14:40:36.847205 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3036031-f449-4d1a-b01f-49c322787f0a" containerName="registry-server" Jan 09 14:40:36 crc kubenswrapper[4919]: I0109 14:40:36.847320 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="0004f8c6-daac-4060-9f51-eadc76d135ec" containerName="copy" Jan 09 14:40:36 crc kubenswrapper[4919]: I0109 14:40:36.849263 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jp68p" Jan 09 14:40:36 crc kubenswrapper[4919]: I0109 14:40:36.851700 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jp68p"] Jan 09 14:40:37 crc kubenswrapper[4919]: I0109 14:40:37.011313 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jjrm\" (UniqueName: \"kubernetes.io/projected/10963f75-73e7-4657-9d56-c0330dcaeb81-kube-api-access-4jjrm\") pod \"redhat-marketplace-jp68p\" (UID: \"10963f75-73e7-4657-9d56-c0330dcaeb81\") " pod="openshift-marketplace/redhat-marketplace-jp68p" Jan 09 14:40:37 crc kubenswrapper[4919]: I0109 14:40:37.011447 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10963f75-73e7-4657-9d56-c0330dcaeb81-catalog-content\") pod \"redhat-marketplace-jp68p\" (UID: \"10963f75-73e7-4657-9d56-c0330dcaeb81\") " pod="openshift-marketplace/redhat-marketplace-jp68p" Jan 09 14:40:37 crc kubenswrapper[4919]: I0109 14:40:37.011761 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10963f75-73e7-4657-9d56-c0330dcaeb81-utilities\") pod \"redhat-marketplace-jp68p\" (UID: \"10963f75-73e7-4657-9d56-c0330dcaeb81\") " pod="openshift-marketplace/redhat-marketplace-jp68p" Jan 09 14:40:37 crc kubenswrapper[4919]: I0109 14:40:37.113540 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10963f75-73e7-4657-9d56-c0330dcaeb81-utilities\") pod \"redhat-marketplace-jp68p\" (UID: \"10963f75-73e7-4657-9d56-c0330dcaeb81\") " pod="openshift-marketplace/redhat-marketplace-jp68p" Jan 09 14:40:37 crc kubenswrapper[4919]: I0109 14:40:37.113590 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jjrm\" (UniqueName: \"kubernetes.io/projected/10963f75-73e7-4657-9d56-c0330dcaeb81-kube-api-access-4jjrm\") pod \"redhat-marketplace-jp68p\" (UID: \"10963f75-73e7-4657-9d56-c0330dcaeb81\") " pod="openshift-marketplace/redhat-marketplace-jp68p" Jan 09 14:40:37 crc kubenswrapper[4919]: I0109 14:40:37.113647 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10963f75-73e7-4657-9d56-c0330dcaeb81-catalog-content\") pod \"redhat-marketplace-jp68p\" (UID: \"10963f75-73e7-4657-9d56-c0330dcaeb81\") " pod="openshift-marketplace/redhat-marketplace-jp68p" Jan 09 14:40:37 crc kubenswrapper[4919]: I0109 14:40:37.114226 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10963f75-73e7-4657-9d56-c0330dcaeb81-utilities\") pod \"redhat-marketplace-jp68p\" (UID: \"10963f75-73e7-4657-9d56-c0330dcaeb81\") " pod="openshift-marketplace/redhat-marketplace-jp68p" Jan 09 14:40:37 crc kubenswrapper[4919]: I0109 14:40:37.114287 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10963f75-73e7-4657-9d56-c0330dcaeb81-catalog-content\") pod \"redhat-marketplace-jp68p\" (UID: \"10963f75-73e7-4657-9d56-c0330dcaeb81\") " pod="openshift-marketplace/redhat-marketplace-jp68p" Jan 09 14:40:37 crc kubenswrapper[4919]: I0109 14:40:37.135693 4919 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-4jjrm\" (UniqueName: \"kubernetes.io/projected/10963f75-73e7-4657-9d56-c0330dcaeb81-kube-api-access-4jjrm\") pod \"redhat-marketplace-jp68p\" (UID: \"10963f75-73e7-4657-9d56-c0330dcaeb81\") " pod="openshift-marketplace/redhat-marketplace-jp68p" Jan 09 14:40:37 crc kubenswrapper[4919]: I0109 14:40:37.170011 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jp68p" Jan 09 14:40:37 crc kubenswrapper[4919]: I0109 14:40:37.685910 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jp68p"] Jan 09 14:40:37 crc kubenswrapper[4919]: I0109 14:40:37.904669 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jp68p" event={"ID":"10963f75-73e7-4657-9d56-c0330dcaeb81","Type":"ContainerStarted","Data":"db17494b71c8c699c7f8859975d41e3eb3ac954e96a41bc2036316c3a4cb89e5"} Jan 09 14:40:37 crc kubenswrapper[4919]: I0109 14:40:37.904944 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jp68p" event={"ID":"10963f75-73e7-4657-9d56-c0330dcaeb81","Type":"ContainerStarted","Data":"48f437d106f9831f503c0c6461c5b8f4b94707bf2a6c41a26477523bcde2e951"} Jan 09 14:40:38 crc kubenswrapper[4919]: I0109 14:40:38.935084 4919 generic.go:334] "Generic (PLEG): container finished" podID="10963f75-73e7-4657-9d56-c0330dcaeb81" containerID="db17494b71c8c699c7f8859975d41e3eb3ac954e96a41bc2036316c3a4cb89e5" exitCode=0 Jan 09 14:40:38 crc kubenswrapper[4919]: I0109 14:40:38.935367 4919 generic.go:334] "Generic (PLEG): container finished" podID="10963f75-73e7-4657-9d56-c0330dcaeb81" containerID="13a44e230fa354262d9a6074085965c5803a078a4232e8ade6ccf35182d86f20" exitCode=0 Jan 09 14:40:38 crc kubenswrapper[4919]: I0109 14:40:38.935391 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jp68p" event={"ID":"10963f75-73e7-4657-9d56-c0330dcaeb81","Type":"ContainerDied","Data":"db17494b71c8c699c7f8859975d41e3eb3ac954e96a41bc2036316c3a4cb89e5"} Jan 09 14:40:38 crc kubenswrapper[4919]: I0109 14:40:38.935420 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jp68p" event={"ID":"10963f75-73e7-4657-9d56-c0330dcaeb81","Type":"ContainerDied","Data":"13a44e230fa354262d9a6074085965c5803a078a4232e8ade6ccf35182d86f20"} Jan 09 14:40:39 crc kubenswrapper[4919]: I0109 14:40:39.235282 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2fsqc"] Jan 09 14:40:39 crc kubenswrapper[4919]: I0109 14:40:39.237895 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2fsqc" Jan 09 14:40:39 crc kubenswrapper[4919]: I0109 14:40:39.243615 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2fsqc"] Jan 09 14:40:39 crc kubenswrapper[4919]: I0109 14:40:39.375474 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c170e063-659f-4b24-88a1-46b74b52e3d1-utilities\") pod \"community-operators-2fsqc\" (UID: \"c170e063-659f-4b24-88a1-46b74b52e3d1\") " pod="openshift-marketplace/community-operators-2fsqc" Jan 09 14:40:39 crc kubenswrapper[4919]: I0109 14:40:39.376341 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d82qn\" (UniqueName: \"kubernetes.io/projected/c170e063-659f-4b24-88a1-46b74b52e3d1-kube-api-access-d82qn\") pod \"community-operators-2fsqc\" (UID: \"c170e063-659f-4b24-88a1-46b74b52e3d1\") " pod="openshift-marketplace/community-operators-2fsqc" Jan 09 14:40:39 crc kubenswrapper[4919]: I0109 14:40:39.376529 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c170e063-659f-4b24-88a1-46b74b52e3d1-catalog-content\") pod \"community-operators-2fsqc\" (UID: \"c170e063-659f-4b24-88a1-46b74b52e3d1\") " pod="openshift-marketplace/community-operators-2fsqc" Jan 09 14:40:39 crc kubenswrapper[4919]: I0109 14:40:39.479033 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d82qn\" (UniqueName: \"kubernetes.io/projected/c170e063-659f-4b24-88a1-46b74b52e3d1-kube-api-access-d82qn\") pod \"community-operators-2fsqc\" (UID: \"c170e063-659f-4b24-88a1-46b74b52e3d1\") " pod="openshift-marketplace/community-operators-2fsqc" Jan 09 14:40:39 crc kubenswrapper[4919]: I0109 14:40:39.479524 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c170e063-659f-4b24-88a1-46b74b52e3d1-catalog-content\") pod \"community-operators-2fsqc\" (UID: \"c170e063-659f-4b24-88a1-46b74b52e3d1\") " pod="openshift-marketplace/community-operators-2fsqc" Jan 09 14:40:39 crc kubenswrapper[4919]: I0109 14:40:39.479758 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c170e063-659f-4b24-88a1-46b74b52e3d1-utilities\") pod \"community-operators-2fsqc\" (UID: \"c170e063-659f-4b24-88a1-46b74b52e3d1\") " pod="openshift-marketplace/community-operators-2fsqc" Jan 09 14:40:39 crc kubenswrapper[4919]: I0109 14:40:39.480088 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c170e063-659f-4b24-88a1-46b74b52e3d1-catalog-content\") pod \"community-operators-2fsqc\" (UID: \"c170e063-659f-4b24-88a1-46b74b52e3d1\") " pod="openshift-marketplace/community-operators-2fsqc" Jan 09 14:40:39 crc kubenswrapper[4919]: I0109 14:40:39.480596 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c170e063-659f-4b24-88a1-46b74b52e3d1-utilities\") pod \"community-operators-2fsqc\" (UID: \"c170e063-659f-4b24-88a1-46b74b52e3d1\") " pod="openshift-marketplace/community-operators-2fsqc" Jan 09 14:40:39 crc kubenswrapper[4919]: I0109 14:40:39.502177 4919 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-d82qn\" (UniqueName: \"kubernetes.io/projected/c170e063-659f-4b24-88a1-46b74b52e3d1-kube-api-access-d82qn\") pod \"community-operators-2fsqc\" (UID: \"c170e063-659f-4b24-88a1-46b74b52e3d1\") " pod="openshift-marketplace/community-operators-2fsqc" Jan 09 14:40:39 crc kubenswrapper[4919]: I0109 14:40:39.571589 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2fsqc" Jan 09 14:40:39 crc kubenswrapper[4919]: I0109 14:40:39.949455 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jp68p" event={"ID":"10963f75-73e7-4657-9d56-c0330dcaeb81","Type":"ContainerStarted","Data":"d4554e423f11a6b4ab9ce65ac1e49ae741046b33e4ec8ed7045b8244c18b6339"} Jan 09 14:40:39 crc kubenswrapper[4919]: I0109 14:40:39.972754 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jp68p" podStartSLOduration=2.508629291 podStartE2EDuration="3.972731326s" podCreationTimestamp="2026-01-09 14:40:36 +0000 UTC" firstStartedPulling="2026-01-09 14:40:37.906621714 +0000 UTC m=+4217.454461164" lastFinishedPulling="2026-01-09 14:40:39.370723749 +0000 UTC m=+4218.918563199" observedRunningTime="2026-01-09 14:40:39.963734192 +0000 UTC m=+4219.511573652" watchObservedRunningTime="2026-01-09 14:40:39.972731326 +0000 UTC m=+4219.520570776" Jan 09 14:40:40 crc kubenswrapper[4919]: I0109 14:40:40.094799 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2fsqc"] Jan 09 14:40:40 crc kubenswrapper[4919]: I0109 14:40:40.960197 4919 generic.go:334] "Generic (PLEG): container finished" podID="c170e063-659f-4b24-88a1-46b74b52e3d1" containerID="ab8fe6f6c00ea4c9d1029cf8d89ad76d437501020a24dfb00672d896d5347274" exitCode=0 Jan 09 14:40:40 crc kubenswrapper[4919]: I0109 14:40:40.960296 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2fsqc" event={"ID":"c170e063-659f-4b24-88a1-46b74b52e3d1","Type":"ContainerDied","Data":"ab8fe6f6c00ea4c9d1029cf8d89ad76d437501020a24dfb00672d896d5347274"} Jan 09 14:40:40 crc kubenswrapper[4919]: I0109 14:40:40.960581 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2fsqc" event={"ID":"c170e063-659f-4b24-88a1-46b74b52e3d1","Type":"ContainerStarted","Data":"6701fe8689a0e879f91a469d7dee3d118b6d841fed66dd5e54c280db46d31cd4"} Jan 09 14:40:41 crc kubenswrapper[4919]: I0109 14:40:41.971825 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2fsqc" event={"ID":"c170e063-659f-4b24-88a1-46b74b52e3d1","Type":"ContainerStarted","Data":"59910f4c4dc495c631c11fc679a8d6be8e9c65ecdabc1b72faabcd79e2b2156c"} Jan 09 14:40:42 crc kubenswrapper[4919]: I0109 14:40:42.987683 4919 generic.go:334] "Generic (PLEG): container finished" podID="c170e063-659f-4b24-88a1-46b74b52e3d1" containerID="59910f4c4dc495c631c11fc679a8d6be8e9c65ecdabc1b72faabcd79e2b2156c" exitCode=0 Jan 09 14:40:42 crc kubenswrapper[4919]: I0109 14:40:42.987746 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2fsqc" event={"ID":"c170e063-659f-4b24-88a1-46b74b52e3d1","Type":"ContainerDied","Data":"59910f4c4dc495c631c11fc679a8d6be8e9c65ecdabc1b72faabcd79e2b2156c"} Jan 09 14:40:44 crc kubenswrapper[4919]: I0109 14:40:43.999459 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-2fsqc" event={"ID":"c170e063-659f-4b24-88a1-46b74b52e3d1","Type":"ContainerStarted","Data":"9103b5a8fbb598df21655746e8ef2ab6542beb710836cbdbe7af4f8eb28168e2"} Jan 09 14:40:44 crc kubenswrapper[4919]: I0109 14:40:44.026417 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2fsqc" podStartSLOduration=2.593236922 podStartE2EDuration="5.026393667s" podCreationTimestamp="2026-01-09 14:40:39 +0000 UTC" firstStartedPulling="2026-01-09 14:40:40.964008628 +0000 UTC m=+4220.511848068" lastFinishedPulling="2026-01-09 14:40:43.397165373 +0000 UTC m=+4222.945004813" observedRunningTime="2026-01-09 14:40:44.017333002 +0000 UTC m=+4223.565172462" watchObservedRunningTime="2026-01-09 14:40:44.026393667 +0000 UTC m=+4223.574233137" Jan 09 14:40:47 crc kubenswrapper[4919]: I0109 14:40:47.170745 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jp68p" Jan 09 14:40:47 crc kubenswrapper[4919]: I0109 14:40:47.171182 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jp68p" Jan 09 14:40:47 crc kubenswrapper[4919]: I0109 14:40:47.213947 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jp68p" Jan 09 14:40:48 crc kubenswrapper[4919]: I0109 14:40:48.074535 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jp68p" Jan 09 14:40:48 crc kubenswrapper[4919]: I0109 14:40:48.117475 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jp68p"] Jan 09 14:40:49 crc kubenswrapper[4919]: I0109 14:40:49.571906 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2fsqc" Jan 09 14:40:49 crc kubenswrapper[4919]: I0109 14:40:49.572362 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2fsqc" Jan 09 14:40:49 crc kubenswrapper[4919]: I0109 14:40:49.624232 4919 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2fsqc" Jan 09 14:40:50 crc kubenswrapper[4919]: I0109 14:40:50.054307 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jp68p" podUID="10963f75-73e7-4657-9d56-c0330dcaeb81" containerName="registry-server" containerID="cri-o://d4554e423f11a6b4ab9ce65ac1e49ae741046b33e4ec8ed7045b8244c18b6339" gracePeriod=2 Jan 09 14:40:50 crc kubenswrapper[4919]: I0109 14:40:50.107848 4919 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2fsqc" Jan 09 14:40:50 crc kubenswrapper[4919]: I0109 14:40:50.998048 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jp68p" Jan 09 14:40:51 crc kubenswrapper[4919]: I0109 14:40:51.041177 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2fsqc"] Jan 09 14:40:51 crc kubenswrapper[4919]: I0109 14:40:51.067316 4919 generic.go:334] "Generic (PLEG): container finished" podID="10963f75-73e7-4657-9d56-c0330dcaeb81" containerID="d4554e423f11a6b4ab9ce65ac1e49ae741046b33e4ec8ed7045b8244c18b6339" exitCode=0 Jan 09 14:40:51 crc kubenswrapper[4919]: I0109 14:40:51.067511 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jp68p" event={"ID":"10963f75-73e7-4657-9d56-c0330dcaeb81","Type":"ContainerDied","Data":"d4554e423f11a6b4ab9ce65ac1e49ae741046b33e4ec8ed7045b8244c18b6339"} Jan 09 14:40:51 crc kubenswrapper[4919]: I0109 14:40:51.067686 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jp68p" event={"ID":"10963f75-73e7-4657-9d56-c0330dcaeb81","Type":"ContainerDied","Data":"48f437d106f9831f503c0c6461c5b8f4b94707bf2a6c41a26477523bcde2e951"} Jan 09 14:40:51 crc kubenswrapper[4919]: I0109 14:40:51.067703 4919 scope.go:117] "RemoveContainer" containerID="d4554e423f11a6b4ab9ce65ac1e49ae741046b33e4ec8ed7045b8244c18b6339" Jan 09 14:40:51 crc kubenswrapper[4919]: I0109 14:40:51.067574 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jp68p" Jan 09 14:40:51 crc kubenswrapper[4919]: I0109 14:40:51.091360 4919 scope.go:117] "RemoveContainer" containerID="13a44e230fa354262d9a6074085965c5803a078a4232e8ade6ccf35182d86f20" Jan 09 14:40:51 crc kubenswrapper[4919]: I0109 14:40:51.098777 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10963f75-73e7-4657-9d56-c0330dcaeb81-utilities\") pod \"10963f75-73e7-4657-9d56-c0330dcaeb81\" (UID: \"10963f75-73e7-4657-9d56-c0330dcaeb81\") " Jan 09 14:40:51 crc kubenswrapper[4919]: I0109 14:40:51.098985 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10963f75-73e7-4657-9d56-c0330dcaeb81-catalog-content\") pod \"10963f75-73e7-4657-9d56-c0330dcaeb81\" (UID: \"10963f75-73e7-4657-9d56-c0330dcaeb81\") " Jan 09 14:40:51 crc kubenswrapper[4919]: I0109 14:40:51.099167 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jjrm\" (UniqueName: \"kubernetes.io/projected/10963f75-73e7-4657-9d56-c0330dcaeb81-kube-api-access-4jjrm\") pod \"10963f75-73e7-4657-9d56-c0330dcaeb81\" (UID: \"10963f75-73e7-4657-9d56-c0330dcaeb81\") " Jan 09 14:40:51 crc kubenswrapper[4919]: I0109 14:40:51.106974 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10963f75-73e7-4657-9d56-c0330dcaeb81-utilities" (OuterVolumeSpecName: "utilities") pod "10963f75-73e7-4657-9d56-c0330dcaeb81" (UID: "10963f75-73e7-4657-9d56-c0330dcaeb81"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 14:40:51 crc kubenswrapper[4919]: I0109 14:40:51.122035 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10963f75-73e7-4657-9d56-c0330dcaeb81-kube-api-access-4jjrm" (OuterVolumeSpecName: "kube-api-access-4jjrm") pod "10963f75-73e7-4657-9d56-c0330dcaeb81" (UID: "10963f75-73e7-4657-9d56-c0330dcaeb81"). InnerVolumeSpecName "kube-api-access-4jjrm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 14:40:51 crc kubenswrapper[4919]: I0109 14:40:51.124024 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10963f75-73e7-4657-9d56-c0330dcaeb81-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "10963f75-73e7-4657-9d56-c0330dcaeb81" (UID: "10963f75-73e7-4657-9d56-c0330dcaeb81"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 14:40:51 crc kubenswrapper[4919]: I0109 14:40:51.139978 4919 scope.go:117] "RemoveContainer" containerID="db17494b71c8c699c7f8859975d41e3eb3ac954e96a41bc2036316c3a4cb89e5" Jan 09 14:40:51 crc kubenswrapper[4919]: I0109 14:40:51.202059 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jjrm\" (UniqueName: \"kubernetes.io/projected/10963f75-73e7-4657-9d56-c0330dcaeb81-kube-api-access-4jjrm\") on node \"crc\" DevicePath \"\"" Jan 09 14:40:51 crc kubenswrapper[4919]: I0109 14:40:51.202091 4919 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10963f75-73e7-4657-9d56-c0330dcaeb81-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 14:40:51 crc kubenswrapper[4919]: I0109 14:40:51.202100 4919 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10963f75-73e7-4657-9d56-c0330dcaeb81-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 14:40:51 crc kubenswrapper[4919]: I0109 14:40:51.204061 4919 scope.go:117] "RemoveContainer" containerID="d4554e423f11a6b4ab9ce65ac1e49ae741046b33e4ec8ed7045b8244c18b6339" Jan 09 14:40:51 crc kubenswrapper[4919]: E0109 14:40:51.204465 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4554e423f11a6b4ab9ce65ac1e49ae741046b33e4ec8ed7045b8244c18b6339\": container with ID starting with d4554e423f11a6b4ab9ce65ac1e49ae741046b33e4ec8ed7045b8244c18b6339 not found: ID does not exist" containerID="d4554e423f11a6b4ab9ce65ac1e49ae741046b33e4ec8ed7045b8244c18b6339" Jan 09 14:40:51 crc kubenswrapper[4919]: I0109 14:40:51.204497 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4554e423f11a6b4ab9ce65ac1e49ae741046b33e4ec8ed7045b8244c18b6339"} err="failed to get container status \"d4554e423f11a6b4ab9ce65ac1e49ae741046b33e4ec8ed7045b8244c18b6339\": rpc error: code = NotFound desc = could not find container \"d4554e423f11a6b4ab9ce65ac1e49ae741046b33e4ec8ed7045b8244c18b6339\": container with ID starting with d4554e423f11a6b4ab9ce65ac1e49ae741046b33e4ec8ed7045b8244c18b6339 not found: ID does not exist" Jan 09 14:40:51 crc kubenswrapper[4919]: I0109 14:40:51.204524 4919 scope.go:117] "RemoveContainer" containerID="13a44e230fa354262d9a6074085965c5803a078a4232e8ade6ccf35182d86f20" Jan 09 14:40:51 crc kubenswrapper[4919]: E0109 14:40:51.204809 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"13a44e230fa354262d9a6074085965c5803a078a4232e8ade6ccf35182d86f20\": container with ID starting with 13a44e230fa354262d9a6074085965c5803a078a4232e8ade6ccf35182d86f20 not found: ID does not exist" containerID="13a44e230fa354262d9a6074085965c5803a078a4232e8ade6ccf35182d86f20" Jan 09 14:40:51 crc kubenswrapper[4919]: I0109 14:40:51.204833 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13a44e230fa354262d9a6074085965c5803a078a4232e8ade6ccf35182d86f20"} err="failed to get container status \"13a44e230fa354262d9a6074085965c5803a078a4232e8ade6ccf35182d86f20\": rpc error: code = NotFound desc = could not find container \"13a44e230fa354262d9a6074085965c5803a078a4232e8ade6ccf35182d86f20\": container with ID starting with 13a44e230fa354262d9a6074085965c5803a078a4232e8ade6ccf35182d86f20 not found: ID does not exist" Jan 09 14:40:51 crc kubenswrapper[4919]: I0109 14:40:51.204855 4919 scope.go:117] "RemoveContainer" containerID="db17494b71c8c699c7f8859975d41e3eb3ac954e96a41bc2036316c3a4cb89e5" Jan 09 14:40:51 crc kubenswrapper[4919]: E0109 14:40:51.205135 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db17494b71c8c699c7f8859975d41e3eb3ac954e96a41bc2036316c3a4cb89e5\": container with ID starting with db17494b71c8c699c7f8859975d41e3eb3ac954e96a41bc2036316c3a4cb89e5 not found: ID does not exist" containerID="db17494b71c8c699c7f8859975d41e3eb3ac954e96a41bc2036316c3a4cb89e5" Jan 09 14:40:51 crc kubenswrapper[4919]: I0109 14:40:51.205170 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db17494b71c8c699c7f8859975d41e3eb3ac954e96a41bc2036316c3a4cb89e5"} err="failed to get container status \"db17494b71c8c699c7f8859975d41e3eb3ac954e96a41bc2036316c3a4cb89e5\": rpc error: code = NotFound desc = could not find container \"db17494b71c8c699c7f8859975d41e3eb3ac954e96a41bc2036316c3a4cb89e5\": container with ID starting with db17494b71c8c699c7f8859975d41e3eb3ac954e96a41bc2036316c3a4cb89e5 not found: ID does not exist" Jan 09 14:40:51 crc kubenswrapper[4919]: I0109 14:40:51.420525 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jp68p"] Jan 09 14:40:51 crc kubenswrapper[4919]: I0109 14:40:51.428361 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jp68p"] Jan 09 14:40:52 crc kubenswrapper[4919]: I0109 14:40:52.078224 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2fsqc" podUID="c170e063-659f-4b24-88a1-46b74b52e3d1" containerName="registry-server" containerID="cri-o://9103b5a8fbb598df21655746e8ef2ab6542beb710836cbdbe7af4f8eb28168e2" gracePeriod=2 Jan 09 14:40:52 crc kubenswrapper[4919]: I0109 14:40:52.764128 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10963f75-73e7-4657-9d56-c0330dcaeb81" path="/var/lib/kubelet/pods/10963f75-73e7-4657-9d56-c0330dcaeb81/volumes" Jan 09 14:40:53 crc kubenswrapper[4919]: I0109 14:40:53.022120 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2fsqc" Jan 09 14:40:53 crc kubenswrapper[4919]: I0109 14:40:53.094087 4919 generic.go:334] "Generic (PLEG): container finished" podID="c170e063-659f-4b24-88a1-46b74b52e3d1" containerID="9103b5a8fbb598df21655746e8ef2ab6542beb710836cbdbe7af4f8eb28168e2" exitCode=0 Jan 09 14:40:53 crc kubenswrapper[4919]: I0109 14:40:53.094127 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2fsqc" event={"ID":"c170e063-659f-4b24-88a1-46b74b52e3d1","Type":"ContainerDied","Data":"9103b5a8fbb598df21655746e8ef2ab6542beb710836cbdbe7af4f8eb28168e2"} Jan 09 14:40:53 crc kubenswrapper[4919]: I0109 14:40:53.094153 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2fsqc" event={"ID":"c170e063-659f-4b24-88a1-46b74b52e3d1","Type":"ContainerDied","Data":"6701fe8689a0e879f91a469d7dee3d118b6d841fed66dd5e54c280db46d31cd4"} Jan 09 14:40:53 crc kubenswrapper[4919]: I0109 14:40:53.094169 4919 scope.go:117] "RemoveContainer" containerID="9103b5a8fbb598df21655746e8ef2ab6542beb710836cbdbe7af4f8eb28168e2" Jan 09 14:40:53 crc kubenswrapper[4919]: I0109 14:40:53.094303 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2fsqc" Jan 09 14:40:53 crc kubenswrapper[4919]: I0109 14:40:53.114280 4919 scope.go:117] "RemoveContainer" containerID="59910f4c4dc495c631c11fc679a8d6be8e9c65ecdabc1b72faabcd79e2b2156c" Jan 09 14:40:53 crc kubenswrapper[4919]: I0109 14:40:53.137237 4919 scope.go:117] "RemoveContainer" containerID="ab8fe6f6c00ea4c9d1029cf8d89ad76d437501020a24dfb00672d896d5347274" Jan 09 14:40:53 crc kubenswrapper[4919]: I0109 14:40:53.141379 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c170e063-659f-4b24-88a1-46b74b52e3d1-catalog-content\") pod \"c170e063-659f-4b24-88a1-46b74b52e3d1\" (UID: \"c170e063-659f-4b24-88a1-46b74b52e3d1\") " Jan 09 14:40:53 crc kubenswrapper[4919]: I0109 14:40:53.141480 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c170e063-659f-4b24-88a1-46b74b52e3d1-utilities\") pod \"c170e063-659f-4b24-88a1-46b74b52e3d1\" (UID: \"c170e063-659f-4b24-88a1-46b74b52e3d1\") " Jan 09 14:40:53 crc kubenswrapper[4919]: I0109 14:40:53.141620 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d82qn\" (UniqueName: \"kubernetes.io/projected/c170e063-659f-4b24-88a1-46b74b52e3d1-kube-api-access-d82qn\") pod \"c170e063-659f-4b24-88a1-46b74b52e3d1\" (UID: \"c170e063-659f-4b24-88a1-46b74b52e3d1\") " Jan 09 14:40:53 crc kubenswrapper[4919]: I0109 14:40:53.142679 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c170e063-659f-4b24-88a1-46b74b52e3d1-utilities" (OuterVolumeSpecName: "utilities") pod "c170e063-659f-4b24-88a1-46b74b52e3d1" (UID: "c170e063-659f-4b24-88a1-46b74b52e3d1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 14:40:53 crc kubenswrapper[4919]: I0109 14:40:53.146504 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c170e063-659f-4b24-88a1-46b74b52e3d1-kube-api-access-d82qn" (OuterVolumeSpecName: "kube-api-access-d82qn") pod "c170e063-659f-4b24-88a1-46b74b52e3d1" (UID: "c170e063-659f-4b24-88a1-46b74b52e3d1"). InnerVolumeSpecName "kube-api-access-d82qn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 14:40:53 crc kubenswrapper[4919]: I0109 14:40:53.194971 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c170e063-659f-4b24-88a1-46b74b52e3d1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c170e063-659f-4b24-88a1-46b74b52e3d1" (UID: "c170e063-659f-4b24-88a1-46b74b52e3d1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 14:40:53 crc kubenswrapper[4919]: I0109 14:40:53.217631 4919 scope.go:117] "RemoveContainer" containerID="9103b5a8fbb598df21655746e8ef2ab6542beb710836cbdbe7af4f8eb28168e2" Jan 09 14:40:53 crc kubenswrapper[4919]: E0109 14:40:53.219669 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9103b5a8fbb598df21655746e8ef2ab6542beb710836cbdbe7af4f8eb28168e2\": container with ID starting with 9103b5a8fbb598df21655746e8ef2ab6542beb710836cbdbe7af4f8eb28168e2 not found: ID does not exist" containerID="9103b5a8fbb598df21655746e8ef2ab6542beb710836cbdbe7af4f8eb28168e2" Jan 09 14:40:53 crc kubenswrapper[4919]: I0109 14:40:53.219729 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9103b5a8fbb598df21655746e8ef2ab6542beb710836cbdbe7af4f8eb28168e2"} err="failed to get container status \"9103b5a8fbb598df21655746e8ef2ab6542beb710836cbdbe7af4f8eb28168e2\": rpc error: code = NotFound desc = could not find container \"9103b5a8fbb598df21655746e8ef2ab6542beb710836cbdbe7af4f8eb28168e2\": container with ID starting with 9103b5a8fbb598df21655746e8ef2ab6542beb710836cbdbe7af4f8eb28168e2 not found: ID does not exist" Jan 09 14:40:53 crc kubenswrapper[4919]: I0109 14:40:53.219762 4919 scope.go:117] "RemoveContainer" containerID="59910f4c4dc495c631c11fc679a8d6be8e9c65ecdabc1b72faabcd79e2b2156c" Jan 09 14:40:53 crc kubenswrapper[4919]: E0109 14:40:53.220308 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59910f4c4dc495c631c11fc679a8d6be8e9c65ecdabc1b72faabcd79e2b2156c\": container with ID starting with 59910f4c4dc495c631c11fc679a8d6be8e9c65ecdabc1b72faabcd79e2b2156c not found: ID does not exist" containerID="59910f4c4dc495c631c11fc679a8d6be8e9c65ecdabc1b72faabcd79e2b2156c" Jan 09 14:40:53 crc kubenswrapper[4919]: I0109 14:40:53.220345 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59910f4c4dc495c631c11fc679a8d6be8e9c65ecdabc1b72faabcd79e2b2156c"} err="failed to get container status \"59910f4c4dc495c631c11fc679a8d6be8e9c65ecdabc1b72faabcd79e2b2156c\": rpc error: code = NotFound desc = could not find container \"59910f4c4dc495c631c11fc679a8d6be8e9c65ecdabc1b72faabcd79e2b2156c\": container with ID starting with 59910f4c4dc495c631c11fc679a8d6be8e9c65ecdabc1b72faabcd79e2b2156c not found: ID does not exist" Jan 09 14:40:53 crc kubenswrapper[4919]: I0109 14:40:53.220365 4919 scope.go:117] "RemoveContainer" 
containerID="ab8fe6f6c00ea4c9d1029cf8d89ad76d437501020a24dfb00672d896d5347274" Jan 09 14:40:53 crc kubenswrapper[4919]: E0109 14:40:53.222546 4919 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab8fe6f6c00ea4c9d1029cf8d89ad76d437501020a24dfb00672d896d5347274\": container with ID starting with ab8fe6f6c00ea4c9d1029cf8d89ad76d437501020a24dfb00672d896d5347274 not found: ID does not exist" containerID="ab8fe6f6c00ea4c9d1029cf8d89ad76d437501020a24dfb00672d896d5347274" Jan 09 14:40:53 crc kubenswrapper[4919]: I0109 14:40:53.222602 4919 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab8fe6f6c00ea4c9d1029cf8d89ad76d437501020a24dfb00672d896d5347274"} err="failed to get container status \"ab8fe6f6c00ea4c9d1029cf8d89ad76d437501020a24dfb00672d896d5347274\": rpc error: code = NotFound desc = could not find container \"ab8fe6f6c00ea4c9d1029cf8d89ad76d437501020a24dfb00672d896d5347274\": container with ID starting with ab8fe6f6c00ea4c9d1029cf8d89ad76d437501020a24dfb00672d896d5347274 not found: ID does not exist" Jan 09 14:40:53 crc kubenswrapper[4919]: I0109 14:40:53.244269 4919 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c170e063-659f-4b24-88a1-46b74b52e3d1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 14:40:53 crc kubenswrapper[4919]: I0109 14:40:53.244299 4919 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c170e063-659f-4b24-88a1-46b74b52e3d1-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 14:40:53 crc kubenswrapper[4919]: I0109 14:40:53.244313 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d82qn\" (UniqueName: \"kubernetes.io/projected/c170e063-659f-4b24-88a1-46b74b52e3d1-kube-api-access-d82qn\") on node \"crc\" DevicePath \"\"" Jan 09 14:40:53 crc kubenswrapper[4919]: I0109 14:40:53.431486 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2fsqc"] Jan 09 14:40:53 crc kubenswrapper[4919]: I0109 14:40:53.440496 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2fsqc"] Jan 09 14:40:54 crc kubenswrapper[4919]: I0109 14:40:54.767652 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c170e063-659f-4b24-88a1-46b74b52e3d1" path="/var/lib/kubelet/pods/c170e063-659f-4b24-88a1-46b74b52e3d1/volumes" Jan 09 14:41:00 crc kubenswrapper[4919]: I0109 14:41:00.164189 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rrd99/must-gather-b5l5g"] Jan 09 14:41:00 crc kubenswrapper[4919]: E0109 14:41:00.165146 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c170e063-659f-4b24-88a1-46b74b52e3d1" containerName="extract-content" Jan 09 14:41:00 crc kubenswrapper[4919]: I0109 14:41:00.165164 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="c170e063-659f-4b24-88a1-46b74b52e3d1" containerName="extract-content" Jan 09 14:41:00 crc kubenswrapper[4919]: E0109 14:41:00.165185 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c170e063-659f-4b24-88a1-46b74b52e3d1" containerName="extract-utilities" Jan 09 14:41:00 crc kubenswrapper[4919]: I0109 14:41:00.165194 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="c170e063-659f-4b24-88a1-46b74b52e3d1" containerName="extract-utilities" Jan 09 14:41:00 crc kubenswrapper[4919]: 
E0109 14:41:00.165230 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10963f75-73e7-4657-9d56-c0330dcaeb81" containerName="registry-server" Jan 09 14:41:00 crc kubenswrapper[4919]: I0109 14:41:00.165239 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="10963f75-73e7-4657-9d56-c0330dcaeb81" containerName="registry-server" Jan 09 14:41:00 crc kubenswrapper[4919]: E0109 14:41:00.165281 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10963f75-73e7-4657-9d56-c0330dcaeb81" containerName="extract-content" Jan 09 14:41:00 crc kubenswrapper[4919]: I0109 14:41:00.165288 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="10963f75-73e7-4657-9d56-c0330dcaeb81" containerName="extract-content" Jan 09 14:41:00 crc kubenswrapper[4919]: E0109 14:41:00.165299 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10963f75-73e7-4657-9d56-c0330dcaeb81" containerName="extract-utilities" Jan 09 14:41:00 crc kubenswrapper[4919]: I0109 14:41:00.165305 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="10963f75-73e7-4657-9d56-c0330dcaeb81" containerName="extract-utilities" Jan 09 14:41:00 crc kubenswrapper[4919]: E0109 14:41:00.165319 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c170e063-659f-4b24-88a1-46b74b52e3d1" containerName="registry-server" Jan 09 14:41:00 crc kubenswrapper[4919]: I0109 14:41:00.165324 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="c170e063-659f-4b24-88a1-46b74b52e3d1" containerName="registry-server" Jan 09 14:41:00 crc kubenswrapper[4919]: I0109 14:41:00.165533 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="c170e063-659f-4b24-88a1-46b74b52e3d1" containerName="registry-server" Jan 09 14:41:00 crc kubenswrapper[4919]: I0109 14:41:00.165572 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="10963f75-73e7-4657-9d56-c0330dcaeb81" containerName="registry-server" Jan 09 14:41:00 crc kubenswrapper[4919]: I0109 14:41:00.166711 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rrd99/must-gather-b5l5g" Jan 09 14:41:00 crc kubenswrapper[4919]: I0109 14:41:00.171344 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-rrd99"/"kube-root-ca.crt" Jan 09 14:41:00 crc kubenswrapper[4919]: I0109 14:41:00.171397 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-rrd99"/"default-dockercfg-l96vx" Jan 09 14:41:00 crc kubenswrapper[4919]: I0109 14:41:00.175494 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-rrd99"/"openshift-service-ca.crt" Jan 09 14:41:00 crc kubenswrapper[4919]: I0109 14:41:00.182030 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-rrd99/must-gather-b5l5g"] Jan 09 14:41:00 crc kubenswrapper[4919]: I0109 14:41:00.273259 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6mp6\" (UniqueName: \"kubernetes.io/projected/bf59c3da-2238-418d-ae83-1c36ed768e3b-kube-api-access-h6mp6\") pod \"must-gather-b5l5g\" (UID: \"bf59c3da-2238-418d-ae83-1c36ed768e3b\") " pod="openshift-must-gather-rrd99/must-gather-b5l5g" Jan 09 14:41:00 crc kubenswrapper[4919]: I0109 14:41:00.273924 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bf59c3da-2238-418d-ae83-1c36ed768e3b-must-gather-output\") pod \"must-gather-b5l5g\" (UID: \"bf59c3da-2238-418d-ae83-1c36ed768e3b\") " pod="openshift-must-gather-rrd99/must-gather-b5l5g" Jan 09 14:41:00 crc kubenswrapper[4919]: I0109 14:41:00.375824 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bf59c3da-2238-418d-ae83-1c36ed768e3b-must-gather-output\") pod \"must-gather-b5l5g\" (UID: \"bf59c3da-2238-418d-ae83-1c36ed768e3b\") " pod="openshift-must-gather-rrd99/must-gather-b5l5g" Jan 09 14:41:00 crc kubenswrapper[4919]: I0109 14:41:00.375968 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6mp6\" (UniqueName: \"kubernetes.io/projected/bf59c3da-2238-418d-ae83-1c36ed768e3b-kube-api-access-h6mp6\") pod \"must-gather-b5l5g\" (UID: \"bf59c3da-2238-418d-ae83-1c36ed768e3b\") " pod="openshift-must-gather-rrd99/must-gather-b5l5g" Jan 09 14:41:00 crc kubenswrapper[4919]: I0109 14:41:00.376540 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bf59c3da-2238-418d-ae83-1c36ed768e3b-must-gather-output\") pod \"must-gather-b5l5g\" (UID: \"bf59c3da-2238-418d-ae83-1c36ed768e3b\") " pod="openshift-must-gather-rrd99/must-gather-b5l5g" Jan 09 14:41:00 crc kubenswrapper[4919]: I0109 14:41:00.398367 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6mp6\" (UniqueName: \"kubernetes.io/projected/bf59c3da-2238-418d-ae83-1c36ed768e3b-kube-api-access-h6mp6\") pod \"must-gather-b5l5g\" (UID: \"bf59c3da-2238-418d-ae83-1c36ed768e3b\") " pod="openshift-must-gather-rrd99/must-gather-b5l5g" Jan 09 14:41:00 crc kubenswrapper[4919]: I0109 14:41:00.485111 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rrd99/must-gather-b5l5g" Jan 09 14:41:01 crc kubenswrapper[4919]: I0109 14:41:01.680046 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-rrd99/must-gather-b5l5g"] Jan 09 14:41:02 crc kubenswrapper[4919]: I0109 14:41:02.195886 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rrd99/must-gather-b5l5g" event={"ID":"bf59c3da-2238-418d-ae83-1c36ed768e3b","Type":"ContainerStarted","Data":"2540af60afc80c314bbfe6d15513ff1c8fc0a7dd8a4b75ceb6a62cc339f2dbf4"} Jan 09 14:41:02 crc kubenswrapper[4919]: I0109 14:41:02.196181 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rrd99/must-gather-b5l5g" event={"ID":"bf59c3da-2238-418d-ae83-1c36ed768e3b","Type":"ContainerStarted","Data":"252b0666425dacc2dffbf483c36cc74a69b4afba82c61acd0e3952c4fe376983"} Jan 09 14:41:02 crc kubenswrapper[4919]: I0109 14:41:02.196201 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rrd99/must-gather-b5l5g" event={"ID":"bf59c3da-2238-418d-ae83-1c36ed768e3b","Type":"ContainerStarted","Data":"528c1c26bd5b053bf0a907681b69a76dee4f49ff031a9ca14e79970da9f4b742"} Jan 09 14:41:02 crc kubenswrapper[4919]: I0109 14:41:02.212924 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-rrd99/must-gather-b5l5g" podStartSLOduration=2.2128983 podStartE2EDuration="2.2128983s" podCreationTimestamp="2026-01-09 14:41:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 14:41:02.210192472 +0000 UTC m=+4241.758031942" watchObservedRunningTime="2026-01-09 14:41:02.2128983 +0000 UTC m=+4241.760737770" Jan 09 14:41:05 crc kubenswrapper[4919]: I0109 14:41:05.724776 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rrd99/crc-debug-g4kcw"] Jan 09 14:41:05 crc kubenswrapper[4919]: I0109 14:41:05.727359 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rrd99/crc-debug-g4kcw" Jan 09 14:41:05 crc kubenswrapper[4919]: I0109 14:41:05.880290 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f596729e-58c5-4367-ad24-56d0cbc2aecf-host\") pod \"crc-debug-g4kcw\" (UID: \"f596729e-58c5-4367-ad24-56d0cbc2aecf\") " pod="openshift-must-gather-rrd99/crc-debug-g4kcw" Jan 09 14:41:05 crc kubenswrapper[4919]: I0109 14:41:05.880721 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rqwc\" (UniqueName: \"kubernetes.io/projected/f596729e-58c5-4367-ad24-56d0cbc2aecf-kube-api-access-9rqwc\") pod \"crc-debug-g4kcw\" (UID: \"f596729e-58c5-4367-ad24-56d0cbc2aecf\") " pod="openshift-must-gather-rrd99/crc-debug-g4kcw" Jan 09 14:41:05 crc kubenswrapper[4919]: I0109 14:41:05.982327 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rqwc\" (UniqueName: \"kubernetes.io/projected/f596729e-58c5-4367-ad24-56d0cbc2aecf-kube-api-access-9rqwc\") pod \"crc-debug-g4kcw\" (UID: \"f596729e-58c5-4367-ad24-56d0cbc2aecf\") " pod="openshift-must-gather-rrd99/crc-debug-g4kcw" Jan 09 14:41:05 crc kubenswrapper[4919]: I0109 14:41:05.982491 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f596729e-58c5-4367-ad24-56d0cbc2aecf-host\") pod \"crc-debug-g4kcw\" (UID: \"f596729e-58c5-4367-ad24-56d0cbc2aecf\") " pod="openshift-must-gather-rrd99/crc-debug-g4kcw" Jan 09 14:41:05 crc kubenswrapper[4919]: I0109 14:41:05.982662 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f596729e-58c5-4367-ad24-56d0cbc2aecf-host\") pod \"crc-debug-g4kcw\" (UID: \"f596729e-58c5-4367-ad24-56d0cbc2aecf\") " pod="openshift-must-gather-rrd99/crc-debug-g4kcw" Jan 09 14:41:06 crc kubenswrapper[4919]: I0109 14:41:06.003202 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rqwc\" (UniqueName: \"kubernetes.io/projected/f596729e-58c5-4367-ad24-56d0cbc2aecf-kube-api-access-9rqwc\") pod \"crc-debug-g4kcw\" (UID: \"f596729e-58c5-4367-ad24-56d0cbc2aecf\") " pod="openshift-must-gather-rrd99/crc-debug-g4kcw" Jan 09 14:41:06 crc kubenswrapper[4919]: I0109 14:41:06.044488 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rrd99/crc-debug-g4kcw" Jan 09 14:41:06 crc kubenswrapper[4919]: W0109 14:41:06.076770 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf596729e_58c5_4367_ad24_56d0cbc2aecf.slice/crio-1e8c7845600fc0d58a7d2115fbfb9f21f372d188d4301c3bef87868d8e25f6b6 WatchSource:0}: Error finding container 1e8c7845600fc0d58a7d2115fbfb9f21f372d188d4301c3bef87868d8e25f6b6: Status 404 returned error can't find the container with id 1e8c7845600fc0d58a7d2115fbfb9f21f372d188d4301c3bef87868d8e25f6b6 Jan 09 14:41:06 crc kubenswrapper[4919]: I0109 14:41:06.237696 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rrd99/crc-debug-g4kcw" event={"ID":"f596729e-58c5-4367-ad24-56d0cbc2aecf","Type":"ContainerStarted","Data":"1e8c7845600fc0d58a7d2115fbfb9f21f372d188d4301c3bef87868d8e25f6b6"} Jan 09 14:41:07 crc kubenswrapper[4919]: I0109 14:41:07.247861 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rrd99/crc-debug-g4kcw" event={"ID":"f596729e-58c5-4367-ad24-56d0cbc2aecf","Type":"ContainerStarted","Data":"1e4f793d7c2f604f59c3a725a1c106a881557eb14d402368612c4c55fcce1390"} Jan 09 14:41:07 crc kubenswrapper[4919]: I0109 14:41:07.269586 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-rrd99/crc-debug-g4kcw" podStartSLOduration=2.269562444 podStartE2EDuration="2.269562444s" podCreationTimestamp="2026-01-09 14:41:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 14:41:07.261319999 +0000 UTC m=+4246.809159449" watchObservedRunningTime="2026-01-09 14:41:07.269562444 +0000 UTC m=+4246.817401884" Jan 09 14:41:42 crc kubenswrapper[4919]: I0109 14:41:42.593335 4919 generic.go:334] "Generic (PLEG): container finished" podID="f596729e-58c5-4367-ad24-56d0cbc2aecf" containerID="1e4f793d7c2f604f59c3a725a1c106a881557eb14d402368612c4c55fcce1390" exitCode=0 Jan 09 14:41:42 crc kubenswrapper[4919]: I0109 14:41:42.593410 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rrd99/crc-debug-g4kcw" event={"ID":"f596729e-58c5-4367-ad24-56d0cbc2aecf","Type":"ContainerDied","Data":"1e4f793d7c2f604f59c3a725a1c106a881557eb14d402368612c4c55fcce1390"} Jan 09 14:41:43 crc kubenswrapper[4919]: I0109 14:41:43.892553 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rrd99/crc-debug-g4kcw" Jan 09 14:41:43 crc kubenswrapper[4919]: I0109 14:41:43.933037 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rrd99/crc-debug-g4kcw"] Jan 09 14:41:43 crc kubenswrapper[4919]: I0109 14:41:43.941853 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rrd99/crc-debug-g4kcw"] Jan 09 14:41:43 crc kubenswrapper[4919]: I0109 14:41:43.967255 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rqwc\" (UniqueName: \"kubernetes.io/projected/f596729e-58c5-4367-ad24-56d0cbc2aecf-kube-api-access-9rqwc\") pod \"f596729e-58c5-4367-ad24-56d0cbc2aecf\" (UID: \"f596729e-58c5-4367-ad24-56d0cbc2aecf\") " Jan 09 14:41:43 crc kubenswrapper[4919]: I0109 14:41:43.967363 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f596729e-58c5-4367-ad24-56d0cbc2aecf-host\") pod \"f596729e-58c5-4367-ad24-56d0cbc2aecf\" (UID: \"f596729e-58c5-4367-ad24-56d0cbc2aecf\") " Jan 09 14:41:43 crc kubenswrapper[4919]: I0109 14:41:43.967473 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f596729e-58c5-4367-ad24-56d0cbc2aecf-host" (OuterVolumeSpecName: "host") pod "f596729e-58c5-4367-ad24-56d0cbc2aecf" (UID: "f596729e-58c5-4367-ad24-56d0cbc2aecf"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 14:41:43 crc kubenswrapper[4919]: I0109 14:41:43.968162 4919 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f596729e-58c5-4367-ad24-56d0cbc2aecf-host\") on node \"crc\" DevicePath \"\"" Jan 09 14:41:43 crc kubenswrapper[4919]: I0109 14:41:43.975341 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f596729e-58c5-4367-ad24-56d0cbc2aecf-kube-api-access-9rqwc" (OuterVolumeSpecName: "kube-api-access-9rqwc") pod "f596729e-58c5-4367-ad24-56d0cbc2aecf" (UID: "f596729e-58c5-4367-ad24-56d0cbc2aecf"). InnerVolumeSpecName "kube-api-access-9rqwc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 14:41:44 crc kubenswrapper[4919]: I0109 14:41:44.069897 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rqwc\" (UniqueName: \"kubernetes.io/projected/f596729e-58c5-4367-ad24-56d0cbc2aecf-kube-api-access-9rqwc\") on node \"crc\" DevicePath \"\"" Jan 09 14:41:44 crc kubenswrapper[4919]: I0109 14:41:44.613505 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e8c7845600fc0d58a7d2115fbfb9f21f372d188d4301c3bef87868d8e25f6b6" Jan 09 14:41:44 crc kubenswrapper[4919]: I0109 14:41:44.613553 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rrd99/crc-debug-g4kcw" Jan 09 14:41:44 crc kubenswrapper[4919]: I0109 14:41:44.763280 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f596729e-58c5-4367-ad24-56d0cbc2aecf" path="/var/lib/kubelet/pods/f596729e-58c5-4367-ad24-56d0cbc2aecf/volumes" Jan 09 14:41:45 crc kubenswrapper[4919]: I0109 14:41:45.120976 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rrd99/crc-debug-9wp6c"] Jan 09 14:41:45 crc kubenswrapper[4919]: E0109 14:41:45.121841 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f596729e-58c5-4367-ad24-56d0cbc2aecf" containerName="container-00" Jan 09 14:41:45 crc kubenswrapper[4919]: I0109 14:41:45.121860 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="f596729e-58c5-4367-ad24-56d0cbc2aecf" containerName="container-00" Jan 09 14:41:45 crc kubenswrapper[4919]: I0109 14:41:45.122144 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="f596729e-58c5-4367-ad24-56d0cbc2aecf" containerName="container-00" Jan 09 14:41:45 crc kubenswrapper[4919]: I0109 14:41:45.123181 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rrd99/crc-debug-9wp6c" Jan 09 14:41:45 crc kubenswrapper[4919]: I0109 14:41:45.189399 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl2r4\" (UniqueName: \"kubernetes.io/projected/af36a974-5b16-4020-a9b2-132050e83e72-kube-api-access-cl2r4\") pod \"crc-debug-9wp6c\" (UID: \"af36a974-5b16-4020-a9b2-132050e83e72\") " pod="openshift-must-gather-rrd99/crc-debug-9wp6c" Jan 09 14:41:45 crc kubenswrapper[4919]: I0109 14:41:45.189583 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/af36a974-5b16-4020-a9b2-132050e83e72-host\") pod \"crc-debug-9wp6c\" (UID: \"af36a974-5b16-4020-a9b2-132050e83e72\") " pod="openshift-must-gather-rrd99/crc-debug-9wp6c" Jan 09 14:41:45 crc kubenswrapper[4919]: I0109 14:41:45.291373 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cl2r4\" (UniqueName: \"kubernetes.io/projected/af36a974-5b16-4020-a9b2-132050e83e72-kube-api-access-cl2r4\") pod \"crc-debug-9wp6c\" (UID: \"af36a974-5b16-4020-a9b2-132050e83e72\") " pod="openshift-must-gather-rrd99/crc-debug-9wp6c" Jan 09 14:41:45 crc kubenswrapper[4919]: I0109 14:41:45.291537 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/af36a974-5b16-4020-a9b2-132050e83e72-host\") pod \"crc-debug-9wp6c\" (UID: \"af36a974-5b16-4020-a9b2-132050e83e72\") " pod="openshift-must-gather-rrd99/crc-debug-9wp6c" Jan 09 14:41:45 crc kubenswrapper[4919]: I0109 14:41:45.291669 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/af36a974-5b16-4020-a9b2-132050e83e72-host\") pod \"crc-debug-9wp6c\" (UID: \"af36a974-5b16-4020-a9b2-132050e83e72\") " pod="openshift-must-gather-rrd99/crc-debug-9wp6c" Jan 09 14:41:45 crc kubenswrapper[4919]: I0109 14:41:45.307514 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cl2r4\" (UniqueName: \"kubernetes.io/projected/af36a974-5b16-4020-a9b2-132050e83e72-kube-api-access-cl2r4\") pod \"crc-debug-9wp6c\" (UID: \"af36a974-5b16-4020-a9b2-132050e83e72\") " 
pod="openshift-must-gather-rrd99/crc-debug-9wp6c" Jan 09 14:41:45 crc kubenswrapper[4919]: I0109 14:41:45.441603 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rrd99/crc-debug-9wp6c" Jan 09 14:41:45 crc kubenswrapper[4919]: I0109 14:41:45.623974 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rrd99/crc-debug-9wp6c" event={"ID":"af36a974-5b16-4020-a9b2-132050e83e72","Type":"ContainerStarted","Data":"2ec5b7bb62efbcb2756ce5a396dc53a1cc190713d4496cf2e4f4fe52116c6d62"} Jan 09 14:41:46 crc kubenswrapper[4919]: I0109 14:41:46.635932 4919 generic.go:334] "Generic (PLEG): container finished" podID="af36a974-5b16-4020-a9b2-132050e83e72" containerID="d88482858d88751ea2cba1d8d65b3c7b8091258381b386f81dbec331638bfba1" exitCode=0 Jan 09 14:41:46 crc kubenswrapper[4919]: I0109 14:41:46.636024 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rrd99/crc-debug-9wp6c" event={"ID":"af36a974-5b16-4020-a9b2-132050e83e72","Type":"ContainerDied","Data":"d88482858d88751ea2cba1d8d65b3c7b8091258381b386f81dbec331638bfba1"} Jan 09 14:41:47 crc kubenswrapper[4919]: I0109 14:41:47.035122 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rrd99/crc-debug-9wp6c"] Jan 09 14:41:47 crc kubenswrapper[4919]: I0109 14:41:47.051374 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rrd99/crc-debug-9wp6c"] Jan 09 14:41:47 crc kubenswrapper[4919]: I0109 14:41:47.776928 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rrd99/crc-debug-9wp6c" Jan 09 14:41:47 crc kubenswrapper[4919]: I0109 14:41:47.950569 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/af36a974-5b16-4020-a9b2-132050e83e72-host\") pod \"af36a974-5b16-4020-a9b2-132050e83e72\" (UID: \"af36a974-5b16-4020-a9b2-132050e83e72\") " Jan 09 14:41:47 crc kubenswrapper[4919]: I0109 14:41:47.950728 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cl2r4\" (UniqueName: \"kubernetes.io/projected/af36a974-5b16-4020-a9b2-132050e83e72-kube-api-access-cl2r4\") pod \"af36a974-5b16-4020-a9b2-132050e83e72\" (UID: \"af36a974-5b16-4020-a9b2-132050e83e72\") " Jan 09 14:41:47 crc kubenswrapper[4919]: I0109 14:41:47.952121 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af36a974-5b16-4020-a9b2-132050e83e72-host" (OuterVolumeSpecName: "host") pod "af36a974-5b16-4020-a9b2-132050e83e72" (UID: "af36a974-5b16-4020-a9b2-132050e83e72"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 14:41:47 crc kubenswrapper[4919]: I0109 14:41:47.956473 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af36a974-5b16-4020-a9b2-132050e83e72-kube-api-access-cl2r4" (OuterVolumeSpecName: "kube-api-access-cl2r4") pod "af36a974-5b16-4020-a9b2-132050e83e72" (UID: "af36a974-5b16-4020-a9b2-132050e83e72"). InnerVolumeSpecName "kube-api-access-cl2r4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 14:41:48 crc kubenswrapper[4919]: I0109 14:41:48.054582 4919 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/af36a974-5b16-4020-a9b2-132050e83e72-host\") on node \"crc\" DevicePath \"\"" Jan 09 14:41:48 crc kubenswrapper[4919]: I0109 14:41:48.054636 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cl2r4\" (UniqueName: \"kubernetes.io/projected/af36a974-5b16-4020-a9b2-132050e83e72-kube-api-access-cl2r4\") on node \"crc\" DevicePath \"\"" Jan 09 14:41:48 crc kubenswrapper[4919]: I0109 14:41:48.255722 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rrd99/crc-debug-j2rgc"] Jan 09 14:41:48 crc kubenswrapper[4919]: E0109 14:41:48.256164 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af36a974-5b16-4020-a9b2-132050e83e72" containerName="container-00" Jan 09 14:41:48 crc kubenswrapper[4919]: I0109 14:41:48.256183 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="af36a974-5b16-4020-a9b2-132050e83e72" containerName="container-00" Jan 09 14:41:48 crc kubenswrapper[4919]: I0109 14:41:48.256384 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="af36a974-5b16-4020-a9b2-132050e83e72" containerName="container-00" Jan 09 14:41:48 crc kubenswrapper[4919]: I0109 14:41:48.257059 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rrd99/crc-debug-j2rgc" Jan 09 14:41:48 crc kubenswrapper[4919]: I0109 14:41:48.360459 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr98r\" (UniqueName: \"kubernetes.io/projected/f6d76415-2015-484e-96e2-151388085415-kube-api-access-pr98r\") pod \"crc-debug-j2rgc\" (UID: \"f6d76415-2015-484e-96e2-151388085415\") " pod="openshift-must-gather-rrd99/crc-debug-j2rgc" Jan 09 14:41:48 crc kubenswrapper[4919]: I0109 14:41:48.360791 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f6d76415-2015-484e-96e2-151388085415-host\") pod \"crc-debug-j2rgc\" (UID: \"f6d76415-2015-484e-96e2-151388085415\") " pod="openshift-must-gather-rrd99/crc-debug-j2rgc" Jan 09 14:41:48 crc kubenswrapper[4919]: I0109 14:41:48.463299 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f6d76415-2015-484e-96e2-151388085415-host\") pod \"crc-debug-j2rgc\" (UID: \"f6d76415-2015-484e-96e2-151388085415\") " pod="openshift-must-gather-rrd99/crc-debug-j2rgc" Jan 09 14:41:48 crc kubenswrapper[4919]: I0109 14:41:48.463407 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr98r\" (UniqueName: \"kubernetes.io/projected/f6d76415-2015-484e-96e2-151388085415-kube-api-access-pr98r\") pod \"crc-debug-j2rgc\" (UID: \"f6d76415-2015-484e-96e2-151388085415\") " pod="openshift-must-gather-rrd99/crc-debug-j2rgc" Jan 09 14:41:48 crc kubenswrapper[4919]: I0109 14:41:48.463487 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f6d76415-2015-484e-96e2-151388085415-host\") pod \"crc-debug-j2rgc\" (UID: \"f6d76415-2015-484e-96e2-151388085415\") " pod="openshift-must-gather-rrd99/crc-debug-j2rgc" Jan 09 14:41:48 crc kubenswrapper[4919]: I0109 14:41:48.491172 4919 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-pr98r\" (UniqueName: \"kubernetes.io/projected/f6d76415-2015-484e-96e2-151388085415-kube-api-access-pr98r\") pod \"crc-debug-j2rgc\" (UID: \"f6d76415-2015-484e-96e2-151388085415\") " pod="openshift-must-gather-rrd99/crc-debug-j2rgc" Jan 09 14:41:48 crc kubenswrapper[4919]: I0109 14:41:48.575655 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rrd99/crc-debug-j2rgc" Jan 09 14:41:48 crc kubenswrapper[4919]: W0109 14:41:48.606456 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6d76415_2015_484e_96e2_151388085415.slice/crio-9038df94109889b3cc67dfd9586a98222e7db2e9ef4c909bdb59845271c09737 WatchSource:0}: Error finding container 9038df94109889b3cc67dfd9586a98222e7db2e9ef4c909bdb59845271c09737: Status 404 returned error can't find the container with id 9038df94109889b3cc67dfd9586a98222e7db2e9ef4c909bdb59845271c09737 Jan 09 14:41:48 crc kubenswrapper[4919]: I0109 14:41:48.668772 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rrd99/crc-debug-j2rgc" event={"ID":"f6d76415-2015-484e-96e2-151388085415","Type":"ContainerStarted","Data":"9038df94109889b3cc67dfd9586a98222e7db2e9ef4c909bdb59845271c09737"} Jan 09 14:41:48 crc kubenswrapper[4919]: I0109 14:41:48.670352 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ec5b7bb62efbcb2756ce5a396dc53a1cc190713d4496cf2e4f4fe52116c6d62" Jan 09 14:41:48 crc kubenswrapper[4919]: I0109 14:41:48.670435 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rrd99/crc-debug-9wp6c" Jan 09 14:41:48 crc kubenswrapper[4919]: I0109 14:41:48.762700 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af36a974-5b16-4020-a9b2-132050e83e72" path="/var/lib/kubelet/pods/af36a974-5b16-4020-a9b2-132050e83e72/volumes" Jan 09 14:41:49 crc kubenswrapper[4919]: I0109 14:41:49.682583 4919 generic.go:334] "Generic (PLEG): container finished" podID="f6d76415-2015-484e-96e2-151388085415" containerID="0be7b3c170121cf52c6be4c31bc3c480189c3ee50b05972749aa1e9fa43db2f7" exitCode=0 Jan 09 14:41:49 crc kubenswrapper[4919]: I0109 14:41:49.683969 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rrd99/crc-debug-j2rgc" event={"ID":"f6d76415-2015-484e-96e2-151388085415","Type":"ContainerDied","Data":"0be7b3c170121cf52c6be4c31bc3c480189c3ee50b05972749aa1e9fa43db2f7"} Jan 09 14:41:49 crc kubenswrapper[4919]: I0109 14:41:49.724223 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rrd99/crc-debug-j2rgc"] Jan 09 14:41:49 crc kubenswrapper[4919]: I0109 14:41:49.741102 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rrd99/crc-debug-j2rgc"] Jan 09 14:41:50 crc kubenswrapper[4919]: I0109 14:41:50.814540 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rrd99/crc-debug-j2rgc" Jan 09 14:41:50 crc kubenswrapper[4919]: I0109 14:41:50.908771 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f6d76415-2015-484e-96e2-151388085415-host\") pod \"f6d76415-2015-484e-96e2-151388085415\" (UID: \"f6d76415-2015-484e-96e2-151388085415\") " Jan 09 14:41:50 crc kubenswrapper[4919]: I0109 14:41:50.909385 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pr98r\" (UniqueName: \"kubernetes.io/projected/f6d76415-2015-484e-96e2-151388085415-kube-api-access-pr98r\") pod \"f6d76415-2015-484e-96e2-151388085415\" (UID: \"f6d76415-2015-484e-96e2-151388085415\") " Jan 09 14:41:50 crc kubenswrapper[4919]: I0109 14:41:50.908933 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6d76415-2015-484e-96e2-151388085415-host" (OuterVolumeSpecName: "host") pod "f6d76415-2015-484e-96e2-151388085415" (UID: "f6d76415-2015-484e-96e2-151388085415"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 14:41:50 crc kubenswrapper[4919]: I0109 14:41:50.918172 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6d76415-2015-484e-96e2-151388085415-kube-api-access-pr98r" (OuterVolumeSpecName: "kube-api-access-pr98r") pod "f6d76415-2015-484e-96e2-151388085415" (UID: "f6d76415-2015-484e-96e2-151388085415"). InnerVolumeSpecName "kube-api-access-pr98r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 14:41:51 crc kubenswrapper[4919]: I0109 14:41:51.012000 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pr98r\" (UniqueName: \"kubernetes.io/projected/f6d76415-2015-484e-96e2-151388085415-kube-api-access-pr98r\") on node \"crc\" DevicePath \"\"" Jan 09 14:41:51 crc kubenswrapper[4919]: I0109 14:41:51.012041 4919 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f6d76415-2015-484e-96e2-151388085415-host\") on node \"crc\" DevicePath \"\"" Jan 09 14:41:51 crc kubenswrapper[4919]: I0109 14:41:51.246742 4919 patch_prober.go:28] interesting pod/machine-config-daemon-9m5lv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 14:41:51 crc kubenswrapper[4919]: I0109 14:41:51.246802 4919 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 14:41:51 crc kubenswrapper[4919]: I0109 14:41:51.704819 4919 scope.go:117] "RemoveContainer" containerID="0be7b3c170121cf52c6be4c31bc3c480189c3ee50b05972749aa1e9fa43db2f7" Jan 09 14:41:51 crc kubenswrapper[4919]: I0109 14:41:51.704867 4919 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rrd99/crc-debug-j2rgc" Jan 09 14:41:52 crc kubenswrapper[4919]: I0109 14:41:52.763419 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6d76415-2015-484e-96e2-151388085415" path="/var/lib/kubelet/pods/f6d76415-2015-484e-96e2-151388085415/volumes" Jan 09 14:42:21 crc kubenswrapper[4919]: I0109 14:42:21.247191 4919 patch_prober.go:28] interesting pod/machine-config-daemon-9m5lv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 14:42:21 crc kubenswrapper[4919]: I0109 14:42:21.247659 4919 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 14:42:22 crc kubenswrapper[4919]: I0109 14:42:22.586488 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-56f5497b64-ws7gk_f23efa08-cf06-4a61-a081-60b52efe8e8f/barbican-api/0.log" Jan 09 14:42:22 crc kubenswrapper[4919]: I0109 14:42:22.637759 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-56f5497b64-ws7gk_f23efa08-cf06-4a61-a081-60b52efe8e8f/barbican-api-log/0.log" Jan 09 14:42:22 crc kubenswrapper[4919]: I0109 14:42:22.796912 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5bc67fd74-frwbh_be2245b9-76ae-4599-ba6a-97e327453f95/barbican-keystone-listener/0.log" Jan 09 14:42:22 crc kubenswrapper[4919]: I0109 14:42:22.825972 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5bc67fd74-frwbh_be2245b9-76ae-4599-ba6a-97e327453f95/barbican-keystone-listener-log/0.log" Jan 09 14:42:22 crc kubenswrapper[4919]: I0109 14:42:22.905932 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5d7884df69-vfc9g_15fcc721-300d-4084-9fbe-756903a4f58b/barbican-worker/0.log" Jan 09 14:42:23 crc kubenswrapper[4919]: I0109 14:42:23.030766 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5d7884df69-vfc9g_15fcc721-300d-4084-9fbe-756903a4f58b/barbican-worker-log/0.log" Jan 09 14:42:23 crc kubenswrapper[4919]: I0109 14:42:23.229200 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-k54b2_2e1540e3-6358-48ae-ac2a-08e90ab54cbb/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 14:42:23 crc kubenswrapper[4919]: I0109 14:42:23.295365 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2c31d277-b08a-41e0-9f01-95ea17af82f4/ceilometer-central-agent/0.log" Jan 09 14:42:23 crc kubenswrapper[4919]: I0109 14:42:23.358752 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2c31d277-b08a-41e0-9f01-95ea17af82f4/ceilometer-notification-agent/0.log" Jan 09 14:42:23 crc kubenswrapper[4919]: I0109 14:42:23.416774 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2c31d277-b08a-41e0-9f01-95ea17af82f4/sg-core/0.log" Jan 09 14:42:23 crc kubenswrapper[4919]: I0109 14:42:23.439162 4919 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_2c31d277-b08a-41e0-9f01-95ea17af82f4/proxy-httpd/0.log" Jan 09 14:42:23 crc kubenswrapper[4919]: I0109 14:42:23.962373 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_0b8d4fb5-64a0-4774-8f0f-273c476d7b81/cinder-api-log/0.log" Jan 09 14:42:24 crc kubenswrapper[4919]: I0109 14:42:24.001450 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_0b8d4fb5-64a0-4774-8f0f-273c476d7b81/cinder-api/0.log" Jan 09 14:42:24 crc kubenswrapper[4919]: I0109 14:42:24.192144 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_9637b6f9-f7a2-4056-b9ae-87b4af7e475e/probe/0.log" Jan 09 14:42:24 crc kubenswrapper[4919]: I0109 14:42:24.252965 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_9637b6f9-f7a2-4056-b9ae-87b4af7e475e/cinder-scheduler/0.log" Jan 09 14:42:24 crc kubenswrapper[4919]: I0109 14:42:24.269729 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-m8pld_eb6e93ba-4c74-4250-b8c2-fc85d52d6d3c/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 14:42:24 crc kubenswrapper[4919]: I0109 14:42:24.636007 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-d7b79b84c-mbtbk_35d091b1-8210-4d82-bde9-2b14bcfb8227/init/0.log" Jan 09 14:42:24 crc kubenswrapper[4919]: I0109 14:42:24.653667 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-vsw76_6dd14cc5-f2bf-43bc-b3e6-9704c2728708/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 14:42:24 crc kubenswrapper[4919]: I0109 14:42:24.867950 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-d7b79b84c-mbtbk_35d091b1-8210-4d82-bde9-2b14bcfb8227/init/0.log" Jan 09 14:42:25 crc kubenswrapper[4919]: I0109 14:42:25.015640 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-d7b79b84c-mbtbk_35d091b1-8210-4d82-bde9-2b14bcfb8227/dnsmasq-dns/0.log" Jan 09 14:42:25 crc kubenswrapper[4919]: I0109 14:42:25.069461 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-gdkgc_3004c02a-530a-44c4-98b4-825dbb64296f/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 14:42:25 crc kubenswrapper[4919]: I0109 14:42:25.198705 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_58571fe0-89fb-41ed-a3eb-b04d6224dd1d/glance-httpd/0.log" Jan 09 14:42:25 crc kubenswrapper[4919]: I0109 14:42:25.238907 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_58571fe0-89fb-41ed-a3eb-b04d6224dd1d/glance-log/0.log" Jan 09 14:42:25 crc kubenswrapper[4919]: I0109 14:42:25.391188 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_735040be-a013-45ef-a590-2819585ea47c/glance-httpd/0.log" Jan 09 14:42:25 crc kubenswrapper[4919]: I0109 14:42:25.398885 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_735040be-a013-45ef-a590-2819585ea47c/glance-log/0.log" Jan 09 14:42:25 crc kubenswrapper[4919]: I0109 14:42:25.585988 4919 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_horizon-75dd96cc4d-xnspb_db2aeda5-21fd-4b61-bb59-d8d0b78884c2/horizon/0.log" Jan 09 14:42:25 crc kubenswrapper[4919]: I0109 14:42:25.757819 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-sm2tt_f7e5dde7-0e67-4c31-83c6-9946c5b23755/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 14:42:25 crc kubenswrapper[4919]: I0109 14:42:25.884431 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-b9fzw_d079d443-cf8c-47ff-96d9-a3fe59583ad8/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 14:42:25 crc kubenswrapper[4919]: I0109 14:42:25.943835 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-75dd96cc4d-xnspb_db2aeda5-21fd-4b61-bb59-d8d0b78884c2/horizon-log/0.log" Jan 09 14:42:26 crc kubenswrapper[4919]: I0109 14:42:26.133933 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29466121-rx8p6_e8fd615e-ac5c-4caa-8eaf-5c99df3fa111/keystone-cron/0.log" Jan 09 14:42:26 crc kubenswrapper[4919]: I0109 14:42:26.161980 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-6575bd5545-2lr88_22246922-04ad-4013-a96a-71e00093dbed/keystone-api/0.log" Jan 09 14:42:26 crc kubenswrapper[4919]: I0109 14:42:26.202934 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_3e1aa728-2078-4e6c-b738-0bc97b1f14ff/kube-state-metrics/0.log" Jan 09 14:42:26 crc kubenswrapper[4919]: I0109 14:42:26.378810 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-k82m6_acecffca-8dfb-4702-851a-f8dfe2659e98/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 14:42:26 crc kubenswrapper[4919]: I0109 14:42:26.712313 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-584b4bc589-6qnkd_b93b1e1b-72fa-443d-ba2c-e9c9920f918a/neutron-httpd/0.log" Jan 09 14:42:26 crc kubenswrapper[4919]: I0109 14:42:26.742969 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-584b4bc589-6qnkd_b93b1e1b-72fa-443d-ba2c-e9c9920f918a/neutron-api/0.log" Jan 09 14:42:26 crc kubenswrapper[4919]: I0109 14:42:26.780555 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-ptlld_e9770e19-27d5-49ff-a358-7f455b3e6d8e/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 14:42:27 crc kubenswrapper[4919]: I0109 14:42:27.295185 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_f2f3c3f3-07a5-44a8-a7da-0bf4b180c56a/nova-api-log/0.log" Jan 09 14:42:27 crc kubenswrapper[4919]: I0109 14:42:27.391707 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_b339d912-b884-4fd0-8b93-c21c2b6ce58c/nova-cell0-conductor-conductor/0.log" Jan 09 14:42:27 crc kubenswrapper[4919]: I0109 14:42:27.732383 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_8c9fed7c-6744-4cce-b80c-21ef4352ca7b/nova-cell1-conductor-conductor/0.log" Jan 09 14:42:27 crc kubenswrapper[4919]: I0109 14:42:27.751463 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_4402784e-5d9b-4d52-86a8-57dc43cc2917/nova-cell1-novncproxy-novncproxy/0.log" Jan 09 14:42:27 crc kubenswrapper[4919]: I0109 14:42:27.816025 4919 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_f2f3c3f3-07a5-44a8-a7da-0bf4b180c56a/nova-api-api/0.log" Jan 09 14:42:27 crc kubenswrapper[4919]: I0109 14:42:27.985563 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-v9kt9_cb5b9fe4-6d05-4753-9557-a7c5b6f8c6a1/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 14:42:28 crc kubenswrapper[4919]: I0109 14:42:28.123331 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_10d389ef-fb74-406c-a1cb-8a591b708726/nova-metadata-log/0.log" Jan 09 14:42:28 crc kubenswrapper[4919]: I0109 14:42:28.428027 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a078e997-b08e-44a9-89a7-bf2fe9eaed11/mysql-bootstrap/0.log" Jan 09 14:42:28 crc kubenswrapper[4919]: I0109 14:42:28.470265 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_c8b56a5e-6bc1-4366-87e6-81d8e4b8100b/nova-scheduler-scheduler/0.log" Jan 09 14:42:28 crc kubenswrapper[4919]: I0109 14:42:28.598737 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a078e997-b08e-44a9-89a7-bf2fe9eaed11/mysql-bootstrap/0.log" Jan 09 14:42:28 crc kubenswrapper[4919]: I0109 14:42:28.621652 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a078e997-b08e-44a9-89a7-bf2fe9eaed11/galera/0.log" Jan 09 14:42:28 crc kubenswrapper[4919]: I0109 14:42:28.809480 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_3d0c2080-b1ea-4ff9-ad51-d970cce81d56/mysql-bootstrap/0.log" Jan 09 14:42:28 crc kubenswrapper[4919]: I0109 14:42:28.974016 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_3d0c2080-b1ea-4ff9-ad51-d970cce81d56/galera/0.log" Jan 09 14:42:28 crc kubenswrapper[4919]: I0109 14:42:28.981159 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_3d0c2080-b1ea-4ff9-ad51-d970cce81d56/mysql-bootstrap/0.log" Jan 09 14:42:29 crc kubenswrapper[4919]: I0109 14:42:29.172043 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_284d399b-7c07-4e99-9a95-32d600fab162/openstackclient/0.log" Jan 09 14:42:29 crc kubenswrapper[4919]: I0109 14:42:29.256550 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-fdp27_9415e6b9-c9a5-4ed6-ab9e-ef42cfa1bbe6/openstack-network-exporter/0.log" Jan 09 14:42:29 crc kubenswrapper[4919]: I0109 14:42:29.476011 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-n9g6d_088a3f18-0aab-4042-b674-752c23ed3ac3/ovn-controller/0.log" Jan 09 14:42:29 crc kubenswrapper[4919]: I0109 14:42:29.617495 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rrsng_91789be0-3c6f-46d6-a222-d75d49e63662/ovsdb-server-init/0.log" Jan 09 14:42:29 crc kubenswrapper[4919]: I0109 14:42:29.672874 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_10d389ef-fb74-406c-a1cb-8a591b708726/nova-metadata-metadata/0.log" Jan 09 14:42:29 crc kubenswrapper[4919]: I0109 14:42:29.830171 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rrsng_91789be0-3c6f-46d6-a222-d75d49e63662/ovs-vswitchd/0.log" Jan 09 14:42:29 crc kubenswrapper[4919]: I0109 14:42:29.836853 4919 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rrsng_91789be0-3c6f-46d6-a222-d75d49e63662/ovsdb-server/0.log" Jan 09 14:42:29 crc kubenswrapper[4919]: I0109 14:42:29.864967 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rrsng_91789be0-3c6f-46d6-a222-d75d49e63662/ovsdb-server-init/0.log" Jan 09 14:42:30 crc kubenswrapper[4919]: I0109 14:42:30.067547 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-8nq44_527824ae-c763-4efc-ba39-1cd36664996f/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 14:42:30 crc kubenswrapper[4919]: I0109 14:42:30.140031 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_68449649-bcc2-41c2-9a6a-a91452a48282/openstack-network-exporter/0.log" Jan 09 14:42:30 crc kubenswrapper[4919]: I0109 14:42:30.249269 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_68449649-bcc2-41c2-9a6a-a91452a48282/ovn-northd/0.log" Jan 09 14:42:30 crc kubenswrapper[4919]: I0109 14:42:30.302520 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_80e0f01c-3e7c-456d-ae74-276ef085ff36/openstack-network-exporter/0.log" Jan 09 14:42:30 crc kubenswrapper[4919]: I0109 14:42:30.377012 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_80e0f01c-3e7c-456d-ae74-276ef085ff36/ovsdbserver-nb/0.log" Jan 09 14:42:30 crc kubenswrapper[4919]: I0109 14:42:30.512805 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_62681dab-a75d-4270-bb2f-c8f963838172/openstack-network-exporter/0.log" Jan 09 14:42:30 crc kubenswrapper[4919]: I0109 14:42:30.574519 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_62681dab-a75d-4270-bb2f-c8f963838172/ovsdbserver-sb/0.log" Jan 09 14:42:30 crc kubenswrapper[4919]: I0109 14:42:30.845385 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-74bbf9c4b-kjq9x_aafcf4ee-61ee-448a-91d4-d3b215b2c42e/placement-api/0.log" Jan 09 14:42:30 crc kubenswrapper[4919]: I0109 14:42:30.875439 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-74bbf9c4b-kjq9x_aafcf4ee-61ee-448a-91d4-d3b215b2c42e/placement-log/0.log" Jan 09 14:42:30 crc kubenswrapper[4919]: I0109 14:42:30.905334 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_196a3f64-983f-4369-93cf-9501a68ee8a4/setup-container/0.log" Jan 09 14:42:31 crc kubenswrapper[4919]: I0109 14:42:31.174529 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_196a3f64-983f-4369-93cf-9501a68ee8a4/setup-container/0.log" Jan 09 14:42:31 crc kubenswrapper[4919]: I0109 14:42:31.194673 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_196a3f64-983f-4369-93cf-9501a68ee8a4/rabbitmq/0.log" Jan 09 14:42:31 crc kubenswrapper[4919]: I0109 14:42:31.243358 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_7239a87a-aba2-4367-b1c3-2800f1a130d8/setup-container/0.log" Jan 09 14:42:31 crc kubenswrapper[4919]: I0109 14:42:31.414581 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_7239a87a-aba2-4367-b1c3-2800f1a130d8/rabbitmq/0.log" Jan 09 14:42:31 crc kubenswrapper[4919]: I0109 14:42:31.480083 4919 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_rabbitmq-server-0_7239a87a-aba2-4367-b1c3-2800f1a130d8/setup-container/0.log" Jan 09 14:42:31 crc kubenswrapper[4919]: I0109 14:42:31.483484 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-jvfp8_781cfeb4-857a-490b-a97e-02bcadab1886/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 14:42:32 crc kubenswrapper[4919]: I0109 14:42:32.352912 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-ghk4m_167890d2-4e03-4537-a339-d4efc3b64c54/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 14:42:32 crc kubenswrapper[4919]: I0109 14:42:32.391796 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-qkcmj_6ff771e7-314f-493f-b5e8-fe2eb503aa52/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 14:42:32 crc kubenswrapper[4919]: I0109 14:42:32.577270 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-crhq6_1e8137a4-0169-4f73-b616-6a0554aa426f/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 14:42:32 crc kubenswrapper[4919]: I0109 14:42:32.653904 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-jvwq2_a7fb05e2-9059-4447-8ed5-f125411a7fdc/ssh-known-hosts-edpm-deployment/0.log" Jan 09 14:42:32 crc kubenswrapper[4919]: I0109 14:42:32.879230 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5f95dfdc65-kz6rq_e09e5f52-5a74-4a7c-bd84-079835a21fec/proxy-server/0.log" Jan 09 14:42:32 crc kubenswrapper[4919]: I0109 14:42:32.997079 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-7lmg7_b5cc6e72-8cde-4ad1-bab5-0e3c20c11cb7/swift-ring-rebalance/0.log" Jan 09 14:42:33 crc kubenswrapper[4919]: I0109 14:42:33.003628 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5f95dfdc65-kz6rq_e09e5f52-5a74-4a7c-bd84-079835a21fec/proxy-httpd/0.log" Jan 09 14:42:33 crc kubenswrapper[4919]: I0109 14:42:33.164380 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f55583f6-0518-4977-89a9-e4f12b0eae89/account-auditor/0.log" Jan 09 14:42:33 crc kubenswrapper[4919]: I0109 14:42:33.263817 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f55583f6-0518-4977-89a9-e4f12b0eae89/account-replicator/0.log" Jan 09 14:42:33 crc kubenswrapper[4919]: I0109 14:42:33.278752 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f55583f6-0518-4977-89a9-e4f12b0eae89/account-reaper/0.log" Jan 09 14:42:33 crc kubenswrapper[4919]: I0109 14:42:33.362502 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f55583f6-0518-4977-89a9-e4f12b0eae89/account-server/0.log" Jan 09 14:42:33 crc kubenswrapper[4919]: I0109 14:42:33.378042 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f55583f6-0518-4977-89a9-e4f12b0eae89/container-auditor/0.log" Jan 09 14:42:33 crc kubenswrapper[4919]: I0109 14:42:33.986061 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f55583f6-0518-4977-89a9-e4f12b0eae89/container-server/0.log" Jan 09 14:42:34 crc kubenswrapper[4919]: I0109 14:42:34.029095 4919 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_f55583f6-0518-4977-89a9-e4f12b0eae89/container-updater/0.log" Jan 09 14:42:34 crc kubenswrapper[4919]: I0109 14:42:34.032333 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f55583f6-0518-4977-89a9-e4f12b0eae89/container-replicator/0.log" Jan 09 14:42:34 crc kubenswrapper[4919]: I0109 14:42:34.037011 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f55583f6-0518-4977-89a9-e4f12b0eae89/object-auditor/0.log" Jan 09 14:42:34 crc kubenswrapper[4919]: I0109 14:42:34.174438 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f55583f6-0518-4977-89a9-e4f12b0eae89/object-expirer/0.log" Jan 09 14:42:34 crc kubenswrapper[4919]: I0109 14:42:34.225511 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f55583f6-0518-4977-89a9-e4f12b0eae89/object-server/0.log" Jan 09 14:42:34 crc kubenswrapper[4919]: I0109 14:42:34.247330 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f55583f6-0518-4977-89a9-e4f12b0eae89/object-updater/0.log" Jan 09 14:42:34 crc kubenswrapper[4919]: I0109 14:42:34.308096 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f55583f6-0518-4977-89a9-e4f12b0eae89/object-replicator/0.log" Jan 09 14:42:34 crc kubenswrapper[4919]: I0109 14:42:34.374470 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f55583f6-0518-4977-89a9-e4f12b0eae89/rsync/0.log" Jan 09 14:42:34 crc kubenswrapper[4919]: I0109 14:42:34.456640 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f55583f6-0518-4977-89a9-e4f12b0eae89/swift-recon-cron/0.log" Jan 09 14:42:34 crc kubenswrapper[4919]: I0109 14:42:34.594907 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-k6ck6_1397ace9-1e0e-4acc-b043-3e1f13244746/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 14:42:34 crc kubenswrapper[4919]: I0109 14:42:34.718076 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_f53c17d7-be4d-4bcf-aea4-2617abf3d9ea/tempest-tests-tempest-tests-runner/0.log" Jan 09 14:42:34 crc kubenswrapper[4919]: I0109 14:42:34.845866 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_77aedacd-c1c9-4ee5-836d-69b929d4f842/test-operator-logs-container/0.log" Jan 09 14:42:34 crc kubenswrapper[4919]: I0109 14:42:34.938243 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-vv6m5_89e73a14-acf2-4c6b-94de-a8857e0cf22d/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 14:42:44 crc kubenswrapper[4919]: I0109 14:42:44.556083 4919 scope.go:117] "RemoveContainer" containerID="6ac756de5c4d0ca7cae34f41a9bbb1d055acfa77953dac0f721f46d3e5f36d8e" Jan 09 14:42:44 crc kubenswrapper[4919]: I0109 14:42:44.588802 4919 scope.go:117] "RemoveContainer" containerID="ddb0a494bc16bb0c71ba5f0f8a0f5a75f9e330368e8b1d481829b29ca3a8bb4a" Jan 09 14:42:44 crc kubenswrapper[4919]: I0109 14:42:44.631801 4919 scope.go:117] "RemoveContainer" containerID="495dcb7ab02b946b8806fb6d39026339b508e6dbfa43d18bf0b3fcd50715cd98" Jan 09 14:42:44 crc kubenswrapper[4919]: I0109 14:42:44.693894 4919 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_memcached-0_5d815ce9-0ae0-4cd4-a6e0-80dd85a1fbe8/memcached/0.log" Jan 09 14:42:51 crc kubenswrapper[4919]: I0109 14:42:51.247408 4919 patch_prober.go:28] interesting pod/machine-config-daemon-9m5lv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 14:42:51 crc kubenswrapper[4919]: I0109 14:42:51.248006 4919 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 14:42:51 crc kubenswrapper[4919]: I0109 14:42:51.248067 4919 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" Jan 09 14:42:51 crc kubenswrapper[4919]: I0109 14:42:51.248880 4919 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bddb05f06c401240164224cc53a985e43db02341d9515ee6222c775016a87e45"} pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 09 14:42:51 crc kubenswrapper[4919]: I0109 14:42:51.248936 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerName="machine-config-daemon" containerID="cri-o://bddb05f06c401240164224cc53a985e43db02341d9515ee6222c775016a87e45" gracePeriod=600 Jan 09 14:42:51 crc kubenswrapper[4919]: E0109 14:42:51.367554 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:42:52 crc kubenswrapper[4919]: I0109 14:42:52.311152 4919 generic.go:334] "Generic (PLEG): container finished" podID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" containerID="bddb05f06c401240164224cc53a985e43db02341d9515ee6222c775016a87e45" exitCode=0 Jan 09 14:42:52 crc kubenswrapper[4919]: I0109 14:42:52.311568 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" event={"ID":"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c","Type":"ContainerDied","Data":"bddb05f06c401240164224cc53a985e43db02341d9515ee6222c775016a87e45"} Jan 09 14:42:52 crc kubenswrapper[4919]: I0109 14:42:52.311632 4919 scope.go:117] "RemoveContainer" containerID="456c10831c34b2c9f72f13b4cefc21b45edbed334ca26a0379edf4ef17a9749a" Jan 09 14:42:52 crc kubenswrapper[4919]: I0109 14:42:52.312800 4919 scope.go:117] "RemoveContainer" containerID="bddb05f06c401240164224cc53a985e43db02341d9515ee6222c775016a87e45" Jan 09 14:42:52 crc kubenswrapper[4919]: E0109 14:42:52.313352 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:43:00 crc kubenswrapper[4919]: I0109 14:43:00.822353 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6b8e21ee89ba424461ec6bbff87d7a975c12e9376e36d6291f2db7c043jx86n_b32f9373-7a38-42ed-8071-92865685e246/util/0.log" Jan 09 14:43:00 crc kubenswrapper[4919]: I0109 14:43:00.965828 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6b8e21ee89ba424461ec6bbff87d7a975c12e9376e36d6291f2db7c043jx86n_b32f9373-7a38-42ed-8071-92865685e246/util/0.log" Jan 09 14:43:01 crc kubenswrapper[4919]: I0109 14:43:01.038253 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6b8e21ee89ba424461ec6bbff87d7a975c12e9376e36d6291f2db7c043jx86n_b32f9373-7a38-42ed-8071-92865685e246/pull/0.log" Jan 09 14:43:01 crc kubenswrapper[4919]: I0109 14:43:01.047579 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6b8e21ee89ba424461ec6bbff87d7a975c12e9376e36d6291f2db7c043jx86n_b32f9373-7a38-42ed-8071-92865685e246/pull/0.log" Jan 09 14:43:01 crc kubenswrapper[4919]: I0109 14:43:01.186788 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6b8e21ee89ba424461ec6bbff87d7a975c12e9376e36d6291f2db7c043jx86n_b32f9373-7a38-42ed-8071-92865685e246/pull/0.log" Jan 09 14:43:01 crc kubenswrapper[4919]: I0109 14:43:01.270604 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6b8e21ee89ba424461ec6bbff87d7a975c12e9376e36d6291f2db7c043jx86n_b32f9373-7a38-42ed-8071-92865685e246/util/0.log" Jan 09 14:43:01 crc kubenswrapper[4919]: I0109 14:43:01.273286 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6b8e21ee89ba424461ec6bbff87d7a975c12e9376e36d6291f2db7c043jx86n_b32f9373-7a38-42ed-8071-92865685e246/extract/0.log" Jan 09 14:43:01 crc kubenswrapper[4919]: I0109 14:43:01.746438 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-78979fc445-m56bk_276f41de-c875-40be-816a-84eb02212fda/manager/0.log" Jan 09 14:43:01 crc kubenswrapper[4919]: I0109 14:43:01.747886 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-f6f74d6db-h6cp9_d0081380-9d2e-40bb-8cc9-f124d4fbfd25/manager/0.log" Jan 09 14:43:01 crc kubenswrapper[4919]: I0109 14:43:01.897999 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-66f8b87655-wxt2z_b46937ef-2f83-4864-b0d4-5464ed82e1b8/manager/0.log" Jan 09 14:43:02 crc kubenswrapper[4919]: I0109 14:43:02.030644 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7b549fc966-s46b7_7716ced4-dfb9-4a5c-936f-65edbf78f5dd/manager/0.log" Jan 09 14:43:02 crc kubenswrapper[4919]: I0109 14:43:02.117802 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-658dd65b86-vvsj9_7635e70a-4259-4c43-91b7-eae6fc0d3c12/manager/0.log" Jan 09 14:43:02 crc kubenswrapper[4919]: I0109 14:43:02.210951 4919 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-7f5ddd8d7b-f2drg_60feaa4f-ca73-4e59-a85f-c17132f8f708/manager/0.log" Jan 09 14:43:02 crc kubenswrapper[4919]: I0109 14:43:02.461795 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-f99f54bc8-4r7j8_4d08a973-3a9e-4098-95fd-d314d9f4e1af/manager/0.log" Jan 09 14:43:02 crc kubenswrapper[4919]: I0109 14:43:02.536026 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-6d99759cf-6s6wp_af1be546-436f-43ef-b748-22860362f61e/manager/0.log" Jan 09 14:43:02 crc kubenswrapper[4919]: I0109 14:43:02.686764 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-568985c78-r5j45_33efa14f-00b9-49b4-bc2a-5c0c13d60613/manager/0.log" Jan 09 14:43:02 crc kubenswrapper[4919]: I0109 14:43:02.697633 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-598945d5b8-cd2dq_53cc8efc-85ec-4ddf-82c5-c1db01fe8120/manager/0.log" Jan 09 14:43:02 crc kubenswrapper[4919]: I0109 14:43:02.918696 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-7b88bfc995-9bn9t_37ea4d3a-1d7d-47b2-8eee-1a7601c2de24/manager/0.log" Jan 09 14:43:03 crc kubenswrapper[4919]: I0109 14:43:03.009054 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7cd87b778f-jl5xm_55fe5bfd-cc48-498b-88f7-789a3048a743/manager/0.log" Jan 09 14:43:03 crc kubenswrapper[4919]: I0109 14:43:03.578249 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-68c649d9d-4ppq5_2bf404b6-0f77-4a02-a45a-ad46980755cb/manager/0.log" Jan 09 14:43:03 crc kubenswrapper[4919]: I0109 14:43:03.601059 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5fbbf8b6cc-jl878_19ebcfcf-3a6a-4c2c-ab15-2239e08bca09/manager/0.log" Jan 09 14:43:03 crc kubenswrapper[4919]: I0109 14:43:03.759064 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-75f6ff484-ll94k_488f8708-4c49-429f-9697-a00b8fadd486/manager/0.log" Jan 09 14:43:04 crc kubenswrapper[4919]: I0109 14:43:04.190397 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-6954755664-nmm8h_2ebbd42e-c3b8-4e1c-b4ee-bf9316669667/operator/0.log" Jan 09 14:43:04 crc kubenswrapper[4919]: I0109 14:43:04.216683 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-6rw4t_937fd694-383a-4377-a061-2c3711482e98/registry-server/0.log" Jan 09 14:43:04 crc kubenswrapper[4919]: I0109 14:43:04.472731 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-bf6d4f946-wnwmg_7c5b2e5b-6474-46f3-861b-aba8d47c714b/manager/0.log" Jan 09 14:43:04 crc kubenswrapper[4919]: I0109 14:43:04.679850 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-9b6f8f78c-8kjrk_58f271ce-d537-4588-ba66-53f08136ee13/manager/0.log" Jan 09 14:43:04 crc kubenswrapper[4919]: I0109 14:43:04.730464 4919 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-qg5n7_e9f24ed0-e850-4906-901d-b23777cf500f/operator/0.log" Jan 09 14:43:04 crc kubenswrapper[4919]: I0109 14:43:04.752632 4919 scope.go:117] "RemoveContainer" containerID="bddb05f06c401240164224cc53a985e43db02341d9515ee6222c775016a87e45" Jan 09 14:43:04 crc kubenswrapper[4919]: E0109 14:43:04.753669 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:43:04 crc kubenswrapper[4919]: I0109 14:43:04.867743 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-bb586bbf4-47s64_782f359d-9941-4528-851a-4db3673cb439/manager/0.log" Jan 09 14:43:05 crc kubenswrapper[4919]: I0109 14:43:05.084056 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5fb94578dd-p4xfn_e77d7646-4198-42f3-ac22-f0974b18a0ab/manager/0.log" Jan 09 14:43:05 crc kubenswrapper[4919]: I0109 14:43:05.109733 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-6c866cfdcb-84x8m_7c1ac56d-4f45-4102-8336-2cec59c44d9d/manager/0.log" Jan 09 14:43:05 crc kubenswrapper[4919]: I0109 14:43:05.111064 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-68d988df55-wzww9_5bd72cd8-70f2-45ef-a451-8468e79eaca9/manager/0.log" Jan 09 14:43:05 crc kubenswrapper[4919]: I0109 14:43:05.258064 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-9dbdf6486-nk5sx_4f5bfa64-2b7e-4b30-aedc-56cd44f47032/manager/0.log" Jan 09 14:43:17 crc kubenswrapper[4919]: I0109 14:43:17.752578 4919 scope.go:117] "RemoveContainer" containerID="bddb05f06c401240164224cc53a985e43db02341d9515ee6222c775016a87e45" Jan 09 14:43:17 crc kubenswrapper[4919]: E0109 14:43:17.753364 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:43:24 crc kubenswrapper[4919]: I0109 14:43:24.065753 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-twpss_5b27b30e-8a1e-4c12-ad5a-530c640bf23d/control-plane-machine-set-operator/0.log" Jan 09 14:43:24 crc kubenswrapper[4919]: I0109 14:43:24.245537 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-7lrzs_73189faa-e786-4c46-b23e-c9e58d6b0490/kube-rbac-proxy/0.log" Jan 09 14:43:24 crc kubenswrapper[4919]: I0109 14:43:24.259262 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-7lrzs_73189faa-e786-4c46-b23e-c9e58d6b0490/machine-api-operator/0.log" Jan 09 14:43:29 crc 
kubenswrapper[4919]: I0109 14:43:29.752472 4919 scope.go:117] "RemoveContainer" containerID="bddb05f06c401240164224cc53a985e43db02341d9515ee6222c775016a87e45" Jan 09 14:43:29 crc kubenswrapper[4919]: E0109 14:43:29.753183 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:43:37 crc kubenswrapper[4919]: I0109 14:43:37.067304 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-ptg84_64fd850e-9282-4070-8467-aa5b8c498787/cert-manager-controller/0.log" Jan 09 14:43:37 crc kubenswrapper[4919]: I0109 14:43:37.287247 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-29hn2_51952aec-f115-4d09-a7f4-56dcc9f6222c/cert-manager-cainjector/0.log" Jan 09 14:43:37 crc kubenswrapper[4919]: I0109 14:43:37.287848 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-pnfgs_6afdfa72-d547-4051-9c95-fd83fd88ff93/cert-manager-webhook/0.log" Jan 09 14:43:42 crc kubenswrapper[4919]: I0109 14:43:42.751623 4919 scope.go:117] "RemoveContainer" containerID="bddb05f06c401240164224cc53a985e43db02341d9515ee6222c775016a87e45" Jan 09 14:43:42 crc kubenswrapper[4919]: E0109 14:43:42.752499 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:43:50 crc kubenswrapper[4919]: I0109 14:43:50.971852 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6ff7998486-vh2fh_0964f707-3143-4f9c-a31c-ce8f14e1fd2f/nmstate-console-plugin/0.log" Jan 09 14:43:51 crc kubenswrapper[4919]: I0109 14:43:51.194453 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-9wzzm_9dd4fea4-6753-4012-a325-c7065f93a092/nmstate-handler/0.log" Jan 09 14:43:51 crc kubenswrapper[4919]: I0109 14:43:51.305422 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-hr7w5_91ddb4d0-422b-47f1-9279-fd2bef6bcd19/kube-rbac-proxy/0.log" Jan 09 14:43:51 crc kubenswrapper[4919]: I0109 14:43:51.312591 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-hr7w5_91ddb4d0-422b-47f1-9279-fd2bef6bcd19/nmstate-metrics/0.log" Jan 09 14:43:51 crc kubenswrapper[4919]: I0109 14:43:51.451417 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-6769fb99d-rqxgb_feaf998d-058f-4630-84eb-a1e5692b6c6b/nmstate-operator/0.log" Jan 09 14:43:51 crc kubenswrapper[4919]: I0109 14:43:51.497770 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-f8fb84555-v8957_5be7743f-eb29-453e-a4cb-58c25d8d24bd/nmstate-webhook/0.log" Jan 09 14:43:57 crc kubenswrapper[4919]: I0109 14:43:57.752519 4919 
scope.go:117] "RemoveContainer" containerID="bddb05f06c401240164224cc53a985e43db02341d9515ee6222c775016a87e45" Jan 09 14:43:57 crc kubenswrapper[4919]: E0109 14:43:57.753302 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:44:12 crc kubenswrapper[4919]: I0109 14:44:12.753169 4919 scope.go:117] "RemoveContainer" containerID="bddb05f06c401240164224cc53a985e43db02341d9515ee6222c775016a87e45" Jan 09 14:44:12 crc kubenswrapper[4919]: E0109 14:44:12.754341 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:44:18 crc kubenswrapper[4919]: I0109 14:44:18.849293 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-grs8k_256aa53e-2a76-437e-ac55-a8766f9e5c00/kube-rbac-proxy/0.log" Jan 09 14:44:19 crc kubenswrapper[4919]: I0109 14:44:19.033270 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-grs8k_256aa53e-2a76-437e-ac55-a8766f9e5c00/controller/0.log" Jan 09 14:44:19 crc kubenswrapper[4919]: I0109 14:44:19.222654 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fwdhc_27cc21f5-c63b-4678-a1f6-6be9c13f32fc/cp-frr-files/0.log" Jan 09 14:44:19 crc kubenswrapper[4919]: I0109 14:44:19.365771 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fwdhc_27cc21f5-c63b-4678-a1f6-6be9c13f32fc/cp-frr-files/0.log" Jan 09 14:44:19 crc kubenswrapper[4919]: I0109 14:44:19.404412 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fwdhc_27cc21f5-c63b-4678-a1f6-6be9c13f32fc/cp-reloader/0.log" Jan 09 14:44:19 crc kubenswrapper[4919]: I0109 14:44:19.413307 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fwdhc_27cc21f5-c63b-4678-a1f6-6be9c13f32fc/cp-metrics/0.log" Jan 09 14:44:19 crc kubenswrapper[4919]: I0109 14:44:19.464631 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fwdhc_27cc21f5-c63b-4678-a1f6-6be9c13f32fc/cp-reloader/0.log" Jan 09 14:44:19 crc kubenswrapper[4919]: I0109 14:44:19.589109 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fwdhc_27cc21f5-c63b-4678-a1f6-6be9c13f32fc/cp-metrics/0.log" Jan 09 14:44:19 crc kubenswrapper[4919]: I0109 14:44:19.638257 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fwdhc_27cc21f5-c63b-4678-a1f6-6be9c13f32fc/cp-frr-files/0.log" Jan 09 14:44:19 crc kubenswrapper[4919]: I0109 14:44:19.680495 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fwdhc_27cc21f5-c63b-4678-a1f6-6be9c13f32fc/cp-metrics/0.log" Jan 09 14:44:19 crc kubenswrapper[4919]: I0109 14:44:19.699225 4919 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-fwdhc_27cc21f5-c63b-4678-a1f6-6be9c13f32fc/cp-reloader/0.log" Jan 09 14:44:19 crc kubenswrapper[4919]: I0109 14:44:19.872852 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fwdhc_27cc21f5-c63b-4678-a1f6-6be9c13f32fc/cp-frr-files/0.log" Jan 09 14:44:19 crc kubenswrapper[4919]: I0109 14:44:19.892656 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fwdhc_27cc21f5-c63b-4678-a1f6-6be9c13f32fc/cp-metrics/0.log" Jan 09 14:44:19 crc kubenswrapper[4919]: I0109 14:44:19.942352 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fwdhc_27cc21f5-c63b-4678-a1f6-6be9c13f32fc/controller/0.log" Jan 09 14:44:19 crc kubenswrapper[4919]: I0109 14:44:19.955830 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fwdhc_27cc21f5-c63b-4678-a1f6-6be9c13f32fc/cp-reloader/0.log" Jan 09 14:44:20 crc kubenswrapper[4919]: I0109 14:44:20.263881 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fwdhc_27cc21f5-c63b-4678-a1f6-6be9c13f32fc/frr-metrics/0.log" Jan 09 14:44:20 crc kubenswrapper[4919]: I0109 14:44:20.309903 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fwdhc_27cc21f5-c63b-4678-a1f6-6be9c13f32fc/kube-rbac-proxy-frr/0.log" Jan 09 14:44:20 crc kubenswrapper[4919]: I0109 14:44:20.316576 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fwdhc_27cc21f5-c63b-4678-a1f6-6be9c13f32fc/kube-rbac-proxy/0.log" Jan 09 14:44:20 crc kubenswrapper[4919]: I0109 14:44:20.533350 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7784b6fcf-wt2zf_9b452f91-af7c-48e8-b137-3c39a355305a/frr-k8s-webhook-server/0.log" Jan 09 14:44:20 crc kubenswrapper[4919]: I0109 14:44:20.538476 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fwdhc_27cc21f5-c63b-4678-a1f6-6be9c13f32fc/reloader/0.log" Jan 09 14:44:20 crc kubenswrapper[4919]: I0109 14:44:20.809912 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5bdcf498b5-twbl9_df525dd0-f23f-4348-a4e0-4330e0d9ad91/manager/0.log" Jan 09 14:44:21 crc kubenswrapper[4919]: I0109 14:44:21.019723 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-56d5fdcf86-2jwkb_0cb9da00-2fea-4925-b3ef-c9438a2b5c18/webhook-server/0.log" Jan 09 14:44:21 crc kubenswrapper[4919]: I0109 14:44:21.066664 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-6kcvb_33ed1894-533c-4314-b01c-758a5c2eebf8/kube-rbac-proxy/0.log" Jan 09 14:44:21 crc kubenswrapper[4919]: I0109 14:44:21.629224 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fwdhc_27cc21f5-c63b-4678-a1f6-6be9c13f32fc/frr/0.log" Jan 09 14:44:21 crc kubenswrapper[4919]: I0109 14:44:21.667488 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-6kcvb_33ed1894-533c-4314-b01c-758a5c2eebf8/speaker/0.log" Jan 09 14:44:25 crc kubenswrapper[4919]: I0109 14:44:25.759292 4919 scope.go:117] "RemoveContainer" containerID="bddb05f06c401240164224cc53a985e43db02341d9515ee6222c775016a87e45" Jan 09 14:44:25 crc kubenswrapper[4919]: E0109 14:44:25.760319 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:44:36 crc kubenswrapper[4919]: I0109 14:44:36.153579 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4h92z5_ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492/util/0.log" Jan 09 14:44:36 crc kubenswrapper[4919]: I0109 14:44:36.415778 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4h92z5_ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492/pull/0.log" Jan 09 14:44:36 crc kubenswrapper[4919]: I0109 14:44:36.451993 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4h92z5_ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492/pull/0.log" Jan 09 14:44:36 crc kubenswrapper[4919]: I0109 14:44:36.471858 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4h92z5_ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492/util/0.log" Jan 09 14:44:36 crc kubenswrapper[4919]: I0109 14:44:36.587630 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4h92z5_ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492/util/0.log" Jan 09 14:44:36 crc kubenswrapper[4919]: I0109 14:44:36.591498 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4h92z5_ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492/pull/0.log" Jan 09 14:44:36 crc kubenswrapper[4919]: I0109 14:44:36.621520 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4h92z5_ce1e614f-1d6c-4d6e-9b2c-42d2fbbdf492/extract/0.log" Jan 09 14:44:36 crc kubenswrapper[4919]: I0109 14:44:36.799387 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5_b19ed9ae-a65d-4d84-ba74-e2055655c7b8/util/0.log" Jan 09 14:44:37 crc kubenswrapper[4919]: I0109 14:44:37.001765 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5_b19ed9ae-a65d-4d84-ba74-e2055655c7b8/util/0.log" Jan 09 14:44:37 crc kubenswrapper[4919]: I0109 14:44:37.032186 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5_b19ed9ae-a65d-4d84-ba74-e2055655c7b8/pull/0.log" Jan 09 14:44:37 crc kubenswrapper[4919]: I0109 14:44:37.087150 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5_b19ed9ae-a65d-4d84-ba74-e2055655c7b8/pull/0.log" Jan 09 14:44:37 crc kubenswrapper[4919]: I0109 14:44:37.220408 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5_b19ed9ae-a65d-4d84-ba74-e2055655c7b8/util/0.log" Jan 09 14:44:37 crc kubenswrapper[4919]: I0109 14:44:37.249667 4919 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5_b19ed9ae-a65d-4d84-ba74-e2055655c7b8/extract/0.log" Jan 09 14:44:37 crc kubenswrapper[4919]: I0109 14:44:37.284742 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8f99q5_b19ed9ae-a65d-4d84-ba74-e2055655c7b8/pull/0.log" Jan 09 14:44:37 crc kubenswrapper[4919]: I0109 14:44:37.623616 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9lhnw_92de8a52-6be3-4b9d-9f02-337282f2cc79/extract-utilities/0.log" Jan 09 14:44:37 crc kubenswrapper[4919]: I0109 14:44:37.796133 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9lhnw_92de8a52-6be3-4b9d-9f02-337282f2cc79/extract-utilities/0.log" Jan 09 14:44:37 crc kubenswrapper[4919]: I0109 14:44:37.824727 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9lhnw_92de8a52-6be3-4b9d-9f02-337282f2cc79/extract-content/0.log" Jan 09 14:44:37 crc kubenswrapper[4919]: I0109 14:44:37.829646 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9lhnw_92de8a52-6be3-4b9d-9f02-337282f2cc79/extract-content/0.log" Jan 09 14:44:38 crc kubenswrapper[4919]: I0109 14:44:38.061844 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9lhnw_92de8a52-6be3-4b9d-9f02-337282f2cc79/extract-utilities/0.log" Jan 09 14:44:38 crc kubenswrapper[4919]: I0109 14:44:38.110326 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9lhnw_92de8a52-6be3-4b9d-9f02-337282f2cc79/extract-content/0.log" Jan 09 14:44:38 crc kubenswrapper[4919]: I0109 14:44:38.328801 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nv246_678068f7-bf03-493b-85f3-b52db3ea6770/extract-utilities/0.log" Jan 09 14:44:38 crc kubenswrapper[4919]: I0109 14:44:38.550798 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nv246_678068f7-bf03-493b-85f3-b52db3ea6770/extract-content/0.log" Jan 09 14:44:38 crc kubenswrapper[4919]: I0109 14:44:38.575886 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nv246_678068f7-bf03-493b-85f3-b52db3ea6770/extract-utilities/0.log" Jan 09 14:44:38 crc kubenswrapper[4919]: I0109 14:44:38.622355 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nv246_678068f7-bf03-493b-85f3-b52db3ea6770/extract-content/0.log" Jan 09 14:44:38 crc kubenswrapper[4919]: I0109 14:44:38.685007 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9lhnw_92de8a52-6be3-4b9d-9f02-337282f2cc79/registry-server/0.log" Jan 09 14:44:38 crc kubenswrapper[4919]: I0109 14:44:38.751748 4919 scope.go:117] "RemoveContainer" containerID="bddb05f06c401240164224cc53a985e43db02341d9515ee6222c775016a87e45" Jan 09 14:44:38 crc kubenswrapper[4919]: E0109 14:44:38.752041 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:44:38 crc kubenswrapper[4919]: I0109 14:44:38.817318 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nv246_678068f7-bf03-493b-85f3-b52db3ea6770/extract-content/0.log" Jan 09 14:44:38 crc kubenswrapper[4919]: I0109 14:44:38.818159 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nv246_678068f7-bf03-493b-85f3-b52db3ea6770/extract-utilities/0.log" Jan 09 14:44:39 crc kubenswrapper[4919]: I0109 14:44:39.083928 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-46q7s_c1290e54-d4c8-4911-a121-762fffa39a66/marketplace-operator/0.log" Jan 09 14:44:39 crc kubenswrapper[4919]: I0109 14:44:39.291915 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nv246_678068f7-bf03-493b-85f3-b52db3ea6770/registry-server/0.log" Jan 09 14:44:39 crc kubenswrapper[4919]: I0109 14:44:39.297112 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-ktjzh_d97889ab-f1bb-4d3c-bf02-c037c00ae3e6/extract-utilities/0.log" Jan 09 14:44:39 crc kubenswrapper[4919]: I0109 14:44:39.471181 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-ktjzh_d97889ab-f1bb-4d3c-bf02-c037c00ae3e6/extract-content/0.log" Jan 09 14:44:39 crc kubenswrapper[4919]: I0109 14:44:39.490329 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-ktjzh_d97889ab-f1bb-4d3c-bf02-c037c00ae3e6/extract-utilities/0.log" Jan 09 14:44:39 crc kubenswrapper[4919]: I0109 14:44:39.525219 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-ktjzh_d97889ab-f1bb-4d3c-bf02-c037c00ae3e6/extract-content/0.log" Jan 09 14:44:39 crc kubenswrapper[4919]: I0109 14:44:39.655365 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-ktjzh_d97889ab-f1bb-4d3c-bf02-c037c00ae3e6/extract-utilities/0.log" Jan 09 14:44:39 crc kubenswrapper[4919]: I0109 14:44:39.706234 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-ktjzh_d97889ab-f1bb-4d3c-bf02-c037c00ae3e6/extract-content/0.log" Jan 09 14:44:39 crc kubenswrapper[4919]: I0109 14:44:39.869505 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zp794_03961396-0471-4105-a027-ac6ae244d150/extract-utilities/0.log" Jan 09 14:44:39 crc kubenswrapper[4919]: I0109 14:44:39.919873 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-ktjzh_d97889ab-f1bb-4d3c-bf02-c037c00ae3e6/registry-server/0.log" Jan 09 14:44:40 crc kubenswrapper[4919]: I0109 14:44:40.112264 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zp794_03961396-0471-4105-a027-ac6ae244d150/extract-content/0.log" Jan 09 14:44:40 crc kubenswrapper[4919]: I0109 14:44:40.113524 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zp794_03961396-0471-4105-a027-ac6ae244d150/extract-content/0.log" Jan 09 14:44:40 crc 
kubenswrapper[4919]: I0109 14:44:40.118972 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zp794_03961396-0471-4105-a027-ac6ae244d150/extract-utilities/0.log" Jan 09 14:44:40 crc kubenswrapper[4919]: I0109 14:44:40.255707 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zp794_03961396-0471-4105-a027-ac6ae244d150/extract-utilities/0.log" Jan 09 14:44:40 crc kubenswrapper[4919]: I0109 14:44:40.272772 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zp794_03961396-0471-4105-a027-ac6ae244d150/extract-content/0.log" Jan 09 14:44:40 crc kubenswrapper[4919]: I0109 14:44:40.876308 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zp794_03961396-0471-4105-a027-ac6ae244d150/registry-server/0.log" Jan 09 14:44:50 crc kubenswrapper[4919]: I0109 14:44:50.762440 4919 scope.go:117] "RemoveContainer" containerID="bddb05f06c401240164224cc53a985e43db02341d9515ee6222c775016a87e45" Jan 09 14:44:50 crc kubenswrapper[4919]: E0109 14:44:50.764139 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c" Jan 09 14:45:00 crc kubenswrapper[4919]: I0109 14:45:00.177440 4919 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29466165-smfwv"] Jan 09 14:45:00 crc kubenswrapper[4919]: E0109 14:45:00.178526 4919 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6d76415-2015-484e-96e2-151388085415" containerName="container-00" Jan 09 14:45:00 crc kubenswrapper[4919]: I0109 14:45:00.178540 4919 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6d76415-2015-484e-96e2-151388085415" containerName="container-00" Jan 09 14:45:00 crc kubenswrapper[4919]: I0109 14:45:00.178768 4919 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6d76415-2015-484e-96e2-151388085415" containerName="container-00" Jan 09 14:45:00 crc kubenswrapper[4919]: I0109 14:45:00.179606 4919 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29466165-smfwv" Jan 09 14:45:00 crc kubenswrapper[4919]: I0109 14:45:00.183204 4919 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 09 14:45:00 crc kubenswrapper[4919]: I0109 14:45:00.185536 4919 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 09 14:45:00 crc kubenswrapper[4919]: I0109 14:45:00.186344 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29466165-smfwv"] Jan 09 14:45:00 crc kubenswrapper[4919]: I0109 14:45:00.218512 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6eeaa3fa-0433-43f5-a8c9-c94423684a54-config-volume\") pod \"collect-profiles-29466165-smfwv\" (UID: \"6eeaa3fa-0433-43f5-a8c9-c94423684a54\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466165-smfwv" Jan 09 14:45:00 crc kubenswrapper[4919]: I0109 14:45:00.218583 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6eeaa3fa-0433-43f5-a8c9-c94423684a54-secret-volume\") pod \"collect-profiles-29466165-smfwv\" (UID: \"6eeaa3fa-0433-43f5-a8c9-c94423684a54\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466165-smfwv" Jan 09 14:45:00 crc kubenswrapper[4919]: I0109 14:45:00.218640 4919 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t8wk\" (UniqueName: \"kubernetes.io/projected/6eeaa3fa-0433-43f5-a8c9-c94423684a54-kube-api-access-4t8wk\") pod \"collect-profiles-29466165-smfwv\" (UID: \"6eeaa3fa-0433-43f5-a8c9-c94423684a54\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466165-smfwv" Jan 09 14:45:00 crc kubenswrapper[4919]: I0109 14:45:00.319839 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6eeaa3fa-0433-43f5-a8c9-c94423684a54-config-volume\") pod \"collect-profiles-29466165-smfwv\" (UID: \"6eeaa3fa-0433-43f5-a8c9-c94423684a54\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466165-smfwv" Jan 09 14:45:00 crc kubenswrapper[4919]: I0109 14:45:00.320088 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6eeaa3fa-0433-43f5-a8c9-c94423684a54-secret-volume\") pod \"collect-profiles-29466165-smfwv\" (UID: \"6eeaa3fa-0433-43f5-a8c9-c94423684a54\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466165-smfwv" Jan 09 14:45:00 crc kubenswrapper[4919]: I0109 14:45:00.320140 4919 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4t8wk\" (UniqueName: \"kubernetes.io/projected/6eeaa3fa-0433-43f5-a8c9-c94423684a54-kube-api-access-4t8wk\") pod \"collect-profiles-29466165-smfwv\" (UID: \"6eeaa3fa-0433-43f5-a8c9-c94423684a54\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466165-smfwv" Jan 09 14:45:00 crc kubenswrapper[4919]: I0109 14:45:00.320819 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6eeaa3fa-0433-43f5-a8c9-c94423684a54-config-volume\") pod 
\"collect-profiles-29466165-smfwv\" (UID: \"6eeaa3fa-0433-43f5-a8c9-c94423684a54\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466165-smfwv" Jan 09 14:45:00 crc kubenswrapper[4919]: I0109 14:45:00.333308 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6eeaa3fa-0433-43f5-a8c9-c94423684a54-secret-volume\") pod \"collect-profiles-29466165-smfwv\" (UID: \"6eeaa3fa-0433-43f5-a8c9-c94423684a54\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466165-smfwv" Jan 09 14:45:00 crc kubenswrapper[4919]: I0109 14:45:00.344130 4919 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4t8wk\" (UniqueName: \"kubernetes.io/projected/6eeaa3fa-0433-43f5-a8c9-c94423684a54-kube-api-access-4t8wk\") pod \"collect-profiles-29466165-smfwv\" (UID: \"6eeaa3fa-0433-43f5-a8c9-c94423684a54\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466165-smfwv" Jan 09 14:45:00 crc kubenswrapper[4919]: I0109 14:45:00.498751 4919 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29466165-smfwv" Jan 09 14:45:01 crc kubenswrapper[4919]: W0109 14:45:01.013477 4919 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6eeaa3fa_0433_43f5_a8c9_c94423684a54.slice/crio-b055e486e4bf9b95199bf71d6329bb5f00261ddd836379330dea34efb6f09eb8 WatchSource:0}: Error finding container b055e486e4bf9b95199bf71d6329bb5f00261ddd836379330dea34efb6f09eb8: Status 404 returned error can't find the container with id b055e486e4bf9b95199bf71d6329bb5f00261ddd836379330dea34efb6f09eb8 Jan 09 14:45:01 crc kubenswrapper[4919]: I0109 14:45:01.023247 4919 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29466165-smfwv"] Jan 09 14:45:01 crc kubenswrapper[4919]: I0109 14:45:01.465961 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29466165-smfwv" event={"ID":"6eeaa3fa-0433-43f5-a8c9-c94423684a54","Type":"ContainerStarted","Data":"78563b0554ecf531af4644c2e9382310d519c872b38ab67d9c89c4d19798e304"} Jan 09 14:45:01 crc kubenswrapper[4919]: I0109 14:45:01.466012 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29466165-smfwv" event={"ID":"6eeaa3fa-0433-43f5-a8c9-c94423684a54","Type":"ContainerStarted","Data":"b055e486e4bf9b95199bf71d6329bb5f00261ddd836379330dea34efb6f09eb8"} Jan 09 14:45:01 crc kubenswrapper[4919]: I0109 14:45:01.494702 4919 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29466165-smfwv" podStartSLOduration=1.4946797219999999 podStartE2EDuration="1.494679722s" podCreationTimestamp="2026-01-09 14:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 14:45:01.489763199 +0000 UTC m=+4481.037602649" watchObservedRunningTime="2026-01-09 14:45:01.494679722 +0000 UTC m=+4481.042519172" Jan 09 14:45:02 crc kubenswrapper[4919]: I0109 14:45:02.481967 4919 generic.go:334] "Generic (PLEG): container finished" podID="6eeaa3fa-0433-43f5-a8c9-c94423684a54" containerID="78563b0554ecf531af4644c2e9382310d519c872b38ab67d9c89c4d19798e304" exitCode=0 Jan 09 14:45:02 crc kubenswrapper[4919]: I0109 
14:45:02.482051 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29466165-smfwv" event={"ID":"6eeaa3fa-0433-43f5-a8c9-c94423684a54","Type":"ContainerDied","Data":"78563b0554ecf531af4644c2e9382310d519c872b38ab67d9c89c4d19798e304"}
Jan 09 14:45:04 crc kubenswrapper[4919]: I0109 14:45:04.096358 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29466165-smfwv"
Jan 09 14:45:04 crc kubenswrapper[4919]: I0109 14:45:04.201760 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4t8wk\" (UniqueName: \"kubernetes.io/projected/6eeaa3fa-0433-43f5-a8c9-c94423684a54-kube-api-access-4t8wk\") pod \"6eeaa3fa-0433-43f5-a8c9-c94423684a54\" (UID: \"6eeaa3fa-0433-43f5-a8c9-c94423684a54\") "
Jan 09 14:45:04 crc kubenswrapper[4919]: I0109 14:45:04.202113 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6eeaa3fa-0433-43f5-a8c9-c94423684a54-config-volume\") pod \"6eeaa3fa-0433-43f5-a8c9-c94423684a54\" (UID: \"6eeaa3fa-0433-43f5-a8c9-c94423684a54\") "
Jan 09 14:45:04 crc kubenswrapper[4919]: I0109 14:45:04.202155 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6eeaa3fa-0433-43f5-a8c9-c94423684a54-secret-volume\") pod \"6eeaa3fa-0433-43f5-a8c9-c94423684a54\" (UID: \"6eeaa3fa-0433-43f5-a8c9-c94423684a54\") "
Jan 09 14:45:04 crc kubenswrapper[4919]: I0109 14:45:04.203427 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6eeaa3fa-0433-43f5-a8c9-c94423684a54-config-volume" (OuterVolumeSpecName: "config-volume") pod "6eeaa3fa-0433-43f5-a8c9-c94423684a54" (UID: "6eeaa3fa-0433-43f5-a8c9-c94423684a54"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 14:45:04 crc kubenswrapper[4919]: I0109 14:45:04.209323 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eeaa3fa-0433-43f5-a8c9-c94423684a54-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6eeaa3fa-0433-43f5-a8c9-c94423684a54" (UID: "6eeaa3fa-0433-43f5-a8c9-c94423684a54"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 14:45:04 crc kubenswrapper[4919]: I0109 14:45:04.209389 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6eeaa3fa-0433-43f5-a8c9-c94423684a54-kube-api-access-4t8wk" (OuterVolumeSpecName: "kube-api-access-4t8wk") pod "6eeaa3fa-0433-43f5-a8c9-c94423684a54" (UID: "6eeaa3fa-0433-43f5-a8c9-c94423684a54"). InnerVolumeSpecName "kube-api-access-4t8wk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 14:45:04 crc kubenswrapper[4919]: I0109 14:45:04.304383 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4t8wk\" (UniqueName: \"kubernetes.io/projected/6eeaa3fa-0433-43f5-a8c9-c94423684a54-kube-api-access-4t8wk\") on node \"crc\" DevicePath \"\""
Jan 09 14:45:04 crc kubenswrapper[4919]: I0109 14:45:04.304419 4919 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6eeaa3fa-0433-43f5-a8c9-c94423684a54-config-volume\") on node \"crc\" DevicePath \"\""
Jan 09 14:45:04 crc kubenswrapper[4919]: I0109 14:45:04.304428 4919 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6eeaa3fa-0433-43f5-a8c9-c94423684a54-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 09 14:45:04 crc kubenswrapper[4919]: I0109 14:45:04.501450 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29466165-smfwv" event={"ID":"6eeaa3fa-0433-43f5-a8c9-c94423684a54","Type":"ContainerDied","Data":"b055e486e4bf9b95199bf71d6329bb5f00261ddd836379330dea34efb6f09eb8"}
Jan 09 14:45:04 crc kubenswrapper[4919]: I0109 14:45:04.501491 4919 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b055e486e4bf9b95199bf71d6329bb5f00261ddd836379330dea34efb6f09eb8"
Jan 09 14:45:04 crc kubenswrapper[4919]: I0109 14:45:04.501505 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29466165-smfwv"
Jan 09 14:45:04 crc kubenswrapper[4919]: I0109 14:45:04.578976 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29466120-8w44k"]
Jan 09 14:45:04 crc kubenswrapper[4919]: I0109 14:45:04.598331 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29466120-8w44k"]
Jan 09 14:45:04 crc kubenswrapper[4919]: I0109 14:45:04.752009 4919 scope.go:117] "RemoveContainer" containerID="bddb05f06c401240164224cc53a985e43db02341d9515ee6222c775016a87e45"
Jan 09 14:45:04 crc kubenswrapper[4919]: E0109 14:45:04.752335 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c"
Jan 09 14:45:04 crc kubenswrapper[4919]: I0109 14:45:04.763769 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e33f5fbd-40cf-4172-9bc7-013d8f2aecac" path="/var/lib/kubelet/pods/e33f5fbd-40cf-4172-9bc7-013d8f2aecac/volumes"
Jan 09 14:45:15 crc kubenswrapper[4919]: I0109 14:45:15.753290 4919 scope.go:117] "RemoveContainer" containerID="bddb05f06c401240164224cc53a985e43db02341d9515ee6222c775016a87e45"
Jan 09 14:45:15 crc kubenswrapper[4919]: E0109 14:45:15.754246 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c"
Jan 09 14:45:27 crc kubenswrapper[4919]: I0109 14:45:27.751871 4919 scope.go:117] "RemoveContainer" containerID="bddb05f06c401240164224cc53a985e43db02341d9515ee6222c775016a87e45"
Jan 09 14:45:27 crc kubenswrapper[4919]: E0109 14:45:27.752644 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c"
Jan 09 14:45:38 crc kubenswrapper[4919]: I0109 14:45:38.752947 4919 scope.go:117] "RemoveContainer" containerID="bddb05f06c401240164224cc53a985e43db02341d9515ee6222c775016a87e45"
Jan 09 14:45:38 crc kubenswrapper[4919]: E0109 14:45:38.753725 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c"
Jan 09 14:45:44 crc kubenswrapper[4919]: I0109 14:45:44.777682 4919 scope.go:117] "RemoveContainer" containerID="cb7d670e2bd79020760b12cbbbdf3b223680cc4d93b68c85d052721a667cdc8c"
Jan 09 14:45:49 crc kubenswrapper[4919]: I0109 14:45:49.752184 4919 scope.go:117] "RemoveContainer" containerID="bddb05f06c401240164224cc53a985e43db02341d9515ee6222c775016a87e45"
Jan 09 14:45:49 crc kubenswrapper[4919]: E0109 14:45:49.753086 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c"
Jan 09 14:46:04 crc kubenswrapper[4919]: I0109 14:46:04.752273 4919 scope.go:117] "RemoveContainer" containerID="bddb05f06c401240164224cc53a985e43db02341d9515ee6222c775016a87e45"
Jan 09 14:46:04 crc kubenswrapper[4919]: E0109 14:46:04.752927 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c"
Jan 09 14:46:17 crc kubenswrapper[4919]: I0109 14:46:17.752864 4919 scope.go:117] "RemoveContainer" containerID="bddb05f06c401240164224cc53a985e43db02341d9515ee6222c775016a87e45"
Jan 09 14:46:17 crc kubenswrapper[4919]: E0109 14:46:17.753772 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c"
Jan 09 14:46:31 crc kubenswrapper[4919]: I0109 14:46:31.752847 4919 scope.go:117] "RemoveContainer" containerID="bddb05f06c401240164224cc53a985e43db02341d9515ee6222c775016a87e45"
Jan 09 14:46:31 crc kubenswrapper[4919]: E0109 14:46:31.753530 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c"
Jan 09 14:46:34 crc kubenswrapper[4919]: I0109 14:46:34.340107 4919 generic.go:334] "Generic (PLEG): container finished" podID="bf59c3da-2238-418d-ae83-1c36ed768e3b" containerID="252b0666425dacc2dffbf483c36cc74a69b4afba82c61acd0e3952c4fe376983" exitCode=0
Jan 09 14:46:34 crc kubenswrapper[4919]: I0109 14:46:34.340298 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rrd99/must-gather-b5l5g" event={"ID":"bf59c3da-2238-418d-ae83-1c36ed768e3b","Type":"ContainerDied","Data":"252b0666425dacc2dffbf483c36cc74a69b4afba82c61acd0e3952c4fe376983"}
Jan 09 14:46:34 crc kubenswrapper[4919]: I0109 14:46:34.340878 4919 scope.go:117] "RemoveContainer" containerID="252b0666425dacc2dffbf483c36cc74a69b4afba82c61acd0e3952c4fe376983"
Jan 09 14:46:35 crc kubenswrapper[4919]: I0109 14:46:35.138181 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rrd99_must-gather-b5l5g_bf59c3da-2238-418d-ae83-1c36ed768e3b/gather/0.log"
Jan 09 14:46:42 crc kubenswrapper[4919]: I0109 14:46:42.751371 4919 scope.go:117] "RemoveContainer" containerID="bddb05f06c401240164224cc53a985e43db02341d9515ee6222c775016a87e45"
Jan 09 14:46:42 crc kubenswrapper[4919]: E0109 14:46:42.752087 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c"
Jan 09 14:46:46 crc kubenswrapper[4919]: I0109 14:46:46.236945 4919 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rrd99/must-gather-b5l5g"]
Jan 09 14:46:46 crc kubenswrapper[4919]: I0109 14:46:46.237792 4919 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-rrd99/must-gather-b5l5g" podUID="bf59c3da-2238-418d-ae83-1c36ed768e3b" containerName="copy" containerID="cri-o://2540af60afc80c314bbfe6d15513ff1c8fc0a7dd8a4b75ceb6a62cc339f2dbf4" gracePeriod=2
Jan 09 14:46:46 crc kubenswrapper[4919]: I0109 14:46:46.248783 4919 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rrd99/must-gather-b5l5g"]
Jan 09 14:46:46 crc kubenswrapper[4919]: I0109 14:46:46.443737 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rrd99_must-gather-b5l5g_bf59c3da-2238-418d-ae83-1c36ed768e3b/copy/0.log"
Jan 09 14:46:46 crc kubenswrapper[4919]: I0109 14:46:46.444427 4919 generic.go:334] "Generic (PLEG): container finished" podID="bf59c3da-2238-418d-ae83-1c36ed768e3b" containerID="2540af60afc80c314bbfe6d15513ff1c8fc0a7dd8a4b75ceb6a62cc339f2dbf4" exitCode=143
Jan 09 14:46:47 crc kubenswrapper[4919]: I0109 14:46:47.207160 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rrd99_must-gather-b5l5g_bf59c3da-2238-418d-ae83-1c36ed768e3b/copy/0.log"
Jan 09 14:46:47 crc kubenswrapper[4919]: I0109 14:46:47.207944 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rrd99/must-gather-b5l5g"
Jan 09 14:46:47 crc kubenswrapper[4919]: I0109 14:46:47.373866 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bf59c3da-2238-418d-ae83-1c36ed768e3b-must-gather-output\") pod \"bf59c3da-2238-418d-ae83-1c36ed768e3b\" (UID: \"bf59c3da-2238-418d-ae83-1c36ed768e3b\") "
Jan 09 14:46:47 crc kubenswrapper[4919]: I0109 14:46:47.374105 4919 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6mp6\" (UniqueName: \"kubernetes.io/projected/bf59c3da-2238-418d-ae83-1c36ed768e3b-kube-api-access-h6mp6\") pod \"bf59c3da-2238-418d-ae83-1c36ed768e3b\" (UID: \"bf59c3da-2238-418d-ae83-1c36ed768e3b\") "
Jan 09 14:46:47 crc kubenswrapper[4919]: I0109 14:46:47.379880 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf59c3da-2238-418d-ae83-1c36ed768e3b-kube-api-access-h6mp6" (OuterVolumeSpecName: "kube-api-access-h6mp6") pod "bf59c3da-2238-418d-ae83-1c36ed768e3b" (UID: "bf59c3da-2238-418d-ae83-1c36ed768e3b"). InnerVolumeSpecName "kube-api-access-h6mp6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 14:46:47 crc kubenswrapper[4919]: I0109 14:46:47.454041 4919 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rrd99_must-gather-b5l5g_bf59c3da-2238-418d-ae83-1c36ed768e3b/copy/0.log"
Jan 09 14:46:47 crc kubenswrapper[4919]: I0109 14:46:47.454449 4919 scope.go:117] "RemoveContainer" containerID="2540af60afc80c314bbfe6d15513ff1c8fc0a7dd8a4b75ceb6a62cc339f2dbf4"
Jan 09 14:46:47 crc kubenswrapper[4919]: I0109 14:46:47.454533 4919 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rrd99/must-gather-b5l5g"
Jan 09 14:46:47 crc kubenswrapper[4919]: I0109 14:46:47.476316 4919 scope.go:117] "RemoveContainer" containerID="252b0666425dacc2dffbf483c36cc74a69b4afba82c61acd0e3952c4fe376983"
Jan 09 14:46:47 crc kubenswrapper[4919]: I0109 14:46:47.476911 4919 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6mp6\" (UniqueName: \"kubernetes.io/projected/bf59c3da-2238-418d-ae83-1c36ed768e3b-kube-api-access-h6mp6\") on node \"crc\" DevicePath \"\""
Jan 09 14:46:47 crc kubenswrapper[4919]: I0109 14:46:47.539033 4919 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf59c3da-2238-418d-ae83-1c36ed768e3b-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "bf59c3da-2238-418d-ae83-1c36ed768e3b" (UID: "bf59c3da-2238-418d-ae83-1c36ed768e3b"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 14:46:47 crc kubenswrapper[4919]: I0109 14:46:47.578464 4919 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bf59c3da-2238-418d-ae83-1c36ed768e3b-must-gather-output\") on node \"crc\" DevicePath \"\""
Jan 09 14:46:48 crc kubenswrapper[4919]: I0109 14:46:48.764259 4919 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf59c3da-2238-418d-ae83-1c36ed768e3b" path="/var/lib/kubelet/pods/bf59c3da-2238-418d-ae83-1c36ed768e3b/volumes"
Jan 09 14:46:56 crc kubenswrapper[4919]: I0109 14:46:56.752311 4919 scope.go:117] "RemoveContainer" containerID="bddb05f06c401240164224cc53a985e43db02341d9515ee6222c775016a87e45"
Jan 09 14:46:56 crc kubenswrapper[4919]: E0109 14:46:56.753173 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c"
Jan 09 14:47:09 crc kubenswrapper[4919]: I0109 14:47:09.752018 4919 scope.go:117] "RemoveContainer" containerID="bddb05f06c401240164224cc53a985e43db02341d9515ee6222c775016a87e45"
Jan 09 14:47:09 crc kubenswrapper[4919]: E0109 14:47:09.752990 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c"
Jan 09 14:47:21 crc kubenswrapper[4919]: I0109 14:47:21.752278 4919 scope.go:117] "RemoveContainer" containerID="bddb05f06c401240164224cc53a985e43db02341d9515ee6222c775016a87e45"
Jan 09 14:47:21 crc kubenswrapper[4919]: E0109 14:47:21.753030 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c"
Jan 09 14:47:33 crc kubenswrapper[4919]: I0109 14:47:33.751682 4919 scope.go:117] "RemoveContainer" containerID="bddb05f06c401240164224cc53a985e43db02341d9515ee6222c775016a87e45"
Jan 09 14:47:33 crc kubenswrapper[4919]: E0109 14:47:33.752464 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c"
Jan 09 14:47:44 crc kubenswrapper[4919]: I0109 14:47:44.858462 4919 scope.go:117] "RemoveContainer" containerID="1e4f793d7c2f604f59c3a725a1c106a881557eb14d402368612c4c55fcce1390"
Jan 09 14:47:48 crc kubenswrapper[4919]: I0109 14:47:48.751312 4919 scope.go:117] "RemoveContainer" containerID="bddb05f06c401240164224cc53a985e43db02341d9515ee6222c775016a87e45"
Jan 09 14:47:48 crc kubenswrapper[4919]: E0109 14:47:48.752005 4919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9m5lv_openshift-machine-config-operator(b842de7d-a43c-4884-a3c4-c3ffa2eabc7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" podUID="b842de7d-a43c-4884-a3c4-c3ffa2eabc7c"
Jan 09 14:48:02 crc kubenswrapper[4919]: I0109 14:48:02.751933 4919 scope.go:117] "RemoveContainer" containerID="bddb05f06c401240164224cc53a985e43db02341d9515ee6222c775016a87e45"
Jan 09 14:48:03 crc kubenswrapper[4919]: I0109 14:48:03.111676 4919 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9m5lv" event={"ID":"b842de7d-a43c-4884-a3c4-c3ffa2eabc7c","Type":"ContainerStarted","Data":"0e678ee9aad767a35b56bf112aae5a05ad6bffba2c8a831e00eedd0d75310489"}
Jan 09 14:48:44 crc kubenswrapper[4919]: I0109 14:48:44.935253 4919 scope.go:117] "RemoveContainer" containerID="d88482858d88751ea2cba1d8d65b3c7b8091258381b386f81dbec331638bfba1"